[Binary content, not human-readable: tar (ustar) archive of the Zuul CI output directory var/home/core/zuul-output/, containing logs/kubelet.log.gz — a gzip-compressed kubelet log. Extract with `tar -x` and `gunzip` to view the log.]
i8hDXDj!&Ec :g)\at wQ1Pr[N&I#YQ0kmZ 2Yd:Az&)B9 1aLa죯i @sHd|=#$>lV{vPm6 V|4u'g i:;.\M7mU;.7 eKϪ7y oM8t!ӵ=mL4}\Q=th7Mݦ{IksK0݂=u ?O^ӯbDi5{s(7{'p_b3콟.̓a%Vt%leߌneƾu:ooZv~V-?fod՛`?ϾQr%{yb?ϓb{UZl/aMk}س_BO[jwj6~=b,RY.|3@]{M mg&n]ňy9r7+ns]|vb .B mm5[xpH~zAk6d"H#(1d52-%atL*- 8['ٞY8|}4) p.fc**%:r6&.\ AKhysT5/X\x)ZpMR,~VZӈF6ۿ o󔻲)LV|;i_zi٪p:ol%PON߻NH ?/[_i'q8y'i5gvٹnLi}&h%SLdd[=k{fXW;zv9{cgJA Id}r%kOih~JmIk>ȷ┓xچ=ۓngh;سo)7XՓ9C}t<߷t~dSed2{'sz[wfV~ۿzh>8{oz>oszW&x&\7]%y qHvl,`VSV3dlY2 *\Io Gy3\*˵\3l#y,0-ɛS:Rm yt=٠zˬTԻl3~UӥHrI#9R_zN[P*zw| B Z 9GEeţQJ}zic@N''S>H4.%.45 3x)1Xj '(ֲ84`l\SbQyH?Ba)H+4WFhI)|䥚TwFSl͗f(},LhBք'Ō5Xc:yedTe+RtIɝVKʭfjtL:3ld!!ĽH =rcJ Vu%ә Bձ#ұi "C>B9C[bfP=87^<*Grы/Fc0` XcdC7yѰL?gyW*XDڭ+q.DO6<%@D ]PJ<;r}ҹdBCZRsfc"oGaKD.ȳvAZG(%#z d*CV sk %OhC &z8 SDޖ{OH>d~Ȓ?O/kPRyjF+8^5RJOu17%uyr"~jZegmӮyhŠ,/ii &[5 - 54t ;-HûY_LoۤHcژ3nâah%7OmK=66_|v@?Τi}. 11%ZWg`PDl/#-YIG#=K^f%RZ#r:HLi@T1brd}٪l +e<%>lʭNɥ0 Ì7ܦ̥yo9@"w2GwV:^i/r0q6L[WbK|zs9{%kӱ.oιfäy!|X k> 5]w]O.>;"'t-!W6DTk.]BDK5\0jvi[H7wh>^;ݮG=5祖t2Yzno0'Yk:.}x\ aWdBN/nq6)1ןw֡˞[䛃ldfQZd?[| @ۜǏ|M6m@.ʆx<^|"0r \Rњ$P+ewG>6`Jd XJk&/@ xƽ r]}gx{L:1cIm9bBA=TF *ԫPB *ԫPB ^#SR Ѿ4E\^JA"Q^Hiu-H@TV@Y(+ee ߇f*[(+ 2PV@Y(+Ҳ Z*IMGl'UbJW*^Zne%PB U+1_%|U9Ĕ[*1_UbJW*1_%|UbJW `P6Cx~@n3`d\ LZrQ]]3mhm-=`yW,f5Rfdo+R9sY9 ]?[T Ra~O?-3'y}^YC?ʭ]+::: {)gO~\OG)#HaQrRFq&4, *8:08JL8c %͕DXH9" 7emb#!Kb2<,Sn"LB.3!# Cdž &zpD6q:9mAgi|ݔgmL-l5o?j1L|4?UyJ ɢ2 d'JYI f&(o^C=M4==헛و{(́TZX76s]}\&qZ7ȋ#R~>#G,IytK"@Kþ-,zڭ%akYdeVH%(뜳SӘ]0I,rL{C*{eR x˘BvB.s/e/g=+6p\gMvܘ>"("EhhM,b祯/з-VIK^DEM0t\GY!ϊYG< Ѭ g`P;j`tOYCE&J99,y xIXgC=kWfDt9ti`PReT1;m.ycB(4#.fN <釩*0 \g767!!s9[.!(%{򕼍*9it6(}ZAp5~Rnw!㋫e7LM(0e6YHF,d"d%rJ A wm_˶{qn.[,6-Ckp Q$$;wxKGe:Csșg8ÙyO7,i72>yd 쥖%cKɡ1~e&[["%,MyP1࢔MB%#&yN%U09{&_[1yW4ycYn^;^\$ rhN2/!y֚A۲)|T~4 ?cLD[М~`V.er:Jd &yPN_|0Yl}ﻝjSO.:0<;Ͽ7M_~zp^?}Og9B)E&_"pg{{ mk M㭆N15z9kjrǸwGXDƸV< ˕2diV_!I?.u,L"eM|4[ rD9o_a'!@y,//`3mV\O;wW 5;5% i/o"#~OOuYƃoAN޹^ MnU3e_n\q) \VZ ~u9l^4 ZoƓOX9%\7qw53W^4s`;K8 /pWlӲbJ`4hWۤzzv׹h l#*vU fBgo/gixtI;ێ~ —qo0"E(t2X֤o˴q*g }wW|Le$Iּzש^Lˤ/:p~Scl5n*nƟAlAI"#Y Y.*m57Q,4I9A.ψ/N>XdGxqzw! `V9=D2T`V:Q! +itp=FU5r_/x% K+k΍b)(QH& ,tVetRB(1J:Pd0KbO^*a2ވ' 0BfY=m)ٮtC}$vgŒݾ#M3t&Omb.`RG xTsĮ9"nqPzZ֐KRw@`Wf-u.ox L.O1pk`$b)} 9%xqϞ|lS,*veDwֲ墮dH &ڊkI,*i+15X sS}!¯Ԡ J{D= 2)VL ,f- t<NLKK+AD˂ 2E FgȠ-YH0EeLb$W!m2'N ^GSRVVGAI75-#>CG֔ :b%tzhP>콦}X!*Uɼ''8@@%-s2a&tRKrPlme!{! (d JX0*HyJ6mJ^Qy3*#v5rv#v5XPwڕ0jѣvn]Z"ZoMLR*(z⤚`L Yj\66@t+I3䊬hX| YR+"R SօІ>j֨_-x,X?EDYe="nx$9 ɼ& TSA"5:<#U@V6aXUD\UlcJr2-ɒQ,&G A(RU,W "Ծ+Q= "-u$\5K:j\r(.ʸz\qq&`)Đc %/ r@Y*Ղ%fYIMx4zb-8ue<ԇ#@:sG?{^9 0E >iK`'Ufˎ+!g$i2Q'Hh t( 84؂u{ h*o{ Y1*kR BdϜV4#Gz9Y~!V[} 3`VZ @o>*Ib9!9tF"90 ^M-rp/ӖMFCe\T*fU.R0am+-vF!) 
tٳO= >Y2cl@(RV 8'&RߍpX-{۫P_oy!/B\`1TL3eDѷYwL ;t6N%NnxZ|C۫)|Z@8(9iSԮ쏟no$ef|/Ӹï[hߔAɹ.ekzd˼Gpy&ҧQeZ][35lPԆmUJ{T*Ŕ_yr1nY.g˻ rQVҚFLnnJcgWҴ2GWZ?PJ~gYYA4:QR*,(u:#xcH;\dF2YN`Ik`L}^nJǔ8ȕ|#z`Fx6_~xaf|xJ\5sԾÞ"-GZۻ)M=mlyE/Ս<`17H,ClSֱ\'%J-7Ah :ZS x&iꡧzzl!YI ep9)1 ==LdF3E1r<{'8IFÜUVd+KFAIJsia3|oAN>>a4y?y9#Ng/sdzYbS)ٙ~_Ŵ^(e%aF`-S #3,,x2ŏ0CCƠսݧ{^ӽS{u54`dML_=Hq8O%f~0VK?/"^˾շڇt%]tu벋c:kAO32amqWG>uEgW¾TOK%%zpyWN"VF]!6oJi6[ -q-LO/C,W-ZLEJ[ګԜb'7 u$,ͺ[(ZyZg;t[KB eǎ Z;;6i?]Xw`ȚZQ2L^v][o7+6q[)@ؓqIֈ-B$v.vS]GPVznvXE޼Sv (k+2R%ܨ?ZDgrO6h9``Is*n Wh♶(I=7q I8VZ"(s2޸*"U\1:7&D.m)+ ("9Ч8N]!]7~hYXT/T/W |=*B~(dyD!`{yدD?5{wo&mMy&|S<6"X0|,ؚ%~,Hv7>0no/T߼^3mjzro:ꂻ`t p3=\ě9^p ޞ ȻIMࢱڸ=M!K;xy@]Xk#@Yr8+ iMףz^Wר Gyg0"M}g6ϪMkt.XyEl?h+n,ZS(Wzm@\&P߬"/߾#o+C,cdZJs ?VܴRx#W"yt` jly-4;^iwߤU֎_02% 3LcN0mFIq+cP*d!( T4̐(LPq`$p8pㅳjǹ* 4S0$aYʂ_KF@%LJLcx,>ʁ M̄Q?=iAV3*B4ސ\Mk$SJq6EtDN(ghU/HeiTzDt Dmրob@xQ.I\9%'8B<645{$ISOQ37Autx|8czV`_5Cq| 'IQL5LUߡi5lS#?J}Kٯ~k_vxޮ8o^xUw߽z{@>xo_I8ϗІKlD/Ж_k]{t ]s-VZӒ_=UU%?yIŇM5~NñIXM* Ì1T$TaHnHǕf!B=9'm[$g ?mfTjPWfrwwgr|sRM6RR)s ~FdzKG_ =%3tz7dM qk}4>}e~i? @Pήu9\Ws_?oțmV7`|s2wiO>ՒsYϚK8dܲ)/NI쑛(C4~gu~13K߷t~>Q{{f5f7e%yf>f_ożջ]FKދv+|)o~ad}LӣzH! Wa{>N'a$y/շޟzpJМiOy ^~욒Po_,7gKq߾ݥ3 [`ʺhHpA}$U @sOSH\ A, [`ŷ2:Y݈ĔG@hwhF@0c)H.M22ON LkKrEiutL'FEN:ONHLHɹQS<;R<^OxKGzpBZ)ulGĄY.:szxxRrFpRFجAչR-J$Bq_.?1?G!vnC F۬{3ȃ9Ɏ; 5F\֢&fHr| PaA>Iu#ccpE|b1U`^"SBi``*5)`jDyM!m4B۵> 8Rq 2}{=cklM#t}>v:ضYKtO~jH˨]^f‚AU4.e{s+Ke+zCerYA1ƢFz4 H]Ҟ-`LP-,ȹ[oԳ9 lpqokwx:lG sĦh [޲|$Ri&iyrzwjZ5 q}ɿ :RSqBm%u 1<#4(A 2(e$@BLqfQS(K[1lgD w'Ye}qn/ώ<9i2*p;s7oj笽Sz㨼JdAW4o"4T'ESyע&) JC !a7ZZ\tH2ϧ%dC;*]AQ-R؄ Us72Uaa1X ia,=>(.(6fxIfvOlg&7~۠GGG׌͓`>|"@IA{')(&⹡L6.I*iIZȨRV1MHrH-CPqD օ$39ZGτs76F}+X?ED^y="ZrpgWhèV{?,PqQnHmPňO0~#}sCחX;qnj [Hb%0g eF@HTnQ˦I^i!hAYJN9SQ;Ca XRpR8ap%7Cbh|"sp}9L܋7qn6H5]T.9΋⽒;; DKAû Xf'kT%֨Ռ^v M>Ƃ)Zـ$:Zdw/;T * 7)3K8SPdH@X0 {D'#h\^XΊu/,-?kВA,fFKH Iprs%ʇE| +#I>JbSN6,qm%gB2I)S ikN~WZ~rIt%]WȄ&GILÑ(UD-rri"ѩ$x+IB\M$Jd4$8|9\Xl .Ȯr;"H[N_O>8ѝQ*" _l+cA"xap֨JK^ZdKٿCpf9=%lߋG9X(cf.!JGXΞ&tVo;`iR}r΍"P c9$?{='~5%(2nͿ{ֿ$7W^i¡uk~F3B/ǿ)-P7ڌ\ʾfr˧oF8SQ.T.RE9kVK,l<67̹/4eB^GD<mzx& 5PmfgxoT#\:26vK{~Q/ߝyx;On:3d܂.vPuӶDq*l{uI\<3\YGL kO=٠yq߸᧓#i|𗋨X-p [вQ+(I}B'v#,/@qp򱚞4H4\~+/we$z))ȫmcǃF,6!)5#")$R=f%"/Ȍ#J<~[KS :,I"JAs3;oJBDԺF"I+"2/kט_Nj/7c yg_׷h:+YK\kj{_[T|JDì* ESJeDe9s?1c@ҳ57a!Bgȸw\HK".B.CB8qJ= QIo\<*t=;]9bDbY \yF$Ȉ8$-YIyL'N ıyZAL׭y|A:80rOi` ޑI<˴ rFs\F,sݪom/ֻE߰XWEÚݕA2&ԳHQۚݵ.  ![Kmx^uFܓw|NRJq={N,w:6$HO0R`0+N"{uL BXxIPJM8\at pc/z7- xr6WoMK~9i we_PC-J+?U]GK˖q6w~;.fwd .Kol5T2OoAFdȯdb%5`s-ߨ,\Xءml 6tW>1\6HT .^;LԹZZ:]M_Y:Rۧ(ʰLm5nRM]R'}EeϞ;힎/f!8=p{n\:sb`|A]!ob# yc[b&xҠy.4K)c$pP8jxpV[8Q? 3&Ύ RcZ)䃯 I\>1 I3)TU  kD;)ۂu#ė/PAhCc;g3V,BiNScN3O$t9VdX 6 \u{rbvcZc4ȎeeOrv(6N!pZ˵pAb"&yg :'ZpTȬӮ=)tuCǣ_WPH_Ի?[6vrs|Wc`x?h{Ͳ-z4ϞXw澷bicSc~{ k&w}9]O%rV TZDjTT6T./ kr; $R GtX# cC*M3NNV߮eu,k+<:t񉑮ͶcZbZB 3h/'E96ߍCS$+ J^SpDiDi#@SWR&rfy`9OӡE!( w.0kEITH#RJgB*R‰v#d TkՎl!!TG#u` `AF4$O-&.1K\>{O3~hƛڳꐷxfq_ܩIOWuwy`W LE^ILWWֆW`RlʃPHRi! ˍJFeuTU=R*@re`NL UA(Wj6aD-sa=76d©I@e$gSGFs 1%mNU~5S̬6G9[Ѡ?o3qZ/*((fO1E+h$$kViMp! 来XGA/q)*JHBH1MȤhy44Fڮ(EbhruPHY"!Jy &g F8-Kc ڵW`:~(1~=lRZŎF!* YFR֙Ipnޅwz8uց8v (Z'KWl&BC()s1Wg ͞nZpvs򚯡- Kq:_9R78ͧ:5G5mwC~^CVї5N7?nny_sg~H|VT~8.TjrʟϪ8Z糜W/vm®ȩ 1my旜Ov_&.goؿI+^Yڙˤzq.J$(Yʡk)RXg~KuSPkAiEdU/qE[b>[B뢫rs [naft¸ݕPݢ]zuƔ۟K;^UY.`@͞Fcv:a. 
bs{ގ碎;q^0pj~,ɰg>,8gU/Z8A26.S-,-"T'p™"1T݉iy:Nޝ&'0:z.|BI <XC#w&@q%$4,OdiǓ{N,Ykh߸'|L ϋuxX=Fwpgؤ } Z#M[l,M>> &O}P##Bi︲"".պ pQIETt˼ ERX`fyN09O&xH*PN3 2RTVYЫJ98;>ޛ%A?xi@1o)J>i,-nٙd@la* OƜ \s^WYJy|DRP-NP`d*+UVUR> \i\MNP` d*+UVc,a\}@2DjNP`sB*+ɩUE#•1b n<ޛAp+qyJ9vTuPѳ(}o8mؽa_4LND_;yM9/a2?&7ףmy};Rϐw: U?)Qf;N_vNos׆\/jӇoJIcCXϟh;W \1䌝\,.;v*Kyl\ \1Fb?/ݴ$OK|X |B>9ق}2,:sJ(m/@tF^PյJ'PK{|{>18s&^`ģiU6sgóޒ3Kn]J)8i$sba9 M*84A٢#D6zuߓ;V=DkȢQ#DZ\p87',bR?{V /;X4nrU&LvSxr MjIO+%-j|`K<8@wFJ}jB_^l(A.w>M]vig0 5C@3 8@ D= De5 hG@3 8@3 8othOzn'PKRؑAp ^c9Q)m]"m氣@cy%:-͎X{ylw3[͙Ox{ah@ep(*`q ,h%5c SsDdPNz$ad=iM!OASA2v2 ZWcc(׬;wCjCǭG |Kvt)]sх [Y_~HuzGy /[m]7 SuNTk4س2X a&80REAYd4v?zvd MeKb Y:XXt M1Z]WϦ}ADV6ɂL҃K{ <9O伵A7ֳfQz@։k*_ !m.d>Ee0Ֆ"9smBb T$_#(^d< 5{@&])B.*cL!Ǿ+>: P' ~Fyя:ʼ#z..}.Ϸw]-Y={<G{@* "RF_:0BaQRt%_ݛ ǞcAg'4}MvӸ^3ӅUĭ6BqYUr!dB7]mx8S)S 0@.d.*<|,96/q(H8mkᾝۭͩ<ыCQf"'WʷzH 3O3B5P*h"ZS$%wN=6`Cюfvڡ ٢.'Gi, itkBJ$j:!aH6yd5BT}ysÏ?>-Ux2L/]@*N&qJ鳷|_(oz;Yy~NeBx:oG"lR|Jhm}gG5K }>:/\y|\?w\_5ɎǷXS(`EQ9 ek)&>gOƤu(mm66^d$JN> OͤLp젧,9 mswֱffG[rm/kHַXd͑ "^j9gyC,j2' >u1$/:lH 0t(TΒrr=xVұFPZ]x>ł{LSZ݋_ҽ?%xAT%P2z$:Yݙ7ʙ7HZvQ$xd  "(kF+b!T0Y rJD|";(R[Eg/^-&u$ c%dPw3#.l줿y/SnF52Mg+}YhN8[e֟g9=6_Gޭ(jtƸ=/';0Yխ#\f_Poy oG,6./p|?ǟ\}û?ϯI߽W^˟xN yFo5>K"=;p|{4Z5vh˖o]ڵ-_yKG-76W ߎvx>:߆ ZKʣ8ͣ w2;^p{lzu:Y,c&jm_g1sCq1f߼\rŋ':|0Fs >]n>_jGb<ٴg}Aýzaf_l^|-/~J[y]:յ;7ף5/? n}pw٧מ9c㹹/6žzE.݅RMQpqL7M,$&IMϤ/o3 ^w:qiU?,72F<+uYpySx5J˓ϼ>޼xcp[Ve=?<"IRlގ>2rw?;ͳO:,8Xo:f*rO>c#7u6sS7=9_:_ yb5L>bاO1 dؑеj3&U :Q\R6h`tx3x`_;dXɑ>b!oSĎjV QdMŐBxcVGQyLvTP3 otMqW Ώ쬝<9ClV\ǫ\:?:qLu|J ._sw)|V7Q,%,ZK2ޔ5u;WUa$b؎E0.WT<*_(J2BVunܭ7?gӲ-dGx؏\!~w(o+()7?B)m m299'<>mUwv`w:2,ye׉LNQ4whMO̟-)+lZLv;i_u/nR kr_pwGԊ,K6` &H_Kz!&U),$yQ2ۃҶzWr.yh87.{ٗ VG) Fel֝-c;6}mac6H#[EmGN*2BS؇Ω\|^j_di2^-v 3 @NltNGt2F[YYPL\mu CubVl 5*kB :prqhIi;Ip|>gA#B_baJ{%,lgF?: q"r)P8T" څA2%j$p&l]FӪ;;lhKNf1~}l'T,L"Jvbˊ?b3@<4E|㚒6iT (T`G\HR)˺ c@>Y[LKuBJM히җ) gOE,Xa%)T_{@}:&L*`+hI0H_)$ttlNK=};j݀6Z7p#5=zR8OyǸ8Z])Bw:?A)uE@gCU4 /!6ѱ%N(OgXEvPa&2Ũ$dE'r7Lg:#`$]&O[}8*iu:?i8;ޠ'UNzN4}u}?S |4mbo2m:4ݲ[~Wȹ[+ 8r˼ ;yI- \p-`eR;-z$HlfυecL8iH2j4y=*)QZ;Nߛ0#sgeu.7t-J%X.1J5R$~Sˢ06O/Χ믠&Cg$W5ވ9*WVgU;lX뱅p|AǘzN$WvwCKczdO5}u*Kѹ6dsZيrEUaT0 [3Q+127_0qs1Uq>s} B Wi4q#M#Jq?EGYy&]Mk"jqWE&+#g!MYh>`杨*Xnou2DDn.4ޅӢyD|Զû 3]o ]9ތf+}ce9$Gj@?&UbPk6XF?A-3(w"E\)v(6st[SS@mG%\nA,wUM޶FFMLފ]nih鮷 UYwJYqQ>v/ݙδ#\9IWzB8 iwH!hxd}M\FDsN;•ڒ.K/ /*s9MET ((xT)q񐃵1qb6)4w+1ʤ^/F50bCM7^z~q۪;(g6% }s?i <!Z> y9IKǻ tㅑ^Q7\P`o#gж:.۸?A?/O6_Fmw¨2&6 ,_qF;j:o;h`>xs0x%#AlW1i#ey^]: ը QZߣe}^ m3)86pݬ3[c`Ba{Utc"aK~-x7\u2^/b;82#iV |Q]w\!EH^B$ -ĄygxNɥdfe{CmJ18-"QIR#u;˝^iT7qW( IKo{- .P.wF7t[ Y{G;8޶t%UQ'vQu _u9[uiurh 8JW9y bOȈ-߰tєo?:Ѽ:slW67j.敒*-- l~/yүM϶G #]u}PC5_^:+sU u@}s|g|h+w|^43##BBh +|T"h-grrh0Km|gݻ:rMZp8 "LfLdIy ^`:4SJ dUj,'lזk녥!RcJNOPE#]l&ng;Lx%IbrY8ʕ)$@Oc^.D|6-8]U "K P*,&祎,R(.#M11HYSDaRkRȅ&` 3xeRSk>j +%Qv&+,1hW:WʳrZ#Xr{ Tur>z/Y,j-ilF7{za7A7rޞ!O6OQdbKUS{mڧ{XuRVt/giQpvlNQ6[[zV)3)VJ%"bQ&QK/0x,0'e Sѥ&\:2& I;Y6lIڀsM^MRR:9<ӞT!U0Nt'3~Bpm8Vf|;?_R;+%˜*L"OKDq"21hƠ0)h;+ɶ-^tguf)U Y`je ωT#$cN&.8 B5c6":0J`Ngx`q% l=Dh"!p<2dsLzhVQ%3~w3-$i=$[̹zGc֟Qʳܿ 2A>30oGW0s0Ѹd,Uqބct8(Kv/k%Cm,dP*IW8, ctA|y!~5ڦ?t{gtkCsTMϪ8vB<᱅ k_1؂Mru&^:[3W笲kL;lx7V\nU%irCMe(/abšwX3ק Pz%ȱIG-1Ҵ'~?ۢ#SU鬼p&ĵyYIIou\"n3⦬]a40 #dv9VKͻxN2%Q2t~)tx3ǟ R3r㿗W9^%=!׍m80d:fPEhuz)tKU'5ݧn >H-cރZ2(m捚>Ot*;n8(D}wgڑ[?޾\s,Gd[L9 yH!hxd#&F.kmH /v~T?ʀg6b~H 2)mpUϐ)"-(ZřawOwU1QHAIAuYYZ1O+H>>Z8P*fw0«vL锤$d1&\ Mn%kE َohcŜͶ4g4]A:=ȀCq˪ɦ6d(.Q- h7Ru6U PX9UWC 0Xe I ~fכkWθ()HSFkt!Ocs&4ek%9vYgv:Aaz=:>7aC{37[{RRԛ)?꟝jTn@ ] . |<`vu9dfuw~sD%\I]& Q:Y|ҹdC \ x0&UT ) ],+M6ΐ5 9ZK'6L涭w gzU׾ҭ/>;8xpGhhpM>}R 2 t^}(ڨ8*=iʘY% * . 
Lek[-WYqmkZCv]+)| _YͶCZ}Қ*IEn٣l;BŴyӽݗvZ6̤{=bo?7^y]mןw0_^cOI#|Q?5Ӧ Q/ѳW #.{x1Գ#8_K+5H#\xҟ/$JF tZTW!+ `R\ &ԙ{!Ut#8PfiLN,E-sg g\> ռYX}cB, \׉mx6>8`QWLk/N~ Zr uR%Hy?|jיB oDOwPT=P@KV S wyF0 ME"T`ցYZqg! ]HGZ ,ꖌ3φggY_UA !t@8ymXBYv% |8W/#mYD+bhE%?x)BRq (cJĒ2l Y2!Oiik^\JZeacJ18$.2tW.s.{ҡx[ gzpl~$uқ, _Tatv5I70ښ1+hpC&o[tlTmC϶ Mvmpnve!^FW6g)9Oi/nO7V[]:y-ƹ]nm俜0nio~wsoAaL+9k|Cѥp)=G.u}P4?Wޜ`e~~|&uJ2{BPLR<bJBgmTuנd$K, )NN'h^%,+Hf1hCx+\9ߔ)=*kǹDSj~ػ~lY5zc=dX@e^ޜ6ٽOB{7n:z t}&8YYV|-HeQrRJs& PuQbYB$md,R 28ЪI2Q1.Qq+I&GrWPbyمpb1ç4 %*9O;+Y69D4ى("x=wR0}Ǽ0 =f6Ұ&7#+DM ]rn~P -hatz:U%QM *hԐ5/fkhػwu0M^C0ϩ;;bZRW )*Hڊ9T  W Vg7LZʾ뱼~YMgk*ܕpuB)Z*\?SK +HmSO:}\ ]7TRR]ݜU4#)P[ؚۭ>.n:'iWD--7׳dJbly: _gi.C.H!E+U3iC\*qh:hNM0٧rdPyR\(8PR(Bkv5Uw:`YBL0òt6,,SF)93*`Yjeƛ1<Ҡ6h GP# +)%_JKҎlz@Z)892~hO]z奯8spޥ[˭MʦhhtܰWDYųf>8GT2>z C1tώASNS69-3JZ;nBv,;i'BNB 3Ld(l2!,0mJjde|p䳚xl6/~ }6,gcv*Ծ&r[BdDB̻A:]?lmC؁gz 9ٲh I13rIP A{U2yOǘQ TNt)CŏwP5I ⮕,3 LY̜1 f (&B\2`IN;T(5Q :ъَbcmJ&V찛l:^+szVYSvV05ń U3a2MQT$9An2YEǣY$f鸭=m:NQ6-P5+񖙏Ϙo "*!V>YKUɜ|2'HDNʨs';h=`l5 +wC|Fl gGEڟId4x@8:/"ª:{B t4>ZؒPT S M!ќcݫвfERW]3 cqrEUǷ9%}DB ,FPĈy)Иu%&|tsZ.%.#āxgRc !0 Eh[jk8[]! octrfYV Li" :YԊ̋2-Ic *,ڨ&#HB/:]`:!|,r\:atƐ\j SgdK*5ht 07Ǘྔk*U-063w-K9A PǠt_'>2{SP ёb:jִ #I,.+A\NAęqi>鎑eIh!9GE` Y!:4^ys5qH-" 7R[giLnޤ^LQJqm#9jtQ~sX8Δ>'&XIɑguFơ4M Lo{xğިI_mltkkXٻ4闿9W?o~8_^c.zqD.8oO A` 6muMͻ6z6|5^gaآB $O? $Smg/M_a0ìG,Fu>4uFÙˆ\7iԈϯ,gT}fRѢR\#ib$`?OSYfm'4Hn=m4mi?+j?eOnA\k}w){sF?_yQbfXzkm4+g-7$<t\_j%Nr& ^[7?^]0M>͔`4:.E&@O\]*ʢpsmnYehUpxx6Kۃ ڟ{={#52Ĭ_R˿]Hy:@'ׯ!Gw84,S_׻5D8'nr3YCڔLir.{C[Uqv9qM_AJY ܚK)%2k/rhBsDnȈ.VuؾMsLvAX '!lj3 arTReUyZdNZ91:(ohx\Ň\y=7f+bd,ea+g5>ST-KiWTd0+T)+5U}YwլN"r@GTk` /p^H&5 YDH@NVA'lC ^FA )Kn}Lb('䣵D٧`q3׌pku2&/KE qGnަ\eoC?{֭ib4`vܛb۠E?#đRIN9-K-94ypD83h~qƅM.Rg:EK^q4BtV1rg ǹuZ_Zo|M5W^lWT,K*d-ˮe AcOz~}:9=ƽO (`P[O =a,,CL69ψ=RR3(a J2M e!&ɸ^UuwA{%w1Mψz>yTQub1*F6a ]9yE, $S1pkSZau9KB&ju'YyZ"8q*t\#1㽔!2܇so$1 UNrԂ('^0}5ڤ$r]ҶSkSC!%ZPdpD}KDQ8lxOeŕZ謍Yze[Yܑ''xqVO7epZXh>.FxΫ7Ǫ #"X6ךpsR;qo*\,%(b[5E8DPϔP# T8bp$Cs$ D˝"sي'UPC>7!PXЫd|6ďTč?06z;F?>Q#㌟WϏ >8M8/pt'ï9t#G_q-jeW᨝U^J.ʏ2k64.zVZu9xȾ.pi ~V:"h[Hk7Iˁ6f$ϧY;62ܩZlktOJLӽ|ozhgjvp|pe.}vUcW˓̷ rcs6<I4~F\5SUwKA`I^wNeov~q7/p:nh,S}PPWǢ/nƢ?@,z -+4 x`[ |`< ٴTs7:%hr-uֶ˾`)_ٝ+%Foq7-x'Fw3:`nr "qC 40bШΠRId|Δ yQpT|$/->6w20|Kr{yxy6ߒ.mny(޷߮~\1z}` ͹͇{xKku_Eg4mVse=3ɒB-:3ݎvMl]4ݦ/G|a#6P$-^F-1rm&ƀܰA[ugi6?hJޏuk:Eɮс1o&}0hڃ?f`ζ7f[|Z67ٯ7 "s -pU|*S WWSFU&XȭLܚLML%ԓW|ɥ/ǭ鵫{lyb>i7LJQ)f\ W.="\'7GmLSxnm(rSȝy8|ϯޔ9KߵRto;>X.^N,^jw*ڪd 9 OdP ^$u>ݷqi5?iTB&WNPb,P)|phO&r\B*uBuO֚VlΒ ~Βb[t֐Y2TUkYDTsL֖!P&fV\YT&e)E=lHEEHSpAb"&"ygڎEH>itBdiWs)r HTLG#gsjolUGao#{sͅ!;ے!/Sѵ.޹9Bk=CkI.L;(7~<W@9I j-9lOQFU艕HͅpY끐Tq)jBG*jou"~_T9zԤ'&n1q4RT0aiӜy⬪?>zF/x"x-e4F"4jNIH|@&Y(Ƽ#Q7iީL ioyů 6p.ij<}@/ֶTLm[Zֶۖmim[Zֶۖmim[ZۖRmim[Zֶۖmim[ZֶۖmiEڶ-mKkҊmim[Zֶۖkڶ-mKkڶ-GCє=j }&Bnd<)+ok*Y?X'&Dϻzi{}nu $"Ip˷6am6ߦ߳]9BR+j ! U\&JEj#ڤ=#0m҇L]ƥ J]c,kFVފȔx UK [Hb%0&CPZkL6 ? 8*w4TU^#Ѹ!0, CQhZ ϗ)FI:/yrR 8 GW|QAf"bcePN*yShˑ)m>#IL ZҤλ YDȕ>kÔhR_%<O| S&z%')"XQx R7[x2ܤPT,ILB#"'naI0t\A\yϒԂcX Y1rVzb26Fˏ!Kˈ8 D[\6:X5GlH[oW¡|*%E&X@N?1I}Ħm +'-X}QdRhKA\]UZ~rRj'ͅwKMj1'=z(1 [! QNQS94iL_=P"pӦC^yr'߭/M$Jd4$8}9\\P$$,my\`PGi;^>ƅޘXdU+\_k,!Z*y.yi",TJ™j]06z&HaQ$M0`g (C2\ܼT\#P`)[(hDg@HN/V;eT!2i7t 󤋑d6ǜ7Le܅7{9(W%H%)ʛM4brO8D -# /PE%sx?g\Y/yi||-HαKT6kIG7i z bsP}LʥӶw7gq҄8_i8j}$O37Qt=vd\.w&"4m9!.MmgÑϯF&v ]ԷTrQ۟^G(G/]D+b JM¿"˱ZkC/J ͆Z,1%t—W ƽ.> |V9Ƶa? 
gms q?e &AQ?jnv+nJ2jr2x39 93lyh4fF nOw{݋FJ*e./sӇxvi#"|/촇9nt}n!N4>}}qCpߞu3z"ln`=؅ _{GrZS@wRKe=k.ckK~>iOR7%9?9?g7Ө`2`Yυ[ y{VVp*K}VY[Q\,M/ܼ;h`{n/Gtzқ)RBoUߟN OMw\w 1䑨fg<:ɧ[ 5ί1gķ_A F7c'eʺhH&$HKQ枦^| 2iY-vet Tmvp%Z9$w_\&dd8\d֖8gzF_!4㺜03:lyE#,%ى{0}O,ɒH$|#bթܣcb1:* q.n]90Q\'pyBkzΟ.\6^ 孻2w)ifUJI Dʘ2!4ašǔQF"z9de>`(G#(@)fK05N!sP?zxyp`˦z²ʇތ/6el? @VXFDOlnP62*xfץ,xn$n oHN;kd4G(~q@"AK3J1&(IQ=~/Y/`qZ{gyݠ,rSZ$CIF$^ɓYT P50/o3I '߳:&79;W~7 "|xL=9~BCw#=wR;؍J1 ce2{-To)P #g*bN"ɛF Ul ^/xomr֊NzfDž ׻c}zR u;BPL>nQ׉g7fO_oH lh ;}[j,7)o/qzPҰw d?]qAh RT%(JضtAҤ{F7ȎPBZA=N#`A m˽m^\z9J\ (t$F ,\K -9XG%˸ ^8<;O]bKBc) I>fMhrBsr=%MZE,NJُl B]"GPy91tK@4*Y \:#Zp('[.Z8M"&#,aArYԱ DT*Eheiia62}ǎh c!Wm n ښ-Wu֋Y/|pi8ŗ~t:Vt46Б{ k"uCikͬ@x VtqKԞklNjYX-$ZOfZ12{W @i-ךROmZ Tn2>i*Dfv+ޏ$s\-s*i_ް7P1t/*]~1v|? mQ?Ps~է?_oneu;E#(b0?I˭ Zvsa[]7{6]P3P+ԒcR:RhRѹwteQxQKOQ:: edV](.@ '%IXRlzOD Aх.))C阼GNG-G#\Iǵ_/xm$Y{}/m6ݣwu}ReI&BArQ蔒r  &*Q 5OMIq8B^Wc 1D5BE⩢:6SOW:*%l9\ j$ ykY>P1@#r 5A)D bRXQ b9ePYˌ5r3 9>&f,{;vw`ǻɻ}5Exl^Ƞ+ӿ-{x3lEȈ, I8NRDϲ"xcEBO[g0_d=yqbxŘIJ@H (~g%.J3g8: p^^ =`~㎧4QȞޑ9W:PpgjEs\-"W ]ڮ?J̣f+ :~-]b!Χv<"q*|Ve?z~ e4~݌Owi~<UbFL[z!aLɏpJ+>uV_~OUpW:僝 ]lNm~CXn:h ՏFsʹZdzl`[3Ґ-8#~hSW8P7K\E&lj$><໑ z9Sqtm$J\~ծqԐ\^_eŹ_>5)rjpk{^e&)HU>rUIVFoůimN0+d<ٵ6޼MUeRAITuZ,iL'C+?LJތ_Vy>i'?cC+#R oE-4-؛EhBr6 bԔZ51#rOLB#^ΞPԈ35}ηyNd3AM'&/K8ԴZ tmf1ÇQL"={ȋ&zª^;RfLDڋN;və.O?x6ܙPzO3rO6h9p_%DQS`FqKtT♶Ӳ,Wy^A-Z0s% HˁYs"\~%2F.2:7&D.l)M(nEFkyyTZ[[2-2ۥFykR,J9@>>P4V\ i_F'qJpYh+ _,-XT(tcj]ac-ƍKWQZD"b2UdSrnLk6GCcU1HL451go3 R3(O'ced9Fռ%U돇wzEzZltw5M U :\ѮymĪ YcvM)y׃qg_:k D>JZ1+mBu%d?2{(3eݺEKfƇưvng}lz|߸ k癖\mu-Cͺ`i^GtܔW?wfԏdB.'M7͸Zվ4ЕGi;Tf$N6?lnɒ" zqbǯ7TJ-@ LyDx"hai)<1Δ <0Duzy`[sk% PRj"GݢLΝ ;蔨ăCPJ2v{aWY}&!N}| Ws%'m3mh78;wD1ş MJ^^j%\,ּK\c%IIOf5[Xnyd`6R3K[j&\tڨ} l^9JLkS.e 빱1$SIIψ,LIExj۳5r 0Bңo >}\ki-Z~v@Ї*0%q=FsPR6s!AѦЋl/S^⸺GkS׊eAZN^o̊ :bRK:Ҫ2xxwZ}oKom8Idp%/)S:"M4ri#@SW=/~0c{(>t@ qe"]tm nϼ7-}!ֶc&!EySYU kRgjn3{om8YAi(psZrQpTjU s`NH UC(WJ6aD->܊}9t9ǡ_hP3#G"7Fe³g8?=L\r|t@#t 1@/y2Pƙ_ ̷uS}I lNMN>_M©ȹxû 8SN{3 jݨO&ݨT^,cep,p#ӯYufΑ3gɻםi^,x!ֿD}.ڊe;i:VMmx$Oİ xUvzW4)*[_Qwn,ru`&w ?}P7CGT A3o \f8%z8E<,popVψ]!"L.υ]ejA;T ٱWȮ$H+$烮2@ #eWJӱȮWv V]erx. UBǮ^#Z>F\K'eeÄmU/4ѹ>v 1vTnƗVh1ߧJSqy[\~+*C|??9_[F9^*F8`7eUɪ.OAmV ]|???;ܻ)4t*)O1H؁mJ.yx |6Rqj#HYRi%ږN-u^=^o#S[`:W<"y2CR2p;åNHq$-jZb@Yu HI=KWFzeeI8=0|J^{8vVXŕt>ҫ] (&kDp-/g(+(2tGG0Mb,!QQPC$D|3d$T啈}W񩕊O#s7XmRgʜ^/-wjߘfľ!W2HB$Jc$)wVsN \!M,0%G-_*6o+Ao/DiFS +6_EW 8s(vNYo;@d+?qp/~pu5W6QĩES޵q$e/I;VC"6/cbRԃEd~3uU뛏#dhY^t7 ъwu&^Oog8j[9 TEWUǀJ|PQ v;^bsָ]<^J7/;ڂja^SykJ)?iL&8 "&G Ćz6^[n^/דxpVG"6u" p̀UO~~|YIJyTW_]&O+nf_i}[+ :]@؊iKX;YݻOVR7Tz'k#.vtuRcm xMakzc@o->k;wq+5UT]M*ަq+=Rn'035[e/'X'oC[Բ8]J)HoIjf"H,< r!X) .J$[.LL{*xҼdڜ͛j;öL٫G'`:8)cߘhBW]pZ9dĮSMrU<&S??T[jQSѸ\=N/~ۛ߼|77_w?~o޼½}o|54sR| mU5nI54Z:Ў݄o2׌{]|eqMtKkHʣ_ӫ coa <;Y9q8Ui"=*q3U̖%UY{NxJ6 |-l_ʙՈՃzG5k͵ާ_&7{W:wUW/]~9jo^~鈦mvë߾r|UFh[c=W\k}y=Q|/Dr×מѮsx.o鸾Ԛ Y/Kzo,xmߔ~oe&K!ĜzyN'L袰/ \!xC* 6FZ1k/֘5A`= Yu>QGOE#$ z(FŠd0 @gUF'NWR(e2FIf]LXiK%L$rBCFΆєlt:xCƐf۟,ν:}舸]`qp ]g=+IOhtOuQaLAIj ZC&/K J0xst1xp@Td\x dry"@[+#KXY{#n Q x:Cg+_C[56EcPbW 姱,gް-˶Z_ORkЄƣ.shV&1!meS>Z`1a9`ja5(9Aiwɳb2bBzNp]d%\j^tHzYPA~;YĀht Z=b˘ H6B.2KeN &xFazVG{#gCV%l߯nA-= >Iq uBAa ^sW%éy4٤,{($_PE+)nNZ:}siS~Rq4rpqz+dK٣x>=o"ք^YVqwySIݵ;T&k)sӔ=1TD-#7zdI{r$NƇ(~5#cgXNL\lN)0q.eAO9sڦ ;)]p#ZL 72F؟b! 
Kʻϊf[{9N;rv>kr7xh6;dVHeV 2-XB%)\V^b$ %*IchT")L+^GM ]{_܍q2N+&殠voܱ-jQ[ =hr BEp9zklr`RA'cb =ͽaNsE2i\ o:!+)("ő0>1e]qm'9wac/ʪ`D?ED3"Dc\./$,8i8%=G" &a֩ɇa"w%U*Q+l\&KGH4Jt@dHU\A3"F#-u$\՝pYgo\-.qQ 8⭚aRJ8C)Ds}de"f,(# .>.wlg<#@زϚxHy7sl(ڨǻO(xNz|IWu>'kRP_ü@Hf7Ixj卫P+bJ}B6\ÁDح@-gQ!2븶tF%@Eř'8.N8C{F$owRKF!ά[VuE-uy;<ɺOg+EƏ I\kL"9#H!hYidΜ+-ziԺFVj'6jzoGEmpnlOdJ+\+i XP dsj& S 6GD̴ ͱnO\ء5ѓ5R` Ҳ,, ϼq E&RBN%Ƭrh ~Kh8kmơ#BXsR4Xyz#g$]*P){J(w}nrA\Tj}@gdjr]6{Tq182y6̫1io,FJ<)ukSAr{ʼn'{!)C&#U0T9b%9)M ϩW9 ,R0qbdNmDV z& )CΘ/{Pv|X[.JsA' I9d9RV2rQ "Ч|%W7O>;V:vjr. :=i){#-R6-~qPzS`~) eVi& .b&HN;^FgpWjpl ;@7L(WV5TL葑UʤEhsfm(.iu ݫQ+Rz2ob4FSI.Lin݌REBFΚ95{m4ޟe-E ѵ$r@{=w;i^7tmTS: 6thw^ۇ~Q?d-X,j&wgzYzD7gӶx&ثޫ[2qEEXκ$t:@<:-?Ol) qrHsBJṲZR11$ÔfȌ%3vSf﬉RÌ4yxA]yOk1]j zxcZeoY$Z٦c)K J,Zn4 PI'AZxt> гzl!YI ep9)1 ]]oGWrmFÀqHM6pOIkIɶnUσ")>P$1`K4US]eu'GȒN rw)k1D(*mC6IzQ!YH45R) TY[n/٬bɞm? #j#PԶewm o@dKx %6phqx>g5sNPccR9 *EJzB  \7W[oO3AaxVN0ߡ,7՝=wOj\ A[8B^^QָkHHsK[Oկ8݄"|+"AOStE4v0O.H?(tҔ(0>D6 wW(dna ݚ< >w~ &]5tNdb.r0uB3 F#4BQ ]0n:颴I+5?JIU+5h}QCovr I3–_&F,#F!91u8Z|yPPŚ86  !W{eh"fARpVM3Uɍ-=)S3ZUu$"Z Rd&uYͻQ= ֗%\nd*yqq>b?/{RUE?|~aW,՞IuE9tmЬ?^lʹK U=T+-Yz^Sf6ocRLTY3.=sb\,XvuL6wf[ J^)qRC.'*R):hɑ3%5 F(;/K+Eҫ^si0${F_(:ʍw1M2I :׊Ke~\ڒm+hNs6 ^WqO7(y`\oC9KK>o"8 r΄̙2נ/sN!6|z"뉠c=يQOLp5u" 1 ace;^z11LT "vCI H8m`KMLc1>,E N&x[uF3լ )D L \zq^,W2cSbqks x:KⓄ O1ش1kQRO$@C륆V#N9 > ǴS )^ /,E?\-# :Gdj K5l S2ɘA _& >%2 Gv0%VD2\*밊a,%V<*GȆ.\hL$xSuc5_ʭU!ג)ELJF &.p{6"\U8c^)KXo.o?qe9D /NcR;ĬW逰 ߜAwEVX@Uv)*8iUxzP6d): } ڥ+WY6mXBwe/y{Sk}QmNwP_6nUv䎉kMMj Zjtv@JaAR%S`rnps3jTUp"n/_QnĠ/.\"ͼ8O\A`[iw?߅t_uVpb|%'KU^N^Pք\`bf tP[ϯ7Ijvu̘Ws7Z V0#)0{,yq2;˜̗Q/."/mk⦖@2oG܍K\ Baؕ?e߄qgr՜q:GyzH<(a4ϗ1ϸ:i7&_3a]0b0K=d8`<腼,8P&\å \3tn`rb#ViW )nq_Ors (X骡.C=bhuA`h_焅E ʀ?*8wwCx̤*hrl;>y ]q6LfmT'7Us4Ooo7g4̾U}y1sπk3%-ϣNZ5~>Y0SYyuGGYBٺI1B>U$IY P\^0FtzTL=q6xxTA69:>ͳkS>b9 T@M|?n A+eۨbM![.wgUݞZ>wFup&`G3/odj0X& nsg)O}=V'JD}xZ6Z8C'O(#< (ɉ^:ꦵJYhtL|IK}nN! pjWKIS_(s8MY Ei\[&mkd,0y p4OU)&/%c,,NN9X)I]=zm&Im)t%f„ ֚Z8%Lv&j2H1N@/ $)\[%n2>ӤvX D^t}>ń>/giWu5C5c &oc, [0.LomjĜVlNwP=MTQ[Hzy Goɹ2m\pHx+Au[$tNhi͙Œuj?y]8:.L[^VlOTe;F3-&VI"\hA*ٽa0kLowqfX㦩ft~~O\H!?q?ugN'#_; `J: b:zQX:^q R=Tb(ֳUIrmYYyn 5TjUr&Ԋ+wZ'$ư" 4T@5s<%:eX(Pʍ[֑aYqGk2k%%RHDc۩[#g=T hgZLjE}qyj9:8\΂bbb&qQ EhKəM紷F ]}K'X}K|j=#Tagrd 31u*,sQr)Mi8n\ AHRVaR^LwZP,``b=6MVHKD5r,u1U}6-nvt_YίIhe.#kKJIqp랧Zʡe 졪5WW =ԬWd.f=i -5-ھ}ԗϬϟ>C2ѧ-6?mN<ؗm2B l=L)P|Xh٭TYRNoߋ #z!Uxd!R&R/5eDĢa$EV 1鍣YKY$9:HU+YZg [ކ{L,c05%a4UTsc%bD89NxQ 9^IN\\o*5Y&Oeu΄9 C'7E,&y>;9s},X'ŵ)7A0ɵ r B($<3`O  baJۭn},qbk7z!6[kz 0ַb y>X.V6 eAA*6I87=X0YϯZu^rÜ9Q [cI"kC릃ur:JTNQWDP- +"RRQW@TP+9vuTz9 ?!uED\J u**QLTjީRRW@dU"SQWZ]]%*KTWJ.|^z{ĉYFm4%#xTWsb\9b-]xX9$Rj@§D TK2vP-QY^ T BNH]%Jj%ǮSW/Q]I>%Gq%r8uݷ[twuT;;zyJ)N ]%\y2A\Zŏ]]%*uՕV T<\z2*Qˏ>*"Q):cE+{s0qwp뀍EM5y4NP 9RAoD0fGnF9?7 u\'-RRW`y:*ة+"t*QI^"R`}VrF"m0f SYWgYMVr<>ҭ?’ul7.f'ٞz АMS}26ѩ(Du#"= _o޾w[],p D-6~G/c˒eQRCbPsgCgV)<> ֬ Θ7Ҡlv@BKc,#cٌC T6cG)ڗ0KJsp"ȚݵP\ WmIzBb}lL3U00[ޘ"%IRZd T03)QeRDN2jD'|IGmb jdK13x,s^a97#HF1ions5*]D!HbsDD$lD?]] 4ݕO!DDF5b/K4՘tj݋U@Ur‰/ Tb35gJ$On Kw@fkQ')$)r HNDji-16r= &\~˰CxNp]W%`6t|. n} S?UnuR\h,(@Vafed"@”Mp0ZEAYN2-.c?? >ٳ flh*[*ҡdTAJ cJnjgQ!YIzpIt ʓsAh |D[Htc;k&Ύv:~sǰ]L@@ Ims&! .GAQ hks M \=wYC"䑌eDLtQ\)\UƘB+>:V3#O:W?j?ȠJJe˭yŽWW Q C) / (SwYcO1=F~ߘlx:AW̴&paeVk+g'x/vx; R@a\\Th#|M0(H+&G5 E)wv+",==k"ll|ra|;TٰA<=#T &5E[BH,*j ?{)ܒhc:?h!Zjlϸz>+Ɠgi8Fܤt&+$}Mr;ͯv7+Gsҍ)='ˀcG 6ֲ 6~(B@\ &]p䑂L @5@PS(`EQ9 ek)G}ΞI92nJmb&qNF"`>1'Ix"KYrZVlGW!-?Fͳ r 9O:wZ([HXd% "^j9gyCj2' ̊6㩘;_uÓZY|urnrsL>\MUHz gH;&alY:!]W(V74Z?AhWu "c ^P!(U9 1Ag. 
F04T[0_] #0^#%GEgc1ZJX')LB !Np$1 c%dPWt1`ke'k~ʘWƾ>ҪK 2#'yћ~=_ן'pB Q]J>_BGzHUQW%{5v<23ˠ.Y՟'|WE'˼/zKMio[wj Ig] _?W\QZx: Zƿvy/zmu\j#쥭W_-O*xm_\|=eiOx|"ˠ]No3 %W/]s6oto(k67eqV.x t:} #zUF1A߲OXdBa4QB6@V-]:*]MHSl>L98v>l"y@xd8{fQa9HI0ϵlr*&" ;(50t M1tG`b PҧLu; UEI,, c1l&Ύhܵ;@dȈWx~v6^,nO}\B ]M糷6k 5N/J&gu-fiD.Zr񦬑K 𯪴~A|R`?BdE0Ȕ+29 ZT,4g߈>;dzipu퓵_9`@hyN[xőY~$fW=W<'4ѰvgY~m@(=T%wR DEEGJ, cA?)4; BPfGh%cw٣FV1$eN)uDȚ<zrSƘs^ 9"uɒ]L :lM]6gGBy>$lj_o-?|:>1$:-R5z* I<yXc0&-ߚ9(k1ͿٗšP"Ab#qJ=HLN g$o&}-64ABh'{(;{/hf}φv fV\͝urWݘ/.~@6.U{C=y+jE5z6` &H_kz!&U3XLI'3dA\]ɹZ7$sR|޸\BR̾/te+RTkLힱVi iƾq$#A}Jl7ds_ӶTNOqkr˟7t{f P9e&/3ZCI(2*jVWQ5TbWY[)Nʏ-HE=v3q{45y(^v'2赏J b6 zGR:炳.{ _/$CJYRyǍb fѨMgDb8bTq'[ 7g?IbuǾQ76^hsų"fp*ddB2B !Ua0G Twܘ hɴI#je)9YibD Uu{fP-P'na\LK//H ʦXR휴JD tIb#ttG~`pb+;xtFSă 0y cG~:<Ũ[K_ƲH:CG9Y g !Ufm; X%_-R z lڤ9Sa\Pq}'q1#(RƅKUփcD)}@ڪ\^xzlGD֟,AіCM)ZTd[K9"J^ieؐ&*`I5Ȉ7gP"}\c S]|Zn5MO|no6֫'#wH5hÊ֯u}X ?Ǔ\G.Dѽ"/<}d^6yG8}y^U,?x~{8fS?н1O}?i}Xizs?)kьt}/DSBog}E={|N |mu)?;bϘ(ޑdWEez-_,@|~/7SH`9P˫"gv>M9wm$_{i lgLcv{eѨ"9;R#ɖeqD<Ū:u{7ZogO W5]nt; GqQ?G׫CZ\_TŚ*tG ]U)ҥT6*Wh> ݱ#mۏX@qwrLZpeMfLdIy ^`Y4SJ dr,Ն9@`@4̫`wRI)C$ 8=1BIt)2tƚl6oFpחje^k!g9עA+0"&祎,RxiSc;cޯ>>E!J6r3%*GH9EXqK'ieEhaRH1Yi!1^)wGju#qR.`Zuϼa}&GħH 3:;-Ke|뼕 I;ϫ];2e+aQ `Y2OԳd`H/hЌ! 4 鬃s(e4~hӬT0N:FQ[wo/B]o8x.b٬S=aTB !(An)h20.VKQc /}#nύcU.ݝS̤t=*nVWTm}Bs']1yW0H0PXbf ~ V #AlꓞꘘƯx^@~l[ýv7q//v+BpUS8O@1U;΋bc]ou_un84\?V2x_6k#K&kRKL[!V=:T.=7jxK2,f|Jg)SQLr $cQ5EM9wG?=x3q]?/Cit8dR("$MY{ Z#lҎjI)yt1z!1n:x.u"jjL 4smR4۠sảFR~;X#>gΚdp_C?N@ɇ*φ=~;/Soo[ 텉'"E*:KLcW@xMLaI2dRJOS3ObO=kg 8 ׁ%TRNH ;EO m'N$ijy`r^OAY3LFzO>f8U+Bt5Ƹ$\%-Ţg}cљd]k b9P[-k<@8 KM8NdޥWNVfssMQ&'LqspUBONyv吴 W7q2zzu;3|n=,!.ttq3G LqqzGi ɦ?ho۪/3jҥ5 68,١qu0/T-[ց1DmfXÓy0q3UqrB @0fqUV/#9B/ T -TՇ]Hǰ2b~+~F)b]F0znr덌'yѼ w]9ʹdRw;3F3.M]4vnUlt}.VrüEe>>>&/P3)wu`y>67={u8m8yp3X6u}!uk0<35/&o}E01BIh>7 Kl=vu, E;sBf_˒)gAcm.Yq_g#&%ќSp婶$0leiY>iYTz5 ç@ܛwMQQ*W!kcŐ1lRhVbIӓ^##0r7m)صlilԅd_In#r3@e oGueH/DB2S l9 #-Ko$e^'o:2̬X5hu%%&.CtR+e< ,M(b93̣P%F3{fY;⾂3_^`!9z佩bΛ V^IA(z07Ozq}36)Kb0W 躨K/$~zu{2;9h}ږ5D(Acr'hvIv q-/.:4_b= t|wQMp DZ+ER^eKf %V&N,l+NXwh*nXThJRYŴͨ|PNd'J|ͦұ11Hg|՝ݏ5 vY뢈f4LbCܥ_~ns:mDϥ@7R[M K^RRU ŵ5ybN;OaNr-KE)Č?a`rъR['eupOЙ0eK#ILX?e\&<$;`F_ZaJ:W=s-EB^rs뤜BpIBJS $lڇ&'KSZX:%b6A ($F])ϑJgay ,͸+<>UY{uZ7+XɼdcX:I4KTe=sM2̥UxKn5p#}6Q '͒"ơz>&7dxO罚S³ &,ڧ ӳvO=Wrr W^80ɷq ~izѥBoZhu晡9oԧ/)fk>7/`&G-cu(|7:5]FK|YX Z ]STBd{[ :~@GnmGD ||`>Pd{=|?2vXsJu0/*dp@w)| @<%8UbL}W W}MVB8 O]ϝ-Xpo|Xx8e#,C7l]QՊY.j^ʤ4GY%v<ܠxz;vآ 8w)=/`4w=TE=00at50fϰb!jjCLۄ^|qGMp\"z/xߍ5pѝH=X8*O;iq kX0dW3i!gg~㬞 `ݧIbVji7zUé.TQ5Z*ء6joqz|Z{d2O@WOq"l'_xO\bnM:lyv 4f.&\[jMʹz=HMf\% IkjQp9TCR]!J{vR 4{(]id+IW [.X=]]1+C30@+-]+DEOW'HW`J&:DWX՝+Dke P^]}Et6zuNvz=bя>߿@pCvN4{Z(Tih_H 3͉ fiXH |v-% ާ1rb?lcvy>l؂ɇrB r#]1yw}sm~}_Ot77Jg6jI%fS+1Zie09,NJӾ)=IfzVg]J*{yʈ{>PsH3qlvp9EZ}$f;e¢QEkSH Ӆ+thڟ=] ]1*]`CWאt(Sc]`mg *ԒCK Q.=]] PK k;CWv-gm+D)IOW'HWaMQWWӮwJ`tu:tvdQW=6 Q~tmN B3 Ѷ } )ҕ᜾/]`Tg -?Z?]!J)ҕ.XwG]!kۋ-#7%}W'IW֪oaTMoB }v_lT~&6VIcn9Vhա֭ۡ\҆So~~ܵ`u]!\+BWVKoND;tpYg vBRtut[El;v.%+thyAD)iOW'HWBM S:CW; R)ҕԌ,5\hwX+ڪIbIVҪSPD-+.1{{X5?<p/Zb_# {Gvh9iTC,OQ)+, ]!\BRHWZ[t :kwnV4:E{.q`!=eQW^]!J!{:AFYE;DW޵Ƒ$_i~y#0> `?x^ZyC4Ԉ6}lq%o]u DdfYyR  "Zѡ4VCWCw SylRv9{mWNK|8ӓF9"|W{Ȧ;@< '>'Ny? nOANC[/OhT4zWY]1+CW@ދNW +BRvAtŀCX ]?pb1:]F)aEOb.:]1^ ]9eYbYb)RZIW)%DW|,3BW^)eu"˿hp_:o(cVc~Xv|ruo_\QoVySz޾}{BGPfP=#k'[BW~ˣ1䓓7OWhzϿmи|ܜ] Xۦ9xo{|zg$c}OkŔ&t_N}f;̔=qvYۑ~~7!d'+ #PQü|Bym~FL |F|s13y y8~S|ÐO. _uޜzր/]6Գ8bG>ږnh_yJz]>fΖVϼwy_77 #p7A.__.o-g.ǧKٛu[G MA5-DtVΒv(A{sguJHB:vZ*A u֪rqƅ\J3Y3,lFPatlE#V_煆4QC5Jstj! 
X&Ut(Qom΄1h)EL$?'{j%{(*#i Rڎ(9ٜ\?]mJ|OSHXùڊ> pJ@uM+5RœDhY{n&3q5sƖjh4U4jYЂ׾( {_lqoM- m*F)]hmlM^E4P&]Pafcc,c1mFM#d m* QN!yD!8|z]F|&iIobjSEKG:SAd*X  $J!wysYUuP iH%5Væ tF:3:9&z7'ߓֈscO1Ǒ֑ ~F_o3dT&΄|`5 j}ITp,ֺ[7.m|i"%h,ΚFR-+wr2` -U0ss`J"s ]Vzk#Ҋ IeP4BZ%;6:sP[r=JNwh A^AkaFh^vil5 "(e*99oܕ8Yɠ-]^ [Fhcn WXLĭd XqkycwducП^=VFU,*}&*Yl(EsAEF@n7((ʡ7fXԠDtP 6CXOM24+V*{M23.=/n{ .USW%$' cEUJ=IkB\y}gu=yYsϏ/y[ UUzۺ L0B6#& B =Cw /M TxU2GH}1[Zm\U&1:#'i.Y. Ao%CJ$JET2R,zGpK`)`^(^b u6B[!q2p /U]UBNu~d}:MOU;:@vTVB]+!Hb!ҧa~xg''}>Ϳ|eEUv!ȕ'X1Xk3tdD4Hcc.u%6h /fsP#,z& +28Œbk ) 1 E=`Y+5Qv5+ڱ!," !PVvܬe$+oVQȡ `8#J&^l#0  tf% Ix2$?< B#jw7ygjތ,*ga1=TV( Aw8+B A1-'jjZ e"~B=`5ڳΚ&ѠTf5áRk4KoMu^I"`!->萄M٤$E}>f՜% C8˾L x+{  2is$;E`0u gllfѳq5eoPYf1h\ _k1i3jFq5R 2j31frtp)#g]٤#väDy ֵؓJC6G=T3ʍڛ/`gr5w@TP=`C(uAAJ@25di3 A)P͛vƾ:B?[bDuN M~#g]䍡"UA  _0ˢ#ɡb$Ucb<RuN` tB)1QFǤz54 hc3(Vj|ҡƚ5AUJFm:_3Ag2vT VPXG%=ݳj?kPqab2|›k΁ >bѫ8vAi "Y'XkʛrLpy/FF̆ M F:,`=YJF$=kikJ’ #Itk woh J`Yc6\WAҘ"&b9R Y 5w8I-ggV>ZvXkÚJkȮ1z-vY[ܝ!m@X#<:qRo~~[ޛXt vZ.AnuU wggvMz}{ +c}xsz?Oknhg|ӻmk(A4gVxvQKݯ/foD:Z}t*}iuܔM!'J{\*}$Ej16YhdVA/Ȋ\eN..eTE\FMӢWJD*^\Te-y[MZ ZZlJ'u|tFئ@~<;=yE*aȃeK-zI>8(tх&z E?MRy?Oxd٦:?0i{)^pk3-aH:-yw{m zW߆;}R#/7zݓ-qeY鰙::qI&oI}*9-[Y :jJnmu6:݊ζ+sAle];/YV^ҭ:->ulM{U-9嶳Qۯ,f|2:;,vR>)?w&tiHhaҜw<6m8ΆNjr%:@yfώJ݉vfWݺvav \C-a4Ԋf~?u^;o)Xl޲1\q,4&Y!"gw˱xH8Bӎ.-э6?#ZI1CD)Bc8~CS+GDp> b -/%iB P]_%Bֆq1U%+*Gٮ&mR4>YM㔠vGmcr>nՆ8wNuӼ*ȐypZgr\$KecD*M" kRAo:Xr셷.ĠY̦5nf&Xf\b/Y97跽}D !T75]8}mȁq.C<`Sj5US}GD9ͶF$JF/K.AP>}7b(ҢTl63kۏ6X{a9KƒM"ZfYeg8mE$TƇSdY*@UU+rٮ = =n&tL2Du!t )MN(t{^*| /w]0[ݶlv1„ePj gey61Ĥ% +!St FqN[Mrm?? I4 f (HhS$Iqnb/dFf/}NU,RXHaJXRS6 Nd%phgb.;D댉(Y𲲜U#gG9k@ZFƨ1di`*(4)jClR:`t`)(h`t6#U. dy_ AXZ2>@Y'l{GqTvRw0 DC$VA]e(bccȢϧѨKº CaF=YhcL\'9#hFzݳðI>scmME ZY29  rdܦ ƻϏ@\Ȅ+14)較:xnŖ,w7;{^tyi- ǓvJi89+<>Lũyj0 cWbM>N?|0+7"UEJhnd&"f0Ƀ9/`]ujwKƿl8`x}ߖ~_=z_V&:o˽Hǽ {I͏w[-aZY/^.㚚S0um5!0 ) ~ͽr ?l0~@yx{&KA@bs# לȔSQXLY 2"KW5yQ&ctU:+̒[/0 oDP]b !3_FΎi Vǧh# |Ai^aU.~ԄZ6GMzz˘QaLfR] qPzZ֐yKeqπlЯ"M] ޠ'>R^Cp9N&b;66w=#3ML'g#z|ӄ4aԐɼ(=8kNVtH[ԠQ>&LC }6gP2(AIb`1k!'`8.2.5/#I/ 2h1 #$1Al^\d˜8AL2xMɴWYFΎ子}HLnAy|t;QZSB~%ԃ uBAa ^sW%&`UY/ڮ&eC%w*^p@IHZ:GN)ZԮ!ZrOґO+@ϧ8 vWoU\ĝxr;m̥BITn8&%ko(tl=0TQ<ň /Cr"m(i6Ǔv)?06zG?>Q#/ 097kVX9ri|(!$ik7.C$) B4<@HO\Ay@aܬfgB;a19GiD޵6r$_!e))/7~X{bf`OˮaU4d,7XHw* t,NVEȌ<\Hdݫ>YhG(O9d'EB.ezؔ|юI{61KtbW.=S59mn}M) Ui $MP;Bwi (\QXНSýKY.NAf!hh*̝ͬ'ɄOWIolsLC#MRQk"Ϗ3  Z[.8{$Whߺ2ZԵ%@+XQ X8%VR Vͳ=(sI#\H 8GìI VRQhn.g*L.PL xjZ  GU:6:x^\~\]}b~JBw+z_7%e MCUoug`e!' SF3]\#7trMXү9]{4l68Skґk*\ TRCZT$(/5E.7m)hoz D)EW$ Q/uR"H (o*- <&EzWLLNWq|\mzpBY,&YH:oJBkGDrAх9jR4>۞=8ί۾Yrrzo_˝ WQ4@ BSGDe9m?1cN#Ж+3# Cd\SP;.T$*c{iJǠ57^ę9˵6H8X-.3@ΆdS̚|dO6;ۛm@?-0 Oȴdd,yh%p_ljST&(7VD.x"xz:H=X(1ep c!iA;+wQϼv&9OHAcu>@n%gq{ah#Dӽ#In!hn6Vˈ{XTn6LޘA_ebØq@gh#` k qEϛ xfRJqEHO#<FL,,M #)¢NG3hٕ?-;|KJ}ewy rxNcG~ ԭLJA?6~eEf`vm~:lZj]`1KeT>doS?w+{Mn=~?B~;/O$n+XGۺ]ɡ.гí;tޡrf.RO~m:oBHNJV4XW猜iع*wr粍' Kc+ M`4F0Fd`h¸>Yq2uG v8jHe*4q0lnOI /e\$[{W*Mi4o1q soRdl9|@O8|:8ÔzLz՝γAyϳؔ>Ioc2ͪz6d#U2Fkzy6{1d؇mJ} `&ٕ?3*\] E5m._s `4~\̐5cPunQᱱ|MoV-0Jom<\nX6qԻ#:V/ՙcWgHiWY+T #dy:#*&%3[¥ϴ5*K-哆eחTX W@ycB0mVd ݑ^-]1O6-Gj.wd~a㹪 Y-ԉ7\HAQlzX&~UU+pɃ0]x/j8&-R-,-"T'p™"1CSQwǤ_ Fj4V]jH0:z.|BI <XC#w&@yLC,BhSDNhq@.k1s Cuu nK7Ezl]{wW~6Wo}0M % } ~]P‚7`Z,q$忝āE tl*VsCgk,Jx%-d5.:nm>NAy 6z%&D5;.e 빱1$SIIψknfX90%ջ^[e&m_浱K[;^mf4>{eWr9M;XC'냵T3i5VGGrl)8.{@267Al zU+[ˆ/oدN[wku+[ 8vօ<-)9{Z%e{Lnwm!q$28 )`:)@WIX4[XL/]9(,/``$79%IP!dJ)űK ɂs $WJ$LZ{-v4bW፧:c\k\ "8Otd yyѭ68xܠ60ǭM?~B?޾Pa9vW}oORml̜E'W/jJT!`[SkK"DTӋ!JEWX(rX]uLf15FQk}RG.UB (,7*yE]dE H18:"L P 2p! 
20 tZY2CK blD1ܢ1)p}x4Swa ٬be磋/Rb~)ظ!ǁPqWb!)+묈@6Ƚp/q)*JHB4(ΪȤhy44ƅRhc"1M\m]9C"ecWS69k5I嘬ٽ Ξf 3J̎~8;5z&70 1Kn[):Vhjt5 p4-tt ȭf∮Ytvi>v^]̮Qo?9Og始пpv{[ڛ[+=l~8H~ixVh8a8Xv;х^;Ls_6\/a֟?B@|^苑[_&6QBͼlmhw H XsT]$?׹HC.CRv]βp ;Bv}պB IN0Ym͎qm$ !57:E۔}:a뙤p :3dy Y9]sFQ"en3` К[+L[n'"J%^c-(nγ6 $EWWжt(5;J腙tnZCWW=}:]!ʅCW]]]I2a Wt( *&f fdk tp]!Zx Qjҕ\r"B=tph ]!ZcNW++6YjG5K>t(BOtiI{;n1E84Op,ѳp޿o@'BȷR0殼-Sc&WϽ(_>UIe ^Jy_~+5a~/<\h<%# ?'c_=56`u0ZCW׈UFˈj:]!J:Bb.ZDWX5-tR›NWBGWWHWEt[DWWUF :%k+0@lU]e/V]!\h ]!sNW%#++aiڕĽB$]0xt%ipg j~00 s#%_4oJي*,PJcZVV{7x)5 Ţ [Z4}]!}Rˍ얻l\|nΏg,=i.Nzԗ.7t~{֑7o~M5 ]pJB^ B$hN%p}WߙR -%WߛJe!Cma )=RIA*cL$Oy"w?1LcxPO1w;[ӓozogı0n~wX[j+#V73fl{/i+ԧ\SC=I! ,-M%SBOM!Ev3`բu0+ZÅh4 LUi{r.oMRO9Eutjx|dc8O8_Hʦ>U1{f y ݛ;Y9$'9NxoOvUTnW3H r8ZU҉=\=CRjw`U!W]BWDejWR+"3pU=*Q>v+f8!"=۝A Q J|pYJdؚw5_m|W\S kv%y DeHnk<^9WTQFsN8G Yѕ^ "dh T+ܗTQ;8rޱz R8*햭ܯ+!*݁+ȥWZ ^•BCp%4Yfgȕ~gl;\*s+c}&Ƨ:a4.bI5^˃EuaqC^ *R}"1'DJ̪` +YοWx68C rnJ6&||9Woo_4qQ>uMQEa3 @% R2Jɧ䳵hyg@JkCR*G$<$xS6$>_$p#U94` ݜp/R@CY BW=v)LpVqަj` 87ʿ'^?_\ yJyk+,1hK_MW--`ycƒ1/ Bn>?~˯w). /JS/Dz܌U}?C+)bZl3:Z>.!\V땦zY*l2˲ .,ȃ3Nif\*CީĘ9R d wFPhi,W_Aň2Jxyu G 9b6Z0Z8B2\ q(%.zFF J׉Y&EK3?55KQ0tQ}4 ѸA66^;t0 K& ͫ ,YylE;78Z_z(/vmЅTT7Zү[^*MM}ԊNOy{ɝ͗M^ EqЇYSi0  rӖeNlWbQ21EYpH)^s[6ܟOzVHrD'AFYZ_ Y;\vQez9kYW l;GPХuTҐ㭋OmfR:`t`)(I8ˌd0QUM&WOP_X}v,5 9{.J&`|SFh3Zlv+b~qojfj~Byp<wȅ9|NSk5{3k7M^o]QYdbʃϜAZ+ eVi& .K\VN&@쁣dzDبwIy q2`&\"86Ŭc<;l$/@ta x-3zddj2dI=wI۬SI*PY 6!mQDdS-+)E҄!?%Uڎaz̮+=ʮv￿?Xrz#?05~@J0&3F9_0?ujkԋ^7<\tERھt(̛8%?:LǵVCF#o,BSrKƥIzS= =|-?:Gf.qҹ`ҩJ+ 槔j*zGgdɹd:&8^RJVRluXoz>5>c z[P6y$Ș|@I+&c0e/6hCc EkHZKbpE$[ZbYRm[mUzM)ʟFcw[<*P¥A o!KW.9.Ik34+-Ӄ 17V*d8/ۆ Aa-9V䖫#䄑F`9- ʈ5p+{>E/uIYy1̨C9 MkE}9*)=fdyĿ ?jiZG (אGoТBɹ0H00d3$6 X0E4{EjW6Cp%&cwQGymlId;'^;ܕ2;L|NpTΦE;#y:5/?~7y)+WQGy&knz#LGxR8Dm:G3tSB8]~;An߷FWXoww~_p|7}S~:;y >9_h&U$4?>+@ڶ57Z:F׎9لӯis+>D{ZӫAzÐ_uDIq·=Z#I}? ZbAWlRho܁YwSUXP5鏛rzw ? CfTRQ>-/Of;9;.?w^ku:д8zy,ƽƿrkKq3z:wK *fR_^jsx.*tffJ׋NFo;^fwoKv1OaՂ?6b} ?W+QAk\[\+e͈B<$W,͊>w~<:pDcvE`—qG0Oܐ"%j$ iza7W4'>ݍBVoof,xSvu[Wm7zu4,gO.! RɏpVsE[@cS^#sl ,R;l3C$?5 f:6'-uY! "FFlmUNmk2'ɩdhn daVsLNP^s&T׿X7ϮGg^%d łRKfMP#X>1J8ޙ6<5ktY3d\ϖ<JoE<<%i$22ל6RQXLY 2"ɷjC %L(@, vR F%08%f2-ېo4"ߵ: ?]n3ZŌî[QBwе Ua:9mּh^䌎p2^ Um`>ݫڊjx\/?=kL$cܻ._oz cr5GWvaT֠5doT>"{ xst1xp@T\x ry"@[+.? >9kv֟@G$ .vV>gșҏ\S=~޳Bҫ=E;͟*ơ x\'1!-eS>J`0a9`JPju"nC+. ;p".9Uzz"eAAME Fgȡ-1EeLb$W!m2'N ^GSjm5{!{lSy]2ؖO= >I/wyIG'< kZ%x"Y-/ D|jW|fɡz Eb"I8gu#*h\T\DEU^<^l N,5;Շ>#ye n'#Hl1%AaӨAp]㹢%WU[~CHއߢB~1 ېmo?CS*'>$n8*ʣ _s(N\N|뿼NZǕᤎ{yΎE-̔M]z.z:8xi_&?  CQhk֭ЦrӍ!3;6G=ܩjhq}25M;\V^jo;`p6 FsQGGˣ^5S>A0o7SA$._7~R nHfdಒF ԕW Gؠ.ntܶDxQ>ֺEx;P M\`PbЌ2ˌf2l1K FeAr%D<=(m {֏pqYΥxDPt<x'P!V&cMf@2ZCLl29EDN\z.b Hc@3C}(.$Τ ׈Q1LFzVxg"Auo`1ձ)P_m\ vwkfΎtqm-,m?sզ8e힫Ն #& d%@ U V.H 3zBwHq (; YE;{U' ʃwf!hh*ͬ+"I2DqŜEN"&)Ĩ5D`A kJG~MN[+J^k\̹x%Zq4Z'T`f -9XG%Κpnuzv) =iJ!|It0kD+n(47,3  4;K`vR挐W3AlZdJ;Ou>Ӫ;4S~D\ D3ɰlʧ'Q R2w\c4eO5Phkiakנ9zgP<;QFw^J|ķ~|5lZYBH࡚SsUb P%Z3+^]8ޱluu©s<3=NnzzskçTcsYC98,Wur)\ІH_ޙ:'VpTȬn#{i1ᥤպvmM=EC? 
^^7=Ao?:cm|ȡozwσ`C E}]UePC|l0¬e H,< gO|B-l5k+ ՙ Teѵw_:Eʲъ %TJ*j/%$u2818W(4G}IϠ|ʒ$,j)H6qBy#U"⎨6 $WDe'^FcD>כR8GGm|m>,/:z0j $@ (tJI9 &* 5O,&i4B4CΠ"㚂Rq"TQq逸+xg6ǜXA\vE+.N )x5CLjgcw{uM/A"Z57+jEȈ, I_q"z-t+"B@xJ)hr:ճQtBc&3"AFC҂Ի(Dܜ|D B4ORg=n|Q;Bv2&q3"c3Ah|NÌPOstU~2}.&b !҆h|(~_}2ϱIKx)N{g&ֿz;[W+14Åz:QZPWzW _;h^>BOVANK6j45sN٩'[u=\nQIM˅U֯0FdCw-)%ПQCP8]g_m]=]ɍ7˂LRȅ>1rMINVF߲1ʺ)"s74m62D󲣲TQ&bFx}9MZmqlLC{|p3Ftsdmq"Qc-RO ;\ns&YjզEjjr7<齣>¼3l[MykO!ʦi~r)ܰsbPB tC=D9*Sڐ`|A>m8HRԪmR$vCl낷 )K$])O'c0,&Ξ֦ D>} A㽆̎U~xmwn`<{kc)o[M:֐3fu ~RnN^!6U69O>eo[W4clxq Y -χp\ͼci_ZM;:6  nG]u}PcjΆ?keb+ϵ~}~D_:L6YIF]k/qĽ!*I*LT5 l ?g1֗%:8j|y)VJZ7z(7\UZWUY$9i[$yƪ5Byť/_X`s*E `<$2bFXpp xCy^\tn :d?,mgœɄpyA,DBrrl,DY\&.(K ,DYJ;wB$hJh>JI*2V Em+'AQq:k͉ mxU 9!`f<"y2CR2pcK+Z+( jшY9MLM։}S7R  NN1оhsg24U -V?(&kDp-g0E#B:ptfp $ƢH:\H ,H1-sjoPub]i"P'-K96+3W+Pkk=7;5$ hʝ՜|@H, LIƼ#Q/aum1hFÆjhMNW]+]rξ9}#P`Veq.*K+IwUJ峪Ksu9gYܓnRW(-0vu֩w" RWY`s9 p1@YZ%ڮ [lޏBWKBWF .]PZZRN]!u{zH ^&9_{ol?{F_v7Q|0pInvٻea&0"Hd'9~Òmɲz8iq,5]$_UcFPX_D x5O~ iےB FKwh0UJiP]z zso\v%t^7٣\2N|S|G_XD3^]ѧÔ& [b۲Vt}Vu8Y'T>]6,˹-eIhR)*%*VxZdDM,s*'*q sUPѢ}|Iv eShBcXyj4OVh4ORFOhh6]znR+"Xx65G\j9vqUV\};Jy\eu@ĤiG-bMI\x6I8(>V9W\޽\u{牋lI *zcUEgyWl8Tǝl1X'8.[T]ًD\ZE4iN:s^g'1{$bt٘dxgF]ԊL{Qđ=Lvb栊`{KjσlƒwYZ_\:4^7o?2@]N^kO_o=f+<4jM}^}|J30D-}9 '(`N>yNsN& a|%=#Fr?V5Qkط%6[ ^jֶ[/O+' QȈ.Չ_ }9r~IY r6L"dçw[ew~ VBfm3zLm6Sofm3z ~'??AB1Nl٬Oj6dy2R_6Ek\(=(ofH(gJ+e:O21ўΙ+ dn1t_걧nOFY w6oe]_ާw z!uzr ߹vLgB,f_t:w$ײ[Ilҏe?syi Zse4u[r;XG|Nj+L";go )H%|B43G+07P*Ņʣt>E %u >Xfw} JݾѬ-,[4ٛsqLkzk }f~Ggp;h(?l[u e)ڜ6`M@!װ\ cdm]K=t!G$M3m@)SBOӕ$w& fEY'er< xiR)ie6XRmCp qYr#9x'SA t1rLC1 ~$χ_ex*t~X_}y5tՖ7_FK _FCU}<9fzr 3ޝɵ\Iڨ uՕ/@huץK;zAd?-wg_ZW֫tfճYwH-+jYMunWM~o?cZ`կ+nǼq_؞xbఇѹQ/fy5ß6]pԦZl<t5(sS70qJ Ǜϸ=N-ix! !wB@^AFS9Dƙd19mE}pRGZ%(2Dba]T2.:c)f',Yl6)'-YTiV(cx 74q@CzHO:&,DxEh6EΚAW3izL&V;\.Yt)7t,α=M:v[-S$![NGf9xȖ :kH ;RNY(:ct]֧}2%Txuvaї>ܾi%N&9tĂ[Zkc93%&/ΆۍFGB]Ą->d *XHFnŖ% Rk[&~kuͬt݉?4ֶ7ax3yk^~N:`|Z%3^'&l;'YaUBw tM Lg;ńs_`{;UvEڽ:_`}vo߼z7_v㛷\W}uo?f:!e$""-^ 6M=7}ƣ-orԓ _繺!/y# =F]ɸzI?~/S}m,Vԏu%Qg\;NBd60:8aԖU׽8#JPT6<4$XlIT/_|UqgR\2}YxD7{?'q~R~BCލͼoǟHn6M& ލ^V%F ZFǁKqq݊tO_^#븁6s >crrCsR+ozA.| >%[1痻ۏ3вxJb7\OYz7[<`0OYϥ-]ߥwTWʢpso+VY1Zg!V7i'4Bs緗/߽1:_`+ŗoC))")wkmX9g gc$q6vSl}<$%Y S53(7+r5fOLUU_&yߎxL=uU"|x'jox1[ÌS ~яxlt4(UpX2L^1KH-,gfuHJh[%Zf#{kd,L#LeYwL g-6xS##Z0[%eX ^GJ:<3Q-HLmcy4hZcV1A$fxL.Zok/VRV8*8Q<*e>hzA#TDZIl#g98ኄR",$EN 0lM3=; <^ݴB<v=U[ú!Vz~ٴًo>&7ahcԋ1٧`*Bp¬UI+7)F.x]>\&X*7Y,/.2eA@ ֌ɠ8b1R:8gZ;x7m.koާho+zoT.6VatU5ԯjJ`) 9VX t#U)#FJΡGPp( )S[ K:Ev7HM\RlŠ 2'BܫNɥdfZfO } {yE,ΔHbJ9ZL=ʭ}L_ֶAؖ/9=t'qE./n@ AʘJXy*'Vy(:Q ڲkIINs,i]:9hH$ьKn&&ك+DTS0KIEYDh>6>z}bFr`%ng PS)Vwe bz 6uAӳ3dQ{B4!9䲉cH`01szpH hr`tEEwJ2Tid,&nd,gb &IcAp)m~6b;rv7 4p8^8bG j1!jJiL3ɕSH^&9`Ya2WYZZc@gEȕJx%"OQg툱H~U9L68ۏGq<.]Q0d=jJ 1nq X2щRJGT `$I2qQf+}&\LxX;ǂ+"ˆ{DqC8s'Y.t)d:\ZSW"AJ3Vm/T,0##BJh+EbٹlY.K[YNkJy^s,S!D=KdׄfaIHg9vDsl{\cm-oszF\x.F|]fNXeATd l ڨ,Jp+1I0/=ÚmϽw6g[77Y3rp=V3oO-6OX^c n EwCi GcM:OZ0I"Al듞ꘘo4[A`z]hr4?X aݧV n/h0v>TG rl<^7}y4mCAU]J>i5ؑ=\X<"(KsmP+Y6Skڐk)MW2|ͨZ\"0,`:%iIY $eGǂE^`p-Ͻ;zXP[ ʳ~yJ2 `R&X%84QQ[IPǞF4tL<_#cx7{ f竚V)sU[o}$'|qI$F'irkhI;^t?3RggA!r=0 7Th<:@55 6)m9s0q#FQ$BBL"!F9Ĕt 杧bY C1qvG#zl`ix =%.wOUAݭh%U{-lZT̾q+HJΒS+똥G0P<'^S%S`ma:RS xeg#B`kl"Eu`D 'ׂ6 'JYa ~Fhc(p>`0o'Ǔ &-/ 醾oSPy~K:Cs1^YM;4Ps,ckNn7u_*2ȦK g%#2|88:eVb"ݪ0j0 _Aada3Uq6q%ҥ nMhH^RM/7t q/+2)VFz[ȵ7u:Y1 ~FR|,0,8j5:[y.D*6{(6R6H-?mG澣756> jn1B. 
RaxWV./LťÂU|1/3Y-lAXh=pL;We[)fV[&>wFkpf_ Ֆ3";:Evpޡhχm7҅LW7~mx`ZEԶڭ;Am{h֨imbaųu+ QbùPﺡ>6{بjأ&gA#S6d ,pZejI4T0SmI`~c9Y(@DóIDT ((xT)qxژt1dt Lj`,ͯ'R-ntB WKyfAYGrZ<%4 ,^q U4A=&(dRG_d$M`E\઎a'W(.;Bi /+R \YW'W('W .T Go]p Vz~5NH37~p*y=A5q-(=6lʛ.&t sPޏOFzp.SN( D` _UN*.n먷Kp&y)FReec%3%ZW>Yu=R#D(^}|RcooMi\I8y^z2Le|`%dK32!LJRsBVw8n܇WzAb6!a~f6O=<`d6hõEv[ыM:}ϿutUSA>!(9ŵ' .,)%սYNK0͟*.E,Ptdgaa6b4% UѢK]ѥ.OPŏR>GNK#%J )UL8,iB*`B"r,Mej) )E/(18-`)f>ΔH]rgW3U878{iҺ&3qWlIdMnw"~1[ n᎜&ϼNY.evG*ɼ5_Hw?rh M%JWMC  ,ձuմ.U_Ƨ$K~]v}}|WZnׇxAtJ`z踍OE?0aʨ15fj.hD %k(fwmm%Gy˞Q_/5@fٝ}L 议Ӕ-ɷo5)rHY(%Q1`A"ySU}uyϧvko5le7U;V݁[a {s]RF\?9hdFoJtכdg85+/1zE> Q7GնhMp6=}μ9[Iʉ{[*liF.*KdK=ޓp zxt+eY?PHaˆ)~~:%|iXleJp8.qfSKz` uR8dqƙ,+}Gfs y+/<}~X[>7i/9dQb @B6cbӵj#Yl;~֙lsD ׵ [WVB)`Z^>dzV2g 9-eM(<(dis'׳ӏ--ʚ}qwo[[ܺ&m| frOߵ}-b\[W*^Lweu_w;#{"4`PO^9KgG'o/i*ܫ&F;@Ry)AU= 8{$"Ϗyu+BPIБ&1m]S4.s)qZ@ ծ)T9mu,&.:C P=#Hg1X& MAڻm7qnH]B_AK/}x3nV4\pEp>O}0`Ч}^gz=38SAC”Z TкZ}UMEZ}ه:^sq)ҚQ!+t2b*[F_e cc\}b\[+tNn:=;Bb;"AZ,5^O48p֣n%{3{;@8dq0Q4:`UTƺGcUnTYz6gĠUuy&a)e j kegݰ&EHѥ`!ƺ0HJ&Ei(.Ank[UTQ]yMk׳Y;}-"o;ʝG0 bFDxU{(Hwwچm)wТ9:7$怐G:B(TumL\!1)2A@6a8Slb?w=lx6ِ Ch8l8GֿM1B̧ԇZ M&iߤѤ\26vN.ç _}ap15GsGLH)@!sLC :V,'_)cx/j{|aghl-`P(ڊFZ1ZQ,Qٽ}y@0Oo6O]6{9lC[ =}||S(}q$sOD漂u`m.wG\XCT^';dk2A~cteG#8zbpʤv58ho31THݢ8)Utb]PR pUEx/M [!NIU &Љz˻s;8( P4ЮL\hc9f圯HoӜ׿n2^VsUkfQۜU .W*ij2ŴeUNV~s)Xh]}%jz_?<@,|;'7ExP* `ٻ&2pZ\$pp5LFF9p)RRadjmv}dHp(fb(#ԌP%HSCjy o'dvrbOCMTVQ4FQ=D6#:Y-ӑDWB/D/@ߝ+PX@Ej#+;=q2!cϮ lz\vCR>:JGѓuu]_`y;/PRO~MLg W=UuMD.9:,^=lXRAkU-j.`>ZJӴ>R6zZQ`()KtMZ;HUToM=c?vӌ}} n xߐg>ˬo!M'_`53Q Zgk 6Q >jX9kfRT^mmhd/#"k۴up;նGW(u9ĹcYY⵻iǾ^tf#IQC29&# d!`DA?Ձh¹(M rpWXrt! 2 ,,5yF$(>J^HTĉCJa7qnsR܈lPx%x@C|.:$B*o!>9V\\{!{'B"h"g+r; {oڅ* APp$xfΩ#[o :Y0P6Su ˅SHDrL)ZֱehRUYwgDȳhx{vd=M_ߣ#Y%ۋX]B] c!?H$SsD1 sb9 EJE̗+ZcK<u)[&yc^=[\s{]~Í$Wo+a,soqWڌo3fsi3٤ L6)Qm&6x rJ2,!4Iim:€(UCV*mUښNl[{żj@)#L8jST裓˃5ڂǔF(JWDTmWBc<Ce3*ÀZ ASnPS(>{H֗['$4AOh5+߼vmk|>O۲MgvR%Z2lj *:]BȔÐLh%U]s(bTJrY2@YhgG+bhxC˕!هmGHZ\ MۊkjgIdƣҢȬ#>A MPYOdb!z٪ugG;o_&EÖv1 ֗\B\#Q GS`o{bq4&z?5r$v*i$.D>6Dm) (qU¥B A~yZݠ]LG4v&*PȭFjE\@NF['emHft]9_{(tb }۹;U#`*FVC*LjE'cmk=PBu%bu&/""?㡥ȃ>v@Td_.`'>sP+DU-nPS$kkUͣyA5!ZG+|Ama4^& Ce3(Z-9ts 2 cnA2;e柝P* 0Re3P}4cJ7PP$\01Q8@T \*ZgیXZ+OK,L n#(AAnFbY8a,Tw;nXtbFіDz9,,:w hY,(wZ-UrIP!%eC*T!:U%ګ"fst]^X]Y#WW<ڶ!ل `ާ c(0!abD:n +B\DqyƋeYt4 cE(ETuEaH'k "bdkk)QF?hZv1$J xuFρPZSu{okTLd\JQcGCkH{5]V&ON 0du!lQ?aCByV^%x}8sr:,fu|1)<;_<6}߽5o09Q߿fG%9?=9our^;;nQ ~399颰_|.}e':y&/?ş_~zÏ?&|/^.;p$Om"xwov/__N-[FQy}Kr/: _'}Y9˸<\٢<+6b'[rYGDl17ʹ ]~RЖTE]Ǵƥoi*r4줱pOW.]moIr+"v_ ,pٻ= HKOW0EHJ俧j8(Hj$NcD<]tUwu|ZcRbDK۽%#Vox<qL_!}]ڻyA[OBwW,>㹵*@eVw~4z{'FYgzc7X޽hz/eSRUKE X*˛y|cx֋{ }_VpM԰Ox1[ÌN Qx'F;0O- ^rZRYꐔ6<7(K@,F[d,U^eYuۧVFy<ة-̑$.0.2cVҙZ90Omk2rlTσF' :OҒ}\NhbG&_Yق¡U)I%ʕIW)K냶:~P~PMh. C;" JJ Ib 6SEhC&};ۨYn7MTfo*Yk_\qae@ ֌ɠ8b1eS"QyN`~dX"YtIn&*Z>`tRZGW*_옱AC}a$#Nj5v1-)ÌeRtX:oe oy>Yl#,J,Cz ,%5xYA1+MB: ϱͱiZ8IoVAZl 7ACw1.򳅃t3a1BQͧף¦ktⱲZ*ÜR8ٻlywLo݇>J 痃q5nၳnˊvO<[ѐ݇06v!@tCi GcM:OZ0)x[;Gـ'=11ߪ{*zjh2 r(ք/_.x0r>~vk= ? 1mCǟǓ/-~}eoy?. 5=rKc _,K1_Fߩk+ZSEڰ֊NKiRIb̾vvVs*t3EѪEnO0׭rٺx?E2={\F-Xz2c#OY AFV! mR[zy@], /nuM׺PRKHXJd9gWt2o<. eIA|/REzQ'1 z,aa}EZ5a6׾X7})_׷4H;Y hQj}&笽6i) VZHJgCRp#,w/KqO C qCs TS`hk13g]7P]G[ UQItt`ڨ5gT#R 2BPXa`33[N·53&.w m(t58۷#oO4j|= mØf9DCV%XgIucڣ9*C0d ĥ'g2_ƞzRO<ߝ-:f&X$\FP xH9!)()|R]0&X|DIB<뙧Au݅wY3L:O>]tAkN0:prO\gf}';3z_2gw?PPz^JQۖ? b%}  ڜ}6 ;qbvsZ'M~SQ1X. =q<)l&hO!_[;~Otv]OafxYm3(i`%էiIUd* }S+) BVCWR 2uBtЕq?XG7}0s'_WF~xɛqLD}QH/DB"~V?aT7jwoo_ k+Bk&@:2?fkVl)K;D#hQ/ sF:Z%*r%VI`^'"eJ]z4F‹DV^d["/%v/dH?ȯ;&jdWBL*ι. 
\ _E+nXsudf?ע>~~(m,EzС\Y +,m1tp)]+5Yt MR]`H1tp.b]+D{:EB0i +̅)f](]!J!z:ABk# +lT95v^Btut%%BDWXr.S׮&=] ])śtx)]!`^]!\aK+D+DS+-x\CWΌQ ]!ZiNWR鞮N }VB Ѿr)ҕUmO^JWXP-]+D)eOWHWJ߮nF +eb耺Յo\@ o\.twb$UV6V1SYuEyNQ }tF((aw'l p~pّ=kC+;ʮm=Oݯ=U BCWR ЪcEe0Yc_Zn#`)2E_heMWk7MǏPO dQvEu)Wg{KBOǥIg[&͝FB|Mݨ'SUAUJH*}U7mU3AT߇ xܵ -og;pf1fQ%oGSzUŦɺzs̈́[nvp 3e"5w l^=ƆUuĚ?5M,;G@Eʧ;}@l ämy#C#uJCٰkOO(hR!TR2玩B} ޅKD#N{'l~%B!}tBb@QDȢ}V?"E!i/NzU2d]}P'Ss._7GMUEzr7NTRZ:'ZIIbE1\2'%d >ܷJ0e=,QZdjڤТ-)$8:BOR|0!4fmG;傪a$ UzAb2 Yk\mF/(5EJ=i_,>YCD ɞ5))iTw"R5VzV ZBw$(x,;vDiF*H`"2 0JP8x>m;#+J$W{HTb0P <Dփ|IΈ A9JIV :*2MʴNǍ5s4m*A}Y%@ Vr<o 7t`u~R$ 8Հh%;+Qua&k) H7ˇ`Fﯖ}қ;.O&!z*`Yed-&G]Jrc^Ls@ Ѩ/сwK|Czv@ @L)ڽXRr.a3m>%0ZuIBv `赈 (-f1c &4˽e;V ~-PEȚB\Ƣ(ΤE`&#EP]  VQ;#2Ϊ ת ¤0,,H1#dlDh! c:QbզX5{-6?(c%NYIƒ5̬$޲[PRS[ u/G“HX6ئ{tВ0LƐZjm0&Ǎ+7v%.G>KaM{1_눙$K> @znNNi`$ti#ll% f o3M(٢b1jT%'ՓFk7vSN _φHIaFI*ျ!)A/[tCۚbCxfە闶K9Ƹz9xHNk֠ҟ(yb=AUD@p@0W˃QZkVzJ}[@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X b@S)F DƃQ sWE*$y`%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V*h;$%RRp0J %^ zJ t)%X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@zJ I LSu0J :jVjJJ V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%Q}r^ɨ{jIVj`s_Z I1[cۛB M>"mCWt:Ԩ[v/|W"荋 ۉblԟܛ]dVswPō63[sV߅o}듑S tdzrx@NޓɩcrԑVMN\ra3G;3'Ynxw_%noU:$HX5 DJmO뷻f1tPʜC4Q QF t})!wM ʯ3RCbzp0VIacOkQaV~`$FUvܼύuޜ!BnЧw|<cQ~W)*yE8'xUYzЙaQQ9}˟:H;ϯa>.{}'xn<ao $s2; L#v\RZmlwPۋ^<1B(}7";7ZjRPM,Voekj0Z!zkJEzP0rYigͰ^/sG?ӱ7''[i5zrJ:*}d}{ 췿9 tN4ғ昚&xx)3#h2qo|ӊhwM)67.;wGQ @Of=yI~ߑ/2BtGHƞ}ɘ←sQjOpokµ67"N}yBMsRDÝ)]Kf]zw|v7ޟeZ*7ט!/|wr^7c~$sL~O SC,w5mH_1v .A@Yr&=~dٖӱʋMm'rH 3`!~6WcsGJq:C{IEpQ(st~FV@\v{@ 3x痯ڜ8 : |S`ےw5J1Dt ԪNO]wC I۷ ]v8UY Fv#j0 cJই=X*͇}?]—^{"he??M渞SRedf|9ݓW-:g+ a2hϵp\9!s[y1zOEBERʖ;VAGm\)Ź&pzMhe]NKڒ iNgjGӎijgUuWm_fv3w;_\e6z0{l~Ѝax^l]7ΰa>🫏?%[CU{w^R 1NkuŊ$vLkߢHByc;aG@Cy+:*죡rj8¬$8faM"43a7: =18B&Y46chy., 1@Uk(4w6D WGܪ, ,fx*&A7l:*-ѥ!#VTT< N${P4ZAS8!U#_ZذTΝ`}w_Ԙf*ka /2d֙sD18ɒqY!A$aYi72>y&c0eOZƸjS>WwDڵ"%,My(Kc$[.LLh`8/avszdʡV :a[*RwsK! l7}e(bܷ8݋]0Ó]?( ew 3,sz(Or2G%GЌPO%jkE (kȣ7hуg!\%`8`!g:ImA`hNTWZ^.‚߷!)h] lt4tA FyF Ymؒ>T? HRGк0F黝x_wy`|]y۳ :'/\~:.+QxF.&rgn>7}@WIgHL,T\8?w^_Fۨv4jg0|?Wo{_+Ϳ~o.ܛͫ~C'Ywq F?>ށ_}mEҩv ߥ_Ss+>7",u\+@R>7` rxIq.#WvQ:K$]`z}R] l8mnri6:wwjQn4X|RFQýZsϩ6֗}p aXn?8u}Ez/iq4[ZXNwu!kqY Vy4ŝ6s[r/YK .uA.͊C|ڱ%UBW{0J[MI:Ǘ1}z_|-'X}sc{w7qcE;*/ܳʚ: *;uz#g??9 W}G-v|,igA 9 RIKLawxw׻\twx]N7K^{\eöUg:-R{Mbՙx˽M?\\$Q!DX'産Wq>N OX@ߏћ Ïϸqϸ0uD\֪ .Nvљ/<021[LNMY%NIKC(EZ䐄sFNFU8_>}ww_'F5`;0VRom (96uyIvi鞎vоFh'e1B]SLzLEw(닡>^ +D BAP15xIA#:#A8cqt-p!D%ϕ}sa#! ɤa"ΪN*N*JDQҁ"+DgYr4K̅7"(I.1̅!m4!۵y!,}~9\Dv;Q.u3t~+&̻jV#?i L>ݭ cr5GWNaT֠57y\*7*2T`\b=_Rq5@2n$S,e`W#gY_yp쳕#v5(+NY.:{tfOݏ\œfOݓZ͟${TZTSIBwΞڽ{ʞz{V#l>:Æg6I~rHrj(lX` &!GLRA_+)V<6&XZH t<ȴK8m-A%ϑȠ lY81EeLb\)"\I/(Le9;UdJ$o?S!G֔_b8N:b=N(h^>,kdDQ1E۴! J:%Ik&V HBb9M)ZNX킊o'm6?q(L@T tc!ŽT+N<9k&g[Z*x` ]>4Y4L+o?<ň /iB&dY $d% 8BLYrCjT#g>lE΢E#VC5I#4}$vy!dIC)#@dYgd ʆ+5"Fe8DРV"#EPr%|eXR=CZHzquLj\r^T:œ^ &d01B$9GKGh sIĬEeAC/ NC]Y#eT3hȝ21݆QtAۃNOM(G?FJ0.F#>'kRи`V!M1޺Fy2J1%a^O!s&9GWAQAy0.4 sE{F$owRKFE/"O|~^^ԟ\_77\ctRdxĵpd|ɀS1/0L̹R<-B\ qK`Q ̫ٺDd ~ͪqG*kK`invv}"U8tuەB*p!U*a%?AlRk0`sDLߩZ]]GtuMJm8R` hiYeg8m"Jp!xcVJ94e݌6Z0Z8B2AE8}D=c^+PyNO\v O5rvC:*ɩiɺ#Iٔv˦WlQޕc0QgZ4φy!&-Eܨ@\ 瘢S0%vJpjx|y|s3$eHD21x30:U\$v2'Ź!9UrVHa<ȝ*-Y /Z4"k:c"J,gQΎ/|X{KA' 4sErd1d0u"ɇ;@M:&W^Q}v,\.L&`+>zB){#-R6-~A- OvNUXS(,XBX.=Ϝ&4L1$Sj/Ij'q8z/x "`*&j2a2MqԌ$%mN'@kVؓmRX9>XP᫜_v"i!<<:GUˌ1`t椖TiL e4MI<U&jrtTc  :ٻ7#W}S<"/bmkYkewAݴ("ْڋY,.U-zy|>ebO P+DvEY8$t&gr68UDGA ()h / p W ECpQ MbbUy׊eKg7a).ߏq^4LZמ6AA N(KRl✅DF""XHN "2(#82#\41NS٢fٻ_hI:w5~D4 (P\:'`Ro1Ѣc0Bsr!2)(*Oz1h͍W)qf#r- 5%lyyWiïSTfȣ|Vr|*tn "ZTh[܉' h" q%VR! 
sc@q8Eႋ$7VD.x"dv:YK=:[5bDbY \yF$Ȉ<$-YIyL'N ıyꊮ0zuRGAՙg<%,xG6.&H/"D#[2bJfz7Lևb͜U!j/QTewb{I/AyW8{4>$LR**\P4e1yp9&ˣMv4?\W?=;za[7q2JFxBeΠ?UPaE cz+_Bq6K'ҟFF1η┮jyۯl_Fv}}|G2M/MF{ͭ{G#ʪ{5t}'''䐫 ?LXz24yihW<.q\gߛ2v؟MOp熆92ʴ\% *Ss5Kֻct=9^M`<y{*xdHy8 5 T>  Myu·M災𠅩^=cmC>cۙ9tg2؞x%QμR2O6h霾".F 8yG!ikZ6ʧ #b5ڿixN -f)S̉tDȘ@>ycB$(Vd ݑ^-GOErs56=].l)9?a4 R* eႤٽF{Fϔ5<9?iZ0$?E$GSýT$ $Ay,wRP&J)! i:W'yQIo,Ne2 b̮cs:A 10ZT v8j>mT7K&L +AKuBh^ВuTM9Vs^R4 5V;E!).(+!skcS ҤB.|It9R$&ETOU2.PU >lKb2Bz*hZdx :YO9Tޑ ?".wZ%ka1ؔ-ΈY(W(wC߭FMc4eOs{uj+n ϴzMӅ/NoTT6&C1MAEV'0XklNDVt ywLviRڧۺxH"W. Nk֔z.hCL$]T_p_ UUB ȻN QXnT4*TjCɕт97&+1VYGa \Adh h2Pot2΅P1 2& O5db)aJ*S݉j]"C,5ڐmZq֙ә'Й8CI3IL8!KϷ|5c'$kVNMp!/.YQ; =6H^FKn\ Ҫ%R2-JLCgyLk6GCcdmR$vc)w[ %ڮ@mrpk1Y ΎAmsnnRC]M&xlזzI穮^[n%e9E"p.]KטwzTk[ImzMȽJa2m<ոȁi{wW8y4\I+|+'/>߮ϵ:yD_:L976_ ܓ_2ڵ:^qbH҄,h+UtAh:[(|Gŏ,CD{?J/&b<};Q6^Aϫsĵ*A(A +54rgs:\*s P)1Zz[Wؚ5G]|x o{50VͶ*vX;w&'z4|hv' ],ּK\cIION׽@׭:!|rϫ׬~* /5SdfEǭڧ)(/FĄHf>ux?fWGJ+^l&Whڪ `)_;s&c\j~IoM a$$3P.&FR[DW WUFM Q2B;@lS:o =*åh9m:]e;@I]!`qꢫ UF+OWR2UvEtup'mVUFUЕ&Z6m#`sS{R"h42Jm_"]8FpukD42J]]$]+?I7tS0ņ{s8z~덆Ók~J5)'t }_XSBHࡘMƃ%ZM|tz  {TÓE(1/1A +NBakͬ/BN+zLv,,wI!cS @gWɻ}Ar~2mc^@L9~չəs\ICٴlu͡]O9 Z q{=[P26LՀrX2M+D MKzIگt 3aZCWBu~(ttutZDW8BkV6(JHZDW]i-thhBfeˡ+)5mRWiBL[*ՍWWIJ'i}n4{?cwmH_s~fgffNv_d-,_[[^[i&U]$*ݛAm}:dU4C<{r_B9\W%Aڿxսu;vOq3k~Y&XW݉2c2q$zEMݝ,FYcNҞ'f./>|%xIOs_>$2΄̘2^z^9Sߧφ=h^?L.Zz4~^l<ߛ>~+â,Eɻ׺[>؏sQ5m?=GLQ]'p dt47̾ ,~iAƕyV{5uL"CLEBXzgVcMX*b &+ Y^^ T!fXIvPJwuiE !I%i/52(B;NxMig#"K?nشdÍ+@K٤e`4ʹ OA6fƬK\|\}0ٗ~nW\׆bt}ݖ^.<%AK= B3VV.uʼ8yҾ'…r|*Z8#IWS1d1x˜.&X.=X2N-W2c1j︵‰u ؄ AjǬQFYJy>%R gJglו545 䅥M,+BI5()d ep-`^ޒPxM: ]*ʹ*CKeV8e*"fJExlH%1QMjUN˽,2gS1yD<،Ri[A" ƍ6Pcut ekuYT_-`M`:r[S|;3ړKON4S[NNׯ,N@~0c^7e Z86&v9(.3Q T{Yu#ǶR\8}Oق!D>ʔItR%#I\KF@RŨJM 0w$!y Gh9_&9f- º 5J9e9 Ly{CGg}vPp냂QϣpkC9\%|rn8֎C?Þ;G8Tڲpsqoxrp='8~od)az؞b'c4dX%?Â]D!pC!fH *Oc.*bk:LU=W̑N,|A}S SY7㼺\`EhvZ]q=Aa0*0L^o2`mw놾pYᲜjH`iu 5 oL9:+ôj!$U(orsfgP F9-ʯ#4 u߄Oye-ΚZVL5i*!Ց PJ!Uq3Z Yn=FΦw &iϘ`♋L*'(0+^tŘgܻ\Z~U}Դ\+ŢN 9i߮$(bE$^jNN#z!Uxd!R2""&ZH0<)c"/uΎaM]=)P<(B8m>N>C@OmdV'T8 r}{-R<(_a_]K(aT}7eq5ȧW2|/hͺɧ+Ф74emzC(oXW`cغSM&5狔oyn.ZjqFihl{~Rv&(Z$31ݱ܃y"Z&)w(z+ӲɷOs6 ;5n5uW[v+FcN^򌝼'kܧq@P5)|||z㝌Z/lPnT)__c<`YD1a,&w~fiV'wSj]k68۩Cqrõt.2YYrsbyi*c[=)q2A^h8˨%G=FyF(K'vooGPGb, C|'&2b0rhW .B.:}xӖI&)[`Uo:1zI]Ȫ0Ihɡ|V \Qխ])_˭-=uϿ qIH+VE>5gc5.E'8u6x(Ϙ!*H `g:HqbnXy VW;a=?mQZz4?<@V?nc e & ¼&BP5;4UTsc%bD9NxQ 9^mI^8>n5޲[0n#.GhZw/75^e,:uv?.<g1tfY$X l|kfIv*8ʝ7\ 3f4j|s\1Tk%i|:~so%>)9#%Oi |6SyT~*Ygw;=;SδA7|2MVefʌ\%7ih 5d(§Y{\ \ػWֶ yNLJ"255 GRcrS̤H".v_r 8M(rneZGEdQ0A[)$R= ҆=wmƆ϶ym#@V44k8zϑQ\#\Z"L/^.a{׮15b_k;ϣQʮjO=:bT T[=0y5-8qwS.nHx+HƂRg VZs0c=:5V贯bzVXtt^#TSkJb9B "Tn_u°ha#:=V)nŀQM&sj1v>O6kզ$"/!}vmG;c 9[`L^ ǫG«\;ǫ~\5 ]Re_FmیtL* ncR*3CjU{zP3AVX:!14"*hd'j<熃{Jt 0ѯw> :[hMP3! I,}i9O8p&=\=<ĔpZô=]J_KRĭ"q>'$P ̒Z4?2_].nP4RF2fSomFWObq(o=amP11q.|KcfgҳYxwdZӴxmnͤɍ57%I~|R`\#YSv`mo>od7u#qdNUsxP"T} 2Vb\4vqE6Je!٠LfL )&,Nh?~Ӽ(4֊ҨcQzLUĤTnPJj1`S`Hg,OhVX fm^m R6 Ty`}*hɱGGGBGb!eivSvB䌂}d#"ᷠQ "FHHQyĂrN !Xd $]d+g"F7bP޵5q#ٱqiѮrq'[ICr*,ZQMԅHs7Fw+O`)j۪ϚLXkOO?" Θ"e;-uL(1KSJ R1/Qmx7"hO3 [mt 8K*v7Ϥo4Z7WoT>;7'~f>kb>~gmnYE6iXB8B'"O u v.zح=<*B)ȳsRyENZ:'j`a"ɸdoՃ|b> BInÓ<8~͊A[+ՍKJ}~: @R4&Fm!O>B +|b$[% A"^ ]zu߀1ozyPfO# ;j}n3iWNֵ+{ Y{& )o5)bC#s.Q٨DE {& gd Đ !"isEHY,R,5m]f,x$KL`TdO9ZIb9XtPXtI %[]H(ר$$w,Խ(TEeDƨq \q96d-<58] ټNn|)l>& y(ڦMcsp.ʥt"En!}4]${3*N$ .f#|)$5+ sQBVcJFRX'O: VkV<;tҠ;(梀")bNwEE#T,Jjz-yԆtpj VcOH TIĥRRBRw:^d^-jKqۮ n^njN`7@*TTqFpoy|Hde,P`m͈0`nYW.ɖO.!U2![>;%ɴ4B+u2ygkgl Z:T֡q|V菷'S)Vﺟ,ghl J;5e0 H.uP7z#Еa{t좸)*Y!]jmHtdNإPJdȞ嚣-޽cvyV$/նX/}!b*)d(UJQ҉.`P}3t8cHkY3Eu/seʘ}Q,IZHL+-`H>KLzuf\ ɲѰ3X6RL4= |:yr4:9_ԿFOJqr1+Q? 
5r4VeV:+aoX/f4-:Rbco_q\һ/'ã8>OGo懟~}Ro?}Or} 5[pkl|խ{6zs_#ouQI}89nn@Z',m_WˌՕ'i^h{4lO(~°?-&3dzQ\^fTex pTg^q?>i̤_<^}*իgN.en$92FS3z9cz~?df|-]=0uywUO[]y>}}Cu47KHVn  :ɺ8A5W'<~y}sN"eu\oj#셭W%X-&['7{lg٨W.V !-0{W :?WOXy(h')L_kۄ{_/aIF~4_".[j6sYkOsM'[wYI-$? Օځf''i Z2<!ۧo~3<%K\CdB&)Q O>~v\&}@yf |ukh@ҐbxzqL*dn^$I Xo&2RZCAAiE,Td}yRөǹj%=ػ^|B15{%@~3'7^9$縻 #ڌtUG=/&Tq)O6>k`K&zP\Td/2jFUh{3hkpoOӳmVz}u։-b>q:nЏ7աq+˅v^i!hqÚYHYs?-Jy(%5koxES.AP4s;$(2z(M.:V0.(,y%`[uG1g5E95LN),!KUDJE$S'[S?쎶瞂Voj_uj׷ q!ٞBhkP˚ S;zb4&e0qXG2 W GhP3$54kRQ(Bm$z*ʘ!F;peCm =,)&q_;J~iy{ y$ږ*+ͷ]Ws>y6_ښ܇~:A6`׏_|ZŃoeM!J^,jf9C%(3:,)PS{\*Z*: BAN`+YiҀ/LRm3ckpf|ָ/tȁ  ׈곝WxK6tA\5/Nhx[fDZ>91&"]U>,Ƣ 3*:\צSubD82G>b*2ׄ[s7Er%Ʈ0bkc_F-3q`zsB+dXժV%CT>Uh:4ҍ7hV\M8Вe#J=je!9BJE ;|ԁyq,Ks}yZExq[=+dT6lY˚LAx(K*#0CW$?ޱ/LJ@a M[*}O1^9;; 0y" !VN~Z5*vBzZV C =Ԫط3ULbJ*ZI]rcpJ9D+zDW )p*Zw(iT:$ѕ&) ]UH}+F+U]]Wj+L ܕµm+F)]"]B4Gtŀн W*Zy( tut>yW0G "/tUju(:Hr^V]UbZu*ʍH>yW0bou/tU u(nL>N=U$otv6'UG==a>pCΦy letL( 5G=ڿ ?T'{<ٱ˟`Fkd4J- ,e[i=,MPS85YBi j e%0Z5,,i ӓoy-ݡ> /mku祄FfՀ!4D7xoZq\Jf7Y**L[S4A&e=+LzCWƒ>*Z'NW]%] ])i?U\_s-o:DzCWQ}ֺUE4(m#bLo…UEkm骢tv+@ ?tUވtUQn쁮lM)';ӟ {3^ђ:]ZZtutZ zDW )"/ڢ WƻhMW+J;L"]9Ё;6Z*\cBW;޸r E<,b  ^%^2^ +[ImY֌ݶԣ>L#K[< W'+eq&wip5FWP+V֎o:I\: -Ǹm:/g5]~?|xnzO[gl<wΞ,ҿ2 ,L~ɵG~2|s?T-oCO,L4`W뭟WCjv zzžʎՐ,j֎+)yoN!8ip5>KOұ [醫H2 >å*Ր{떗wWCe W'+FCuu(\AγԎEqJ ;6\= DT\ :O1ba_0Tmqst  8; W4KQ՟J)J5zj~ W(̂v*n:I\s~} =] N|1jv*]pu ֪i=xjfP{btV7\ $K+2R;JWP2l7\ bข} V?υ 1x㦹a_0TNWb3jy!7N>qb†Sĕ«i!Y԰pJo 3ϰ(zEt[%!fe:q4b=6t"l+eñ_&7PԊ9e*. *U4z53 8l"Bv\ έWC% W'++ra` •j,j]X;JNW.=T\AWCn j#jtNWlOx*IP΂+jr+OW>f*`ip5㮠v\Tn:E\! j,Bjv\ 7\ D-ۙp5O>4 j]=ʵYp5ҩ̄Y0 c߱"}2| #=pJ]m ?;p$;Ր,Z=]aT W+gun"\ cN,SVﮆJ6\"؇g{rY}\ۅ~\f) oV$z_nq̋|o o~nʛ_~?_ 1A+4pq{2^>?>tCv{/ߍ/Ⲥ/ڧF?Q4nw|{Jg7W%8 =v״Cebg>y{ۨ}3/>Eqva6-0U~^v({aa>*@I;ݲg3pE ~}ekfo Gfh}F{y_?_^_~vV_g?+s~?dmr$>F!mFUUo/gh]m(S=Q1Fb,&k!{J;&0U{ofQSf͙Y]H8gR5m[/)jKu;XRy[Fz2_Dž wJ,01JZE,Md jAC^VZK!B-*I[%k`(17hFUzsR نjѵѩ:^۟R틥޽/5[{CdA2y|V߈S3bN"=X<&3[ } qkcŴL*a ߈hB0{]ݜV4 fk J`wq2{i[halsP03!M 10ИUޥ{^Kb hs`ѐD!@,O^7WHhMWm7/RinS![CK{ $fDy%5'bά%4-x"RXR Aͽh̪BbZlM%.8gRIIT1ڦ;u?A·zs[xH%ѻICS"Nԗ8:EOkk1]€b>[QH&!DTɈ-!8f| YB6*0UMbylEyR\dS%E*CЌ$.;Z`@,YBOuX8hϒnB,AF(ԆR3uma5I/3ޅkn9(˒[ Eioy9\x;-qr0p8>PGW[š˳Ʊ&6V7NCb",Ai01gGƺ ^џ. *2aWtؘ Ȇx:PPb^jGU8(N*ZDpɖO,.Q)-ht_q j Vjp2A6p((\K"dC ꛡDe8MJ &VWazDf^" dtuiR s~ d ++gLAAX[@tHFYLh :52|ی5g+,^H8C7a ``. pc)EGC%;I`~*A6ud& p Xi TtgQ"Hq{ŢA(gୈRl ejQu~W]T,b^tyhA]LI1J i32-f5S6DrR#pPB9{H Y٠ ybDCAՊ[BFФFuf aPNݭڻ^̈KSΊFŘĔa0P >_m@!xX-$=G̓pF"cQ>7c~6sPHd 9M T G(o*"VpZXT6ԆŘdP>f5Be6 ەE$Z|W`}E,҃, Ojdg h@eV0=JkN~z{YQE{Z*#m8%h|s{ŧhd fƉ!_[#AO!+ƒWt]Awlw]cnIc@0u/滷.T? `,H;^Ftv0ؙ6j[A)'׫ʗ޵y7_ t; 7׌+ o5pp?qiVG3n:_u75˰ @f0IƦIaY P}7fn8W*Ea_Ș% jzL ]2[B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!eI*` &P\fU7%&H+9w&PR =2$yw!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&21%e&k08LyvL rO @H L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&e)X%1r@J:j. rwL HLw2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@B&2 L d!@ 4^Le_~P&>>vǜxw߉$X)$,@\%/s%$.ңWν sU6óqb67WUJC\Cs`T\kq1檊k٥+[^ozX%+c { x2WU\.\Uig Rrj\Cse]1WU\/\s7WUJ~]+jH+$b ^injCg}+'r:O:6~ /4t4AϝA@~ mzz7#23bUD>tZ4Lla?I7#LTTf$ѰWѮ|.e;|eE6~3ى\fg3NGXUX,DFHPkddQ6+;q_1ϟAv(CQ͜HAgQ$Z˽ʮ&)y+q2QP>/69x1!\x5S nЫ9<§-@UkrE+V|R gJd8"TEzPUI]}(tNѹ$ ?(7MF&̼fuIJy)w[/uPg-BUB5  5+oGcRg3gZ/:@y*i$;9A:&Zw I:I3⬂<`1-A/zƬZmyw=x|=I[ -!9q~)&Y>7~;7.\ܢIR USHtʰGh܄ƚź۸Eo5q{Í6gvl'Aw{ k) ݾ}%q0.dQ\*\,[Η? 
1t)Ȩ"\y_)m>5 YLB%xa3`V>`}V:dyփQ+N* Vs)3ӥ3b:e6APa)k5Y&$:H +"}PpBX`99N$K Tꆠ`= Sm3omdMNmU&0°{0 EIgs8ҹjһ'Љ3#k22E?A~\̴}ffOo~z^ze_#aZp+ Rt;kw#^]yтo.h|{qAϗ n[!lFqۘDqP9J7|tcɔ̈-a FHfU!60vFgշ]pµ 4\Bd4Z{Ir,Z:tD2MRg҆z fkvtrJ;ה*q%5 <#E$Yv6{B7t̨,y+aB6@MD@U,h g s@X@n(a&Ԝ~H@f&%6Yyv6Jͼ߈R"MC'%H!4ke-h:qmP&%ߒui<խqfR;qp gRv(?@Nu =KܳNhy֥Hxmg`wtr|zeLY$1i] vKR'.Eē!(f+%W> YaҋɹK<* =@8k,IӲ D iuWJZRɭ١§\!ThO5c3z $1^雌Fj=[AulxRe<YבQJe크$J AQ1e4 zJ ݗG\ =]0L-.'\xz}9-4.m~ZO$N,mvNogGN׋|MVv~;x䛓7'oN ߜ~K_^T9 ''gRAd\]"YED^fJ'L$< .oǯ~Nظzn/ʼJgv"-N/'O\'SH(Z%=˥= ZA 8\ ݗGFsKFG^E2nH K@%0 OI"`<)[;/їaݦiʉi+0b"34r663ȖǐKhW8ccaFb&GSrю,)Q[F8b6_b X(at,̀p Y4%Fd fk'c9FtTpM)hw 3NuL"1"m/fg": cёY0(w(YV6)X]L"ĸu}ba2#VR1$[T  o# 9rmG>Wxg!kFCx! L0## p0RGW[8Kk hb 1`7Xr!-c V4BlKy=|:w̑#ɳY! @ǒN{2U0ڙ'zLfZ6[y9-kuBƦ#Y&,|eQG(k)JM^^KQznE''Atz&rБĭ?(9ӈ;1~XC܎yT| *x*rљ) u_GdFė2 P˃$*r++F,7I 'e3pR !჆ck!ؒg$HRR.K4DmtBF+Ft}Q]_!ϫ ץLt/MMZ~M&ڬ}9GeWJ\+H[!:G< smzh%FuK'ޒ:m_w[L8?x?qJbŠǝoLx᳷hxkB f,GRZU[6رar%J06{VXsBϑg1(.!@BP2*v)BVX L:c&C?0>$ڟiǘ%9B Z(9H1 &I-/&;\:u|('h7RTxVDw[W_ si8?}O8Q%cAsUnr GPW #M\~U oot·W), /^Xo=_/._|OgN/XwvKOv ah"!wl{'iudyݙ:-T'lO !}fy':T'T{!]=&Ygͭ8fg۫&aډ$χ^^0e7mcwͷ1_]UK+k7WͽLIV+^2\=`D+}2c7bav#Rxۇ?<҈;14 0 qJ7%t3mܓ{-2Ƶ6̸(=_WeMݗmf(t"9W1NYk-`?4)9wh ) R^D'!cROFx1ĵC70SAnˀYN:&  Tv CIf2tNp);,K&ށVO@sƀQOHh_:(4FyӁjVoF&PƚwM&Vf5f4Cf E;uP1kA4lw^hfJe p}UnD2:BE<l9S@ \o.Aes|ԁYgn@%pl(tjFg3_6ru颹9AU QTb9TouvP)5yOi`CBΞ.$곽h;*|ZdHb@(tqt#R r:DC mjj Tȸyz_ 5j _6$\LFJh N !y_XTZ"10;IV˽G7H9c8-V !gɿb " &n/0MvȎ$;+dV[NF``[fYU,VwۀlS쎉R Wg5HicrQȁKg R~U8"m39@Oб欗)5 ˾eK&G5[a-rԩ&Ȩ}0Jq\9&kQ^n<6Xc^G9]Pbza: 8I=}!I\O&q=mk*JJJbT^EAh!D) d "+ӐDv uuZqݼ\SR25Îe[u$jGdfr0?K PЫ,[@oRJ\%JpܨD@k4Z,;Rr;ҶkʷȟIlie0HYc+P21(*D#*_: cVƒW$.NAYhyRØf0Z-UFkEnb}ղJqq`mV'Gp.UD{IӈAš؂\C2E V2QlxL!jnZ).^ZwK5J%#'~fԝQ T'-&5‘1JnMhVjvP5R5=(C۟ 2a"9I*qFDWdPj )Ywf$ӦNê"L;.ň#y0ӺRjN-l fr^˳}LYQ8h/d$^Not|K*#-L6 PlYccJj[zke -,6e ,HrP'"Gi-'a7烨~c J_ȑU/ăgYJg2uHW#%Pj&t/IO4&ilLe\1r&Ө"(*f4W}uNH]Y=|U ۔J4L hOQ)(RTVWAiJs"+J!,-Rt 1 d,:M76E h9x ?)? Oy~؛_;|'/CaRZZR[e(qjHF)Wct'@eRin<#!{1},xZx>)WhgG ?ߘz 6Cˬw]X!{6#GB45=9\0m9)Us_ˋI8޾ܹ_f7o^BmP^!Of'Ac?J4M%v!i\w>C^xڥ}uO 6),NJ+ X2"DҶZ+ ` S˷y5kv慖¢˔vq'ga^ʗ.)v)8vP)+-RZUld^, ͔7J{= C]0ߙ BHS*AKU"ȸ򎓫b齯 x]Con) f攟Q"O5/`,rΡLübFHXmkKR:f.MD-QETBBjnzq j3ͤ1\ {H~F_3=mEdxxO|hJ='ޕ}LR?|n:/xe UE,RtᢤPƄ زƳkMF蚳خkGIS*Jy 'ϑ7;!7\ZhÙ ۱9=5L^njaFyrFh8e.:1tCOt6VäI&^WN5oE>FkGF*l kiҴӤiIN,78*UM1tRFÑaBG.eŵ.rB(-ϿPuN.ޔHK3$>=72JJeRڛcVuInK)ƀ, +p&84^4Q(&⾫?.Z͢sE)] Bt^+k晤#nCi^jZJ <A1tD,D#$lʂ^!) nE2_5Ig12i̤:٫^\yW޳v}Sw}{;&q J ׵`wIZ0zpnc/] qT l/CҖ^ux)Tb MSG[v#N>cK7V!?Hl[;:%\ÚW7O.K2]7+uA0C<3svo-ߴ Fk iDޯ!<m۫N0~Hy o49tO{֍ oh~pq'7 Hp|%4GtO3 彯qwI00Ԙ}o+sw{dzvE˫*\ad gCUTkJɋ-k<<͞Pޯ"*ڜT~P\U\ sU쑛qjg Զh7|\o3L h2 _WJVPIcCeo!TTup>J!4lYV{S뵔$N 4zgXfΰlKdlvaDc =Ze~~Ra6p8Ze=GnsJ.hDthMYu҆Ā[v=v[M&bfĄ߁*IӦNb$,LҩuԂeOt S6\p4MDGz mJ q\uZsBa:-raFDptZ|^JRekXLՊ.A Іmuʉ #$g1j]n Uocc'vGNI?RJ=&OZXc (C:I4qdؕ*.N<2Ӣ=T\~eXf߁#, Mjt>ƊH%?.[sz1=kbDވ_^ 4wwήq4bZE,>vCB7}o: 7Mמe 4fc}UNh;mS$bWPk]aUE\E>o]b'9X1N l$a2(/ iKS@%ta { 0x+-kܳx  1p{8c92 / P8ҝ(be{17e v$W @㜇r4'3X/ y8QZ9|w37WPͲ۷{ovٹ:"mZp I.u@ g?FgfŴBF}h{s j-T8NʧqjeXܩn(hmڣ3ftʿΗί/sP*LfGMI%&&M/eCꊏmm+-?FD>&h 2(l5IU δ'?el!n8gx/2 h5HzdW ^VŖ5ڧm7gg;` V d+{tMgiãpT.iKIQ]Nb<1A.e t8z2ZDT69qѴT(W_7Yapa)!ikםkxn*7.mq}-br@^či"vy с?Xd. ٩F4gu:$T>:8L),!ô BOhÐAZi4Oz ի\͊%kVgQ\ F-/Ѥ$|nXtY5RH$.q!ym`O]ɼ#@S-d\9CmAPY"[}ND#%`k.P%$rSIbRm,-,;V8!ȹ85lu%{Nt!P1v8g*x \v2V5 . 
nw#@,aϽF<q2E0+M{-e@\yfqxp`AIZm7Te>?C?_~+SJLٕVMsj?'wᵩQflz9]^ܚm\{׽e##ocSt1?qb8:d0 S!gT2&eUO_<(߂/=xg4Z8b5<zFˁX/Z R^HInwRH_Qh.;Os D9CdHZZӦI풆E eP H0y$wuZBnF`,efHXsE7ԏ܌6b'OO2ODh4]Oz"5<ǂf"c>ob-{U:ym0 y4 =:&Qdw7Ѥ2&@dΥIܴ8w 5tZXG2PLYjy= cDX gJ<'ݍb9wPupkhF}3f 4 E1sR2VnC'؀p"mҔӔ'FmUrrE4<@4asgIFeqF6|"TB嵽(BP_Tdd,dz(:A ȹ=%oqW{bc։7w%<(GL%DHeZ8 F&WK.#}mm&`%n+ y$׻܎ Q<'ƫ!l3M$`cCt)cn8PK<[┲Ӎs8: gG"&D_T{D[eF&2վjIU❔*lS c) I΅(G &نc}V ayռI%0?oj 4Ü>]T *f8?+T: VsEDۭv\NB0JX8 ]9 +bk ٘Q3%4iPFL\˷Bt:?orPI"(}.@V.;GZ'tN>&q$&Y\3iQ+oq9W2V#4 xy xM}Ei['PPDQݫ`wX@"/Δ5&s‴z}SR ; ЖEb&B)Bw7.XMPP1\(킊낊eFAE TP"AAolUP+*XF򊭘ڤ(O6kD"bj̓1RV6JhnX5@9Ss]l(ҘC%FvDsi̐OUN#V+. a.99S&߁u8}2 BJHBҌ ζSr,8Mt5B  :Θ&S^[&C \3Q#-%`jN>,B $Feŝ^X𻹇MZs^z)&7VqOJA bp5jXkl%1m5a aZqD[KVrNh*i5 ?CBx<&h,Eɱ&ʓg9G-6JvQU[͔օ"&χifGgy1(aqDet 5tFXi0>\c=Mh~9Oʯr!KchfwYȅ1rKtOԕyY&m%-N E1R޹i GtUIkݕ$nv5$I~*΍NO* XF \p1dDp?#hR APy+ N܅WK*ܡ~;+4Kv{v} I,g`D8aүku3_Fԭ4JI6JGaIGK1jUP Rg0uH(EqOޭْl7~ϟ|eQks{b' :4eF9ˉoG ]y 37jZ2 r.q`"A1+SS WNI)Z*?s~6ya6=U}V\ހm(VRgUW42o1XlT{j<}RvtZrw0tAF-e7nvGk[6z}Ee+Fb5ԅS*T8kr'i r;1qBdz&8b7\R%B#JA+Ҳ}[OQhl4,??A?jxg\l``K7L^I`/E0\㵑pĕ(!jnM}_-ѽ0\86=b{M{f~6E'3`!/͏|Ap~k[9U-Zm7Ri VN/> md`)~i߬sۏR,g0{"-I8F46ky͏X|KOLǜ$U}<J+n.Ӌ '[%Yِ̏`قMnR+ԥWb4} ~Pl_q!r>Z ʤp&.]na cS!O U>TP#o*PIVf,lݵK ud1C(J[8e<82ځ'j"zko0  74lSKCw4s QFTPCB $~ 4jZ4pWVչsFcYBaK( b A,0(P(N:4(Kgb啷L' J WhLH#2Q#Q./=4hi*b-y l\cqU{kyb-A=(.8"p `Jnj$ , E2 4^ s.mM5[u(tƧ1_2992ev?h* SYa꟞/1q3 %gGpg:3Z!-BEx70%8bDN& ꅤƬ!O,yXoeGތGg9`0 ~+Uևn!o??|eNo>M qu`r?e˿ow|׿ˌ|}iܟ G\__zoo/WG/8/~y&?zzW}Yl8]xWp~٫l~iOKP/_{ [f~Y| %Qܗne81/)No!8g\<(בm,~q?$. /μt{=}ϧg~7f~L0~w W;W-o0tbybO>&ṿrO&_].m1L.㶡ŵ~m]oGW~8.Jf_p~ʴiQ)?bCÇ!HAVCg~U]]y^g?NFUoԳDZo?=ϹNaxF Њ쓓g,T^Ò{~[q+ dz{>b'/UEɧ7x3Bcot~djdT_|Pc*;\MԳ~ǧ/8y!d#v?kz}>(_}~~V럿w賛{]1ғ{W2.~z47Ȼ_W cOYIO༎U~a'CoU⭞fp{]b^]/a}$0Y?x{ ބ݀DWzŋ֡ ~'8ъcPwjq|9ތneLaj&_N&2Ijp/M¨3<\;y'~"v@dwl Z@jjaùO99 j̗x٭2n ;!\Ww~/Ks}jQ6W$M5ϱj 8bi DƖnQ:Eco"ꂫ WnfW\]pG?.K~ ~ptv@|%n'aеf^bp @CFiNݩ }vn$=kͫ^EXZjp|Xr5VqQv;)٤+Ia{2Y,hvK ª xF2lD{M.Xki8I*"u 1MT&%hR&e3K. 1!&Y.JZH>"`B$&YȤ /g)ܟ@KE>ya:m ]W:yT3TbL UR~nRT W2|lE@۾|sRjwR`ۿrX@syo".rz[;= ,gqpH0&("P$܋!^gھ}y bT{rc`8E` HИ\(e):Pɽ:BAQР48˰)1c8я%Pb %Pb %ư̈́MZV 3L====T$̠mzި,Єm4{)N8h&Ȃ$%%5 A`#XQ+;lc~%낱 .`삱7N|Q4hF9ЌM6-RE곒 wSAj#b2RTO/~~wM3iHVCg{֨e4>78©<\ M)C=[\z$@ځ!OWg 2nmⶭ ՊZ톸e'gS 2`ē<\.P)F5܇Z #.)8\ٜ@id2yb0Ι:HiXF;ה{Ş*k_7u^|}zFr$~aP o'~z+fZ(mFh3mnbږ9n(XȝM*wZe{rw˾zFmY(}zݨ9@ m0r1ZZ&m8}6;ߍym R‰oPwοצ?Ha=~iY-4k0& RBrKZ6>(n)(cqLAhVEvPER0(tEd̲ȣޣ7Dfd<)p}$€2`D(`I8 Z6!o4j) ܁:mHP2[Zy7;O_%)s#;)S0' V0| TD>m4Ad<}Pd;щJ./[J,bTUj7 ~4B. ]ps [.ؤp !Y`L3p2bokN|?; l~aA|'= Par{z&m[ }fGQ͒(M2)Hp6zS*Aeq0wM<5PV"Nrj'&o]XBUHp؇ܮȂנMj~u~vUճO斖Px'tb ҅ .\(wB wjL[je=;!}ԒN;9F8`4,i0Dw9_c05.LH *:P&Ĵ66 6PkRq۶h9Mi{5֕ⳝx!{WvhD|rR<4'(oy jA7qd\%p3e\8LHja(x8LpJ-6jkv~+wًj;iJ$pF># &̀zH;I0B-& ~GPO`[3&0$(J"Kj;.,ded0O#ۿ/\HAQ2M^~OĖMZk`jxL@ fҚꪕKkŗ"t,-%bҢZZs3Қ-5PvJxvDNzl5lZĈKX;Ǔo e0vX ;FkNUU(] Oxk+XD]`zqWt]+kA2U3-JҩƓ#Dy nG׷S@/F~QFO*uB;#ݙ,*t;Hre N,/莿{b)5}5waıLȰP7{솜wW>y2?/nL[ /*3 N6Lu2nƍ$voƳiR0LMjvTv|;4 nu_۪BP}tO&ټ,7"8%8*VrNOF:1NP$S8[p's?kVS3՟S@'`,՛8VZ]m65s[l;NgKK$AkO~\`\b ސhht?|!$kMzjhPkt?ܞ<.(C%5x\`Y:ٝIJUۡ LHhm8!6$l%ns;Yd #EMw]xgzS%W6PޟF.&?riqJUdo;)R3V HfRJ~yQA-`/d5qt,*E)OVxR%`=y\PB`FKrߢ 0Zs[RjT(JW(Q-\J1F)+X-@TuRYk-Zձ{>Yb{tDyk?AF>*;%[:*)k͍lKW7V(X@8K')NT| g6uҤ Hu`?2?WWWCQzy}^zoBYY]&Ń }|G+-(6"Isc2Kd!`\KϷTuK_ >rtKoOV؄/ ؽȃD+e4/zbAg)GJ0FZ[ ! 
8й`<5GҲ |&< ߎz @_!0 >N;a~Y9/qL[U܏1^ʨW50Q[o!GQFMZMqX/4= ,uԾ.hXmQۯ2\1P],10Wܗ:jk\#FVYLZ*)oDordHdo*9rqԦkp,\6F*&Jp/o,J&,1 vʘ71c=$`ᘗ,I=m%d˲Lݖ-n[}'U_9G!b\T }Il@FbG-T*H y]CBmPq[B?-l2߿+yI\p 16;(Z7Qac_û{`ͳ؜ОU߰V~=Ўxg1g/| i v 2eN*e]B4MQ)粔,d:s1&5Eƕl/nV^ݹVud釯~!x qN>֝:8?ͧw _9_)s@y7qB^RZ-;oj{Gs~z_/^'9AgsXxg'`d&{l |OGst_ZHӒZj,aq}XXq%vzOW_Huֽ 6ۧ"~:[g2Y_ȒC(.o'jUEGhV)?2ߢ2D+)D{ Jkzǿr5 @pYï7vND^{dwbNߛ) E^?Tdӓz}(Ib(b A@D2ijV> ў$Fwqq?Ҹqqqqq[{s: C],e/Ûś-*Ż:ڠ}T&R飷rj4r޼@E S]Tu҃އrȢc4kVё]4í1]ꋢJձ}? }j[m:<[m{moն7ev2[JM6>񳶚!Lq65%SrnD&פ*[_| vMۙuƩw2|M?XQt_ lxr|hχ_~~xȦX=4qaAe)+c*;ķGi<2!„f%Y]rbM5J֩JnVRKdB6ͦƽ{>pÃw.Da@vF`.0{񰘅9PK1x.$ F;D~pUYBjxm8FO]A/VSlC2MnF|!V+%pU[!iLw&!.2=~u\/ۆm"ZܐsAُ@~|L<;7᠔W)!־}vHXZ3:Ԛ}$2hJdSZRKqQBŖsv+?{c~E9ףk8}\$"o`C.vP>`].C FJZRMl\UUJjK)RvF}&Bg+SbB=l6qF}(<>rOOpo#U|2/ݼ7M zpqM wߧ_|~H Ì^ }?S׳҇{M;Va"-.oh!˕!t]\Ǘ_|CK@Hphm/|mu`Xn΄<@ ro{5d xzzv붩B/> Oϖbjq3=qСPtM2s!5q1Ct'8V&)hTAslΗ\KkڂnY-{Hy{s rM<;%}܎$Z}=e)ɘ a _bP{n)6>)`bB ,Իm>i#{rP 0h (Gۛ[5O @'@-!r]m?1c PD2ܱHOHGS܄U#z儌.;hߍB[`wf^& hfi; ʨ s8,=| KvNE_JyQz8!y8jaҋa+N/;N4$Nآ: NbX̼zĿ/_b}vz2Nς[JљC4LCm`.C^'rZ˙vdr*,3rm$*oS`݌\~xDXߴeZ|AC!&&u*!ⷙB8x%ddVkӪq*6W0qF.-d4G׌b ^+~<9~\*l0zȊZ(Fۂ~^6ׅ@3@*iljB4JM,uJ:OBfm6lij&(Cx_b e>Z+,+P\$1jy2Q(p `+ 4{ Odi.7{uոe^4} a3^v}{ɰ6. wg"@$\l¨tb^gL 5' ]q 'WL-p+ *=`҉l #+2g}|udO!ruj &%HXy=7 e;5J}-j)=MxlPiHsx 줃o}p'J #GceMׁ2H d@0!qyޱFU oˀ辧FqX8 њհahj%^)AE,L))i0 w!'½#^sE0K;Fy< H?AVò}bv?ןLcYr@28\:= & GN;5 Gdeosv6#cFcP6ٙ= 3k3>s?OJ 8?6s1.NO#K~d 2XǠ޹wQĎ+ӯL?ck<EY>COf֐Zɱ5!52fbRU\xwVz.o~xI,q lᒆ_B8R%ց@&![ɖ5^![WliBł(c#R USnGkȕjJ;?9 2*{lͭb5gu3z5OyDlWVv9NYb=Ro#/hװ-ȶZ7H+Ćs_h~-VL};(LP![r6.4^KXrcNf"֎XB?A3H8pIYRI~ /E0d\^J2S* 62.eM\ Ԡp|tRY桞Bֺśiyfe#8e|o#8Z-͋-Z%*8Z:.T[R-}|ĩ\@ߚ|uazݧMQP;I}۾*佴?~W8`AMxq{ԢMT1jQ+3Ї`y! 0q}nFZV%Sn+ U}*N`?E8 sR $<<=_6 ib A&{(}z}dՊ'FQlha$^h}kNU&۶W,KۢnfGI/n/ufX%dr&vFE0BMX k1.`eJdeDͅZF-.ywQQ& Ը= 8`H⻚jpv spZ>;ߘpzc,`c;E䂢 PP1 )$yޱFI$&ehx$;'uY ~.F. fW 1)s$.*gHO*NRҧ/9^yFANexj<9er7/z7)!h7o XLX9n1mlg_H~ uB,z ێ:n`'4MGL=)|:nb$ۜ'۶]%>. xOvB ͩۛqm9.A~ZMkZF3'{8S*.8>{vz2QAOxcXn7:zwwxxUd\/EW 0{Iց^EEE{..~y+%0"wfvr#${Wܱc([EJL'w󨺉y.]ndX@q`IR0%A|&#ZCbh1xՓa^@Q0mwK LTK5riP)Ķ'(Tyʳ" B@TNR5w (vH$F0jiHqqu3bI]Kցr͆q4 #) $=f.qaEG 0芪X`Û;<4pK\;'E.#US1][sb0\9BhqZ*=TY&|9(0 v,.E Q`2^jYf2#嗒JȎB{)kĖcŪcm=S (s?ҹ R+"24lMkfP4ѫL7zEޝ/~';E0u)`[(5bz^([<}S;HE"BR4L %uBp0a$D5 劼;_raK+!DڑvJf ]$}^4WwHMgPEUkgv+1C!j"&䎗R `10z4v5.OlA,jRz0v1 g~"Ҟf%J`I"2I-F^I…erEޝ/fd3sʤY5xc8 Ԫ 5r) iъt"j+|:=3jPճ(s~4=+Ucʏce7x@Fկy`XGZNQUd K;Z-@vxMkqwn)W hZ Gn4\7܏ {CV"U~4xYw/en O3hqߛt >$G3̚ԣA`j:k <K`Ji4M JPSG1/g -4,Fgs" Fˍ8D#M_4_'?e E~Qb(.41o5wN=HUUߥ~:yşAa\;E&pĞU{?m>Q>lu;'WdjXM?ɉxݱߏvnugouƋ؝Y.wEdӮuY?0<=^VF\lQKdQQgЂ~ٶT, cL^ۼ{W!!1j%tN(`c@ٴ\ %vki,R 3XpU7Vm*4@lvMY"hKsԐ4nKh9uG˙chohWv۾`A\Kk/L$K#zgX2}{vLʵh _G6gf/ |[!2>Up ~^܈wݶYoF$^ mX; + s6n5ڠ)7sߕ`_ \v@958IT ,M}I~v^S)hhtG&ޟyK\'է> c1'p.6dq͗뿿WSg2h}?i߉;I,"nn3'wr_ ɻBA5Ddžn6Ә8ᔇ=Oi^}b߰~/O;ܥ,jdq*giwgܹNsdq '-[ ,0$Q JyħrH7Xl2I.d"/D0o\rKg/9[~>=}ڍtG"vVDD(YNL@x˅ |6z F5 3is7uQ&Z.WNz,#]S& Û%7ć{xVe6ɐSPޟ,I!WR&JWdR̻&L|z\ >=]l~S&u#{$jȳpw4ˣ gc/$UޤU='s8YB`]5>t? E݅(!7,o嫅FNvUTMvyq4gə aTi~,;Ǐ &slK*t$ߧ 雄 "5`.A6u?pIhR~Z 屢ctTa$hw[IiN桁fe,qsڹFdL]yW ןO .6mR$d'%)ʭҹ^㕲홴ꟊZ]=YJMn_L2)jjV2W}enЇߞAW,M/ŕm泭-D)z/gӊ5_&?:Y R B0PZQC^2jQ_t`\Y.xʇ+uZ*nLӌ 0e,mf"59Pdd3ȄL{G0'3&T!W(SUB LR>NFXUJvƈrZDr0W5 3uK5Ǔ))Cro!KG+ Q08ҹxZKsNIx,'.CKv,ٱ9sMF._kC ۓ>ަu_ol?Q*{ l4Go?>ݸ].Pc!L "pV'Q#Xܭ ]Py߀Lnpۅ!Vg!TrEKe$+xJ\+3!P M'-Tg7NFHx;[)ػo?O+mLUAfDT}[e}DA\mB=y}T#ۇU93>yzv #==zʐ#nwzSU㽨 +BۇOU!zq;|)<_~:i~T5Cgqƴ1E jE't!A{JU3:kC) ;Ԡ#O,P"R ej>RJj@Mv5#C ZDGq'◊P+ՠNNFXOn{۸g/qW}o_7o'ûC(>4efMp`x)GN [J8^$ B͛QSz-c|o3.tB^ ܣ. 
39 N2c;΅`|eR ~MrM>z 2c*v3p;g@¾,߃Г5ԅ]|3ח=n%%Jx"`4~tXpJg6I$Xh^qQТcwU@xv;҄po |* T ,]߹wKk2ujmwAhn3`H,`4!o wn8kmN%c'x)["J P6?R{3 xt(^E"VY'ރĹ&']Rڍ182zx|D 錝 jT)v5ߴ;-ۛYl58K s z1#ܨaME:`.! irxҀ?0F1T/Z|,R:qzRmdtK.á(!]>FcdǙ4@j45A鋮*i(1b+Pl&^c/P@Vk4?/[ߒW/$RSIr-KwDEQ/"s ʮ1%*ixdH&?STрxt!"o*"oy.Pˉ 3Yԥ,J3`:T/&JQi }kq߅zZwq](2ñw!20(%/*"Y~J>nS)a{LdA F`M̚}hXj .p뚌n3YS b][:C}4yp4P;jhC:RN],Cnq/r-5OH8X,Ɉ$ JJ%~IѪe>mtNrLbVb&[{VɼtpiA wމ5c^fB WB S!QÐs&eϊܭMn#Ǘ~`GtiymN7Q:\GO PD%uk›PK:7KR֢%FpK֌>";<V`G7 GhѤ_eC*0u@-lK؀2/d5[P'ZBs̡+0&Eŷbx3-3Y.`d Kym]OLiޟC _n\Ht.Eeʌ+ZV$S"s 2T X%rkf2fr:!C )ը}wS8u-lnW>W+z)]%Y<(2fR#T^Sc,Sq`ꏙTFv֝˟}{Qj&yE!ȤAq2BmKҰ?J3ȹe@VJQq"UY)`b*yuTJ02Ήkp 0afLśMG'M Gr>63Ǎ0!b6d]iWVY5*i yy}h+j0xԪW>ZjTP)$)2`} Z2`ih:[UVnUYݪnuQ0aT T:7=/\OP!M B}j-Q",{R 3ɛ>\[3!n>c(RC8IsmuKڄ`-rY1rޕn={B\[IǰMBkp}Uz|4lRh:Oo tqNΚ?&_0F|`Fi7aLyfNY)r(oI`#.ݹaBPr풾BJ $y"HC:oh@<$9 c󔼔pl2:'wCџVZ֘b̈ߟcڨNzT=~y䔠N9jWNj^h_cj5怱y9&>16.y'OK \>\'_?DU57JJ&Ůi` AʣK4'Jc BߜY,̀)קj;<죀!H"<6L2 |#7Y7z44s۴y1Z֘kEU.efO_RnR꤈4R^M ko=Uo".z{'c|NJ044v l&e s68Iy?C#w6|KQ:3 iKI}ɸcUimcic)6#NJ҉0d.f7W [k\>W!m|i08DsqJL oXbfEoXZ!쌻HQUYK>vj)5 Az-٩elm: ]}֡ܓ'C E޼Lx6|#`dBDxKF&(d[!tVxCJC qbtWi&dj (k]eS.Y%V<xGLl剺vhłf^@OU`z#~MwCI+?R>^ʺ~O7+Ӻ_Vߟv_EKb^L2}MoHlR*WP  k@ .>}Hd9SH \hTXd[hA٬ɘH]$ }>=5}`pMyOWǫ }mn~s?Wi5gWǹ?')ao| tvR[b!O1?[Jĩ?RE7mUOBOmŴ9Vy~2 baw?LB(pnZoleyO$nʣ]J\}\@(g{UBλ|d%Oh+1w+X_muWs֎04{G{_7f: `P,g5٥NE^B&+U`?uU ɲ9<e7 {0Qܽ7o.Sf0<5 Obw || bOF ,nᷛ--xX&y*kw5Xu2iHI m%Cç$#GzBԵfL6C3kݭkCh9C ^3ώ`COjO1 A +?KCx+x42v*c𾏭98^W0[|7LiR|lձW9U'} kH_Cא5)IE5!} H{Ss8EͥɈLV>|&I9{ԏ8rNtFs^ɤINP<𠘺(Ҋ֏vEuj.9ƒy16DF ~%8dkn(濈]2Ipx[!(ƚ-Qhdnedpp)F%k*,:$7Fuuz{ɤ.v9Úr?~wPB;}IJc5vv="W77"ۈp!oH_a'7m9CuL~}]#]Ziog/.0M018.EJÒ6vƓ: qm˄[LAP'fZM6ל|ûƵt]\W/TIw;ԟ]{ ÙYÃ2MU\$'߫nlޏTqCޟ00qk6v0iiZɿIXqt%e:}11by( И@(<꧀L`A2T@BXS""tP5F_魰pNY6hom![dLjlƔ ޼?7*W>Sפ+dvj$:óI!*u`BP;ٓRR{.fĺ}|+YnJ_cv||V֣# @[BKre+2 ,[zzpa$^xyy8Gq$GcXzdiL͂*| A]/`n'ߧB|['?!`UO›-Z[(w"XS¦6+З;aziDOkR1ߌױ޼JiM6BD(YHklJWOwR~6H2ƥܸrRn\dd_pRn\ R.nX(RҎHOr2$ox.Aʣ-ZA5 x-#Ҹg )ا/@ xVcG=H)u(tO+fNDjlaAs˶!B67m}MKb͘ 3P cMfB.xOtxoϏ5D YcYd8rӆAjOji4 0L((+] qnkOY,15<-oqF229 I@b+~Xko.ĐHgc,dҸRKc!M*,4biy =K;snl":}9WL8;u2C3Tɼ1!m.j)]L^]2ľ/V՝rf_}:rrHzb!f1;ʻc cf2 m?pH! Q$Jjd.SVP~ K.^L?hPRu q1 zsxѧ2QsT@> db^0@pź$5HA/ 7HH?R)GQW>!CF^mP(}8wfRBDf@'#d6'<86Nd (hHc? 
`y,YL | $ !P5_$%5HbJ/7l}n^rG!.x$F_*hSQ6j)*BPۺs}[/]V^;?Pf3N^߿6zq4T^B0t-rDOXAb⨋ '39HrKHǣXRVtNl%@ Ui˟~YQPB5j'c&h bQ$B7CceS$ '^cSyS@~K>^|]!<6 33 Hco1(L\ȻG?pgxe&F|G.m9֧GFaJ,@(AxA@.u_xH;%D G\=f\X"Dn(}L0@s8ю]Ǡ'2Xܠi-p;T6" Fc@ \Qq'Lyq~ e "KRەT.0)F" 0:}#!H # @ȗSJ/ݞubĊCkkilE]PV 91!^?L(VJ̈́ȥ+?neic6~3}1A3V1 VqP}xffE_(䇦V^A[a 9+3d3 sީQSg?^Pk돵D{;_[K`6[ )pةsRݞ:Qk\KZ/`ZVIYԥO6=Otgr<~\k p{N;{RJ}l'1gDtR}EkH,3M: v'[f;|)Pr֌w|0TN,oU.yM5;Nx<_uyxu(;7/mwy7Z{;nk5^[{ נҞ H=ObvQ1.14sDlǿ y#J9G^$8BR10`?w%L̒C;KgjFq3 B+*0TʗUelrpe_l"a׏ŁT1' S[ LejKȚ6DdG:]owg#/⪽mИӿVu̵-+J|T8-y\҆q1 ,!҆ze|&$y|Wy0Yet8##-+-u~<@Q]a hR`Q#[yڅN{[<Fjv6o|RlGU{*נ[06Ϻ -dfx5j}7)^8+n|=/]S¢Ӛm /nto@Tg~#9n"=[U$U8<qɓaO|F+;Nߏ5J#M[=aKQ7#X]$9a L>(S| m{&&ms *D F6l rժPlnC XG B4M5*U4$6BeZxC [{;-A0hҶ VP8pCɌJƧT}õdd’و>~\QS @SٳzCNpxw;`: ,ZUeQʒ(n.SQ2TM֤uPn*j ^ _,@K~\+%DʩZ\z=j31[oDt0j<,Ms4[s+ߍދZrmੵWBgB-.ol+x.U6P!R/r'oR)% YJ1z % JV-YBdBHU!~jdNK6so[+e:j'+YT&S) -iCT]&"ǧ5uAx0{pJACXKtMrӐIG `8X^AAlC wZ߹aӄFf@&Qap=#1'\'eH&3,Er|E2oE2=19TLՊrdvij⍒XuKFwBUW NNWBpE]Q :YJlQ ˷ ={[FW-,ɫE^ Βb$k!ɑp1/K 1%sz4كS S`b\ bNxcXo$s;21!<_O7n*a׋Xxֲ,oDy-RtC㖸bB^)¥XqM"`{'Gc3*n޾٭ laK3ejwZ(1=CDXd#ii|i[2}h415B*ӢÏ15p 2}VF'x.J֨Vu@X)z w6z[»om/W0YĽ/]PXDb{5U\cy7:Z$=L~Y:8i,С,.z_;h#VIb rQ./(1T,IB!K l|>LJ|nm AB +Gp}5UX 3> uyѲ~xI5OuJ/&vl3Ra2:yzSW (}@l>htvt8ETG'W3Ѡ'1_5x+94 Ӡge4hUw-o^.2Y6fV'0jѮׇe性πF!OL !ӝQzj)=X^H+g" ]@mχO8q47g?>goms}6C!W%w)t6hnGpqȝ:X0h w}W O~|E?؃vׂ>qb] J;Jw$Ws8}MW}x;< RՎ{|ӳU!Rs 8q/ 8g.9Vq>ǽ}3^%NQ 8SZa{$/ łę5\θŝ7L_|I]nx3qJ-9x:!_IPmXO*tXh]6町Ƿ|v=><:{  A{&IlK.EgWq2Eg3)fr 5hCW2]Б:)k!N}|k8P=̴qţpvGCx?sЇ3yZ^uDZt=x]xؠDZ.F>ʘy8=!)ߖvwŪaU1{G)ՙjI\}[i!~GAqWGܿbj9)eX{bq.+(9-UXs#SqөG> L'.9rdb=Xmt2GhͱH_Ch$~A OpgUm)L5_uy1jv_ s;|Yl>ߩLᏖS̗̣W1 Y'/:gǔ6W$G');~tFN^1z6br$XC3U Vɂ8S3xt(̊?b-W3a$|αnCQ^>Y\v7dxC4cc1_}51 DS<Ȱ#'bZ/a߇ӝw0ʚj$7܅Śi7m¯>L.-U4u'|Vc+jYrwt}KWoPmɉssr&f"{:[h$yT_\QI|k/sW:~Hb}}]BaE7JZ/nbq۲q\޼̓or3fozz,ͼX~n f܌ :JFȄq@az{Y_.S/߯(2p'WhSvr:A$@Uv 6IPV8%|",w;< VJ;ƌ% wXUQN(BЄ9qǐ%P:w;il56vΆ}|Daa51O T5!Pf lCi\LޠfLOG( z郫_ר6H@GRjb$CDӅVAWaoOts# v$rQ[}wYp0=AMI=A :2`:NM8}Ee=vcZ ֯W~.ߴ{"@c7Pg'Zr%HpɑB#j{|K/}(N~cqXj|>khY]QwslCi×ձͷLwfl*|aЇf ݰh:5CRsW~63&j<3p[3x4hp,\>ărK,>cDJWB0\LK"B[ +S*ZJ@]kTfPt<ňH%yZ )yMNQQ(JA@6%&՗YV֧<oy}e4jQ@@ACHAgR8Yj^q>a5 9[(=e4~H!&%Mc$6`P9;_\q[Qv5ɘdR lf:V.>FoD!Y:re3Q<3@5dJPmtf"zdNņSk߈x 4x<{Y6>0S}7(@!%VM]-ZW5 .jY(+>#]Ko)n#(eفbhx}\ۜJh ~vuswȗk!%sfY6߼on=3oՔNúT?C0-KeZ'Тo>>;~ȃr _Aǎfm+*.nmmm*~IJEXH34yS߯AjFԃ(ab{4u@ ܈=)E-0cLN}([`@>D p确K'oͬ9-֚ LR'a>,N@Rpϐ[ Rɺa)QP֧Zû!,`}jL}fyq~s" %aI:95.dkyk-t{qϭk٢y?ω'lN ldV<@T X`Ʉu>NsVYhJM&9w1X3&;jp$@췢o0a5!.Py T]}9FAHͩ;\Qd'ZqpK%aMm+trb%AL}Kp#V-WTP.ɴ;Un9-AN6*26)vtt Ɏ`I,)Gz-''ׯPN-F'#$or8`ӻ凯>Qw ~RLL*&_-K4[#ǔ9]Shƾ>_]${||; W$hza0ECŴx%JV^گ%ĉEnԝ/$!w=Li70F,^/qQ~}N%@ãPOg|P9;J֤zlCBug8_,/5 H| kK/zPic[]j 6j/'cw M+]`+7܋w0~q4'pI S;FVՋ2kBf7\!4E`_"+&Ý\qA?bj,pa5X?|Y(ģ4˚VA~ %ad(LN%U{m_L݃.sviZ|kX E"QI0&0BBa' ۸+SӨ>ըJI͒].wc\ުQm [ ʺI^RH[qkKng jԃы{a:UbΑ(* 3Yde1{1VϘG8՝#U\|8/5 3Qzso4 CMhi,?d3r}YkTO`$" Wyȶ$0]-91EA5rʳ7X!/"_ЗpƟfʩB DIdWf[42]aB2vnYWUVո~ԿAR7?&],mka~~;{$:1dKP%C"`3`F$r*3+eI" d@LiKS"%|E` =MiF F4H0cWn'QfLVÕD2{ p!5JBڔ)")KXg (mZJUYF*mV[Eys*ו^BS2 tUfjJV` .äi;,/-`$I}-9ОT !,,;Z)',5lPYVpPN@N%S9\(YHrf|L!B0oXs ,R2bj08)ÆgpnACm[@"z[C!d 5"\ǺcP"31˻M6$*]F!O8=W|'#ZIVP&; kL[pZp+QJn`Վ|޽b903U iE˯1wV,g'^iDft~]'cbvkdAB*^ Z˛'k;f]qaGK$GCqw|m 3]ek&Q}-l;/|xChyC| J\?TLi{[ŸxPⰋb4^AyMtKم@.5 Uk@)GoU+DmzWq@jTl gϴ ex3lM nF?=Nfկ`xE=“!,TˮYC4qӽ3s?8JTrL99]vJ:I r?\_Neα~g]@p  &;H(㵎]zW5ZP;mkF`Ω@%\_ KV7 a.ra t ~5ۗZQ֛lTB. 
B'l@Vwz!`G}x,<ɢf d$/=]yDzӽd ")qt˛ӽ/dQ%BɑLg6SH./񝁣$}#7ڣ_n#Y^@e|q;Ll/`L>0uYm&3?,&W(Cg$$~+JRYKqGһol涋< &ȕU^nl."xvABȵS)gإ߁VIk/b26Am~,1yx2G;l/;\׹dR_IO'krWt>IA"W!+^OܦG.3w?ނ.ۙ vjYgsv03v50F*bƨ3MeEXԲcw,ҘLST#C_Pէlflf׳W5)rwXܤ9LAH93 qc 3]9*4WC :ɢjmA_뫘rgq8.$ji,.xna?`V2T&}]6SK!]Z^XߛvnۄDr+;}8J j/zY}_ zF"LtwRܠNAȇ: &m(~ݽI[uVpJ{DrkBLB%0\$tQH8(׽)@cWhb,a+ackO~Wl4q,^ 씠kS'" 7{vvWC66^" zB$]dz.JO`ewr'lgE^ƼVy%w7,Kg&OS73ZGMm!mv`[āI6z˚f$(p>a)5Bt6"Usp}rMN'TK$YPMEk5HhS -83֎89T N>;M31yb`.Jp6e,!&8Ȏ 8)\d%7V-6(YZ`VۥJq1C[gq#"~uDΎU2Bi}tQCIc "qź 0DpͨkF{2k]!6;-d"vA7m7>W-PoI7gַ"W1 ptA-4W9`J<ռ(T94@JiߊمLNKzᅾt Γ}jWDJJj_;)x`ⁱx^;BXIsbPg4O Ǡ,f2,h+NᝆP];=t顃,R ,X,;[b$+ ,ZMK:I|UT X26zo`A$ 2o|M}v|P۴ZװŷTFf"Ku/t6P}:њ!yO %,2Yců$\cFP,IlMYc=ߵDWU4 %0=ba~yA0D' &a)3IZ~""wτVch -ҝ(bR#g֜qOuǖfoA<c/bSD(}KUcZqbbʉPWu/T4b2, Z)ivIF^9myII+͹fpk R(H.QDc3aL%9Gwb$C!D.v w~|I@kviSBR`Ґ=Z^+qS}t7\¯ٛר \e5i$ Di:JMn Ҿs &, b2(',e-N#( 3i xm!gJ5#*w&tV5n5}3x<2д@Lww3;9Xvlytы+ud\ߡuHbǫֈsG)˰W"gnl&| `-X,) Ի2Wt=}jRb,T;= oNo4]`̈<.|=kV^HF«>LP \==,'"otr kjESpE| ` !kՎõi.V99A. x{Xͭ x_ \bM7oˮߓU~o ߗF):@aԴ7*{>PsL@ARO5 \:1-ib0Ҭx Q=l 2_S+(7NF9X3AwkJC009#ni-WhUZ[;#]1_(+EqÉ(g<! n!ϔ XM0«8&Jʑ'E~WSoZ|8Ji}~Cn.?(=0-x5}?i*;Jyx-H0Ǽm}x;H֘u=ۈ1xi?'Dir8$ktxhLtˢ[\~^XW\_Ǐw{"{fDybǏc9eh1apSΨҕ h|Ʌa4=LEo0]9?7>zR2_ o"#bI~}}jU !'&xAm msT=p~ZGHT\qǫ<RxR,QE7t#4dTRR~rWIK. AJmp)`zl jLF{ϵFxrR"I0"}H{($jKV6ޯN49G(IӚC \k D0s,U0 E(ߵ sm2i 䗽jiC᝛lvHƻ$ۛnvB*ARmjjiqH"{w@ iwgIvg)v7,|TM{|WWdʼn<.A8B/#9M_Hz;Nr򹥶lFOT?NG2O.0U XӋHkp*j.FK*x9FqV#&Nt^}:\{K$:ʣLHrbʛ?Ui{$)i1licI095\=gil6=l6=lkk+$v/SB[opAItPA| W3sB<Ը4S(ű!܎Óm;DJ:izzਝ\I'q;qt kaaۅPL`Ï8w8!V92d!'HűCn1ƢtNS7RY4*)Rf7-ב,X{@Odr}1w(1˸xmAagsTܫ1Z_y}_4}3*6aZ 2dB+;Ɵ smcj jyT=ibBžv,wnlJOnI| ޭ))u1YiuTp<N2gY-y><uVAښDppqD{{s ^>6ƽk9O2! q{>`rFĔ7.x燓;uax^_\>hh/Gjw`l͸SV')6R#XU+d{àt"qfF?3p XZ?rjyy5BX p|Ἠ{o1ؘNn7H*U fZ)irWU׼oմTL¶C PGʦDTT󆯮ݙ(]ڡ u1rtwV޽H9(^-~O~+krVge>qSqNm5qk{9BP!9zʤJ]Qpxg,LAx`ϼDG|}lc%{b!Rq<埲{LC煵b63E _Z1Loo"ټ8^[ 3g2 4 dK=1J3 u( ㌥Ȭ"I9ERگbɬg 漍m%;!+<ʀCHG3FOw^IyNǃgvp]w0(!N qA.{?tS;R‡w,[{rylU!XJ z y,xIA9lvuy#_)`Fs?4VZR|,Iq0yf kY:7|l/AjJl d+<!gkvgTݥjօ@ruL6$ML@VE|`C b59 PrUNsjǜk0\I˝R`&9S#,"h1bA5J|>X->hb@HnRp!CcꤤyJy!T"7f:aNxKal .ğ-@}ڵ  DK At&FX ΉB>oBiN<" B{fiR-^?M}M#S ȨSA[ d2@`l8+7Tʴ÷ॶ!dwg{Txv>&;`61kB cc|'r܋i³DxbSU"ϓyb{Z.[b }"Qh2YڲƷB29@|`4KÍ2{.@ڗ4J4POCPZhٳz}$½Z=*jy# zzJR=LR2٣AAH+ѱf=SUMn =;|LK)EDyg绺a[w=6OR˫L%sqăk`%QZSVѳ]yWRbnx$)o2Qz]k3o^p -Vat#zZ&_o6 FWR1teBtQ9Dc;w5afrh溓 Kh",!{REJ?92K)#1y#K(9 ۠l%M%sR`Fc@8t|G>NQl@3wCֱሟB<2Z)_ EY,hj(.؂u3j;5v]Zemw ~KF A"b .JpbI{tLj^Ά|sBZH^M)?e\U%dUPQézbjDY`#w(/!ɻ/SYٲƉbտB ^^:pO󺧌 Bt<ȃUqQ%(G]Wnhu'A {o܇0q]e?-U@]~ Ag_y/%K `J˝. 
\SHݜS&T @t+L$S]u(WʚjbϚid=Leq 5%7idU&q>lv@s0r]b:<Cua?ۯHDوr^i#* }; )]Ts24$ u\PuEF |`bRJ{ǒxپ&Rβ; LDS,5'ʿdcjВ#ҋ|1W\Y/ 8PKH޻LY ??δFik]g$ZJJQO9D^j,~\bi-mfygd4~WM5Wfu3' uҪ1wuMX~ fxl98c+80 +nS"D珳0Y6]<~r!,9,$/?0lrxL\uxӏ##lyλ /ŋCBEڠtHZ,''Rӝa;u> n!"4ȤcA;ހg ˀ|GZl(]?JD9skG%C}rxH a"1LC$!uvDe2Rd hckZG Fv!@(`iYt".]|?t& K;˖lS$9ErR$(EޤD=[$XkX tb,@GKJ,IkHatDP ژ A8+4^{AdcBZ[T5nr#ڵsȔDz4VQT3SV*)G  4BDrJDC#8yΌET"9|i%T@d I@i- bCdcJ*U= Orۢ\b=%BI06;T`'X"-N-j*㿣7>D%(`E&>d,5K#8`aTvk,c^@{pD@pA*_x ~qV)LqKE.!t1Wa՝t,?x;Lp61i<~G6la Tap\\eF o/%a2mW/ _C |~s} "*&1_~5s $=~p%Zc9Wf:ET/ K f_þ+7wޔml)ȣr oXU~Ez(4k%GVÚǡ1UY#a1A&l6@$!|gIʝsHviv<'W&jb@YČOki#Y9|r!0Υ`DSusPTBLpwqB*JbLZy<BʀPUPY\JE8e.wQf8`4V#Ҁ7 K\(D$BoexD(,GA'o>* LOHI fAhfsAe4qx˯[CeFTYR@Z"-EQ0D4BJsEmTR;d0s8`c\ u; ѭW UgAuQ Rצ@!B39P=((x<@w6 wҒۏs`.=7,$P>?ސinbds+knFpGw'pg^DjH}LD%TXY?%/y!4h@h g:J5h1ј}PaB<}( '$AC 5j2k%+ޜv8ؒ1}^Cto$%V=7_HME-,ur%(o4yӨi@9)|/$7AVպ番n(pBEYqL֫AVi&w `e뵛+:WGUORT=AQFVŲ ȿExv\fu &oxvJsa$1شz$"z5J$hσke5I֣Ucw֌M˄֌Mj|fn*>1!UZڜ^OB3p[ԣi:8kQeFdHˑaHvR:7\{-v:!BM~"_YKzBN5ap%i=FG5L)|4_4)#wm6ys @K* /Wl&IhcĐKؕÙʭX:hQi}2*1u0h=FO8:9 qX@4FMC 4&j8SM6U7WTM(F_W`PK7vgkǙhGkiM=ljW/#Ȩ-˯eo٫k8eZ ojG2_2~|yEOJȯR KIk~i4}k٦xIhcĐcpV,8vv%5<քHqU1td{zpƫFW7I(DXJv ak-Λdx]=Bm8)5/^86׮/^m pڎxB{tj_RW~6(L.9!aFp[N'NN-m^´̍/PE@oS?uΠʿa|8B0zӢk'/?Տ[59iGU.f^Ч~7c5WN.Y^֓E>'~S&4_zܙFB W[4Rc¤tx% yE 5? 4x%/H(Z+j%y&@]dkG3oKG)^+@5ZHIhc̐+<\8)c$8C6 #cF(,j".0ADEUܩ8˼ 5GkBĸĸmn1ءK$$e(c#J)%C#,hTJԓ|ZgѿCnq%(C|ZrFqӦz4`Zǜn}e_te{Ay^""E%[%4S2\;% H() ['U54!$>3WiŬF՘Tyw:nLFBB1`V'u`u$l6`1P ̓:eյGBOjDM[jPwǗPī 9og7naH,OM/0wV3$WÃg58UcbS35Wc(+d)5gJ*'`Z;zWq(Փŝ֥ KBAke @Yi$ m@>cy,M&O~7e.gq_W,WA0kKw zs6 je%RiJ%Cyo@#R٫Z\OX,( >.,Vb/ŒrC`hl,fUS0suY+yc@)((e";ᔴN;)V1$h`yLIWus~=@XD%8VL+ONKKqy"M] G:kf.gqKTE~7oYܯSq Ú3c҄)5LisOsTT'/Guh$f> KF)wʯ"<-rRjPFa& HB奓  ,nNù`ncA1f*Işxyks}Qv?PbK!HG H v4'beGHo@v]f :@p;Vzһ_] TK&5? ic}M2B\vr2\e`W͓>2x!NJ+9` Z]JXwR'ZOr'M( (nw~s{ h86J6DL;1qGezkk)%f]IY7%W1Qv8UI2.RCdĸ""ߛ(pڽ?+8<| WL4^Q¤MT7$GDṣ [U 0JCijÎ 4E"diKQ,rNP"-AM#h5ţx٫Y,s&$sK?eRbRz#*,keBxP଀`NfNY*|v N!.ikL1JtA 5. @50" 0orP0}t15{my *jsnH`k,ӛ=,\0+p<\']!0\_JQɘ]:"jLIU\ۑ !W%r5aYK-GKaWPCI{Њ5K<I-D]I 6NC% !A S(`]Zp&Z)ly(_feUHg; 7BN 10V w"XpLt!m" Xxkֲ:$,$XG q5 /n#Lz 5DS I hRw 8jȥCI ` J͖[i,K9~/~RrVCg"?ScBFõ^q*[Ȃ їqXtw8t 'Ըr ,{mpݡޤ' N7~21mxճIXΙ'Wy̙9fM^<_>dٽ(u=y,Mag~҄rTwDldH/'߬6K-. AIs^PߘA8WjtJY@jhJ4xL4`Y <’` l)²`BY:ݑ5U_-Jw-1x")T %pJ|H<zRK(0uuTxRB"9%_mE1^LINCeY~Rיh/t~Lә`+Ūt,:G8_ lVSs׹?l)Z& ױzy W~+Wol!2>ػm]Wv9r`ftwi{;^;Ihvd{_+yv&W"@UiM$z9 h&E{'Ӏd+l$7zVJ)e;؜NJTϦ># ;læ͇EFܰ%1:ם yНqjv x6w]w<݂+ ;x njG+awԬuYU]vQ e0W>cmFLEm4lpjezK.?ll4 0]JC*k0Na8޲9HIJvb1ep[mGf{.)W=Um۠ca%FKld!0(U*o >6&.6Y}+,e"4e&7LJFF)'̥J"u]*-,oi3#$ #2x@*-$$Z2KYҲgLv_d:'΁ vnV+c 斱vYБf!;$nYvg뭑ǃnq84X5Ջc pB="xa:RY+bhמ ǓxPhKk0fTV߶NNϘ闙h3-t]!JKn)w;:koe1JtUL0ukqމp%;özQ3Y+ׂC8s+ax ~Z@-xK$h;dͰ;`/yDel3 * R%@[/݃-w( ̭B0+:^z &Y+RP0+}QsLͨ7o|IrW!˯Kv8=hIKO.ܢgRx̠(0s)cyd"{+6sE \ar]N ^Uqy~iÞNIeN{"S sNTkb.wX"vA\ȾV#joQaZYũ,Mgf>@SmN$Z!i6%qZ'IJd+YL(ɶҡxh"3"%vyf)H缾hޣ]|6خh]>;;;xd.T#e/rS_R$JLw'.mOjG5&U-FC^ UU[@JQs_\/K't+Y z|#iYWzR7iW9@#MX<~YbIWקG%IחϯgRٲNNΘhi7_y5i{ЋF4BYbRV5Ezd "-(҃jOdƙxx/jzQgAv}v~%I}}FD[e;%֘VTGG8N+?o^M /;I~"WNYv R:*6A@PՠvI*+CXČ,s3vZ PkLtOH2N,mˍgW#|vB8᫳XVXE5Ud<1? ZS-^<2#vi~27*%FY]E4:ߚ|ohD.9Xh:ek-wt8[eѽҢóD/*˜.GՋJKԩw4f!w7${⫝̸_krgoϦo+!#1վ ~K)I+zs/Fka(v'no}~('{yṻZ9v;Fܞ"mxB^H,??]x,.jGss#pV8̥7o|u?E2ln>>b[L{,@w<ۂH+w+>`&p D;KlO4,)v]$,rpEN@]:`iu!\(&J05ɐ =j;, 03k#( ]lL. CRhs17" `d]ӎBJgR掑.l!BfQBF>ZVUY]PviMb@=wan(ӪŠsV93<䚫vgH].gkfsyH,W6xaT_[k1bStPnOE<"ݢ2jЏȍZ2J(#L̘"CBG!beUsT48o\ )9$"xA$:ΗPjG8VGWd)31 QY 0 02J Vxm17RCTr!O^HoHI-R[, XQ`#hqBdΠ>6s(|""Yi ɾi4@"-W/QQ.RZ6=1zGiW_-`] fe# "FJӋdJA좼dt["N7e p髇*z@;N)}|zyA[cTzsYJjaiVr5L|U1VZ峭mE:? 
U.n9Sޭ"%ó2Ogd|6_e<̥rҒSޑt5KKl'E)/Pޥ {4ފ tϙEB4[\H1) s\&_8aD4P?n+׫K^4L?L]}tWw3?g׼b$g ~_T 2'cZFY*'6&kEW&"`!MTl. Y:\\6DJmU=G}dC{}]f}y2Lۗs6J9l1!"/B颦^u067}.yvQ޾Nhj:%؂N3դy+'}U?lA&%E{KäaD_F%~ ,P棊@QH`m@7d RV`YlfQS( b @(TZyLRF:QR%~R@Қ(B ZslzJV$[g.*7(%e;m!4H+%@eGv^SVR3l6j?DaJ-+tزV⦡-)gME /mQ!mQp2FңlH|bMЈ!eʽ9ydA*mAƣ&x'j["]jbH`rJSx掁F/jRur1-zP||'}-Kj%4:W9>}t29XC5̐9a1RKz9Xl:!h 8FpՂlj _M!AYtTƲ$`  1dZek[B=%PU.b,C ˒i򞭣:+-.8S>=׋?8ݏ?ݽZ)z!L~=9c۟]>bc 3!_L&_;bqy5eק>O9;j[mS[~Ne", __d -?+O5lZ"c&=Δ3`h @ǾRw_m5cAE{۶?tОÀȺIӢ+F\INog(٢fHE%*D) Ϝ3ڴ/IJ fٻ?dmza;,7ph\>\2Fg;~o~ul>0خnBKEKW-=rиlj-њ9њhiG%B c&8ᬽ7JVY`GPQ k ,e_nTΌ#&P,R*Q'=J0֕H.MGp=g׺}sjAeEɺzԺ8tj7M(,ju(}?,jbw@=(I=]28']Ge$Id#RRAXDAL5Ҝ`SqY,/8!3x8ϋAͅК 8òpj!)w)mD9Uq GI"QNp(,$+A(AiB+3&: >403}5alᇜEMĎ<,BfeVO @mra j8a&o22_|4ABwUaU#x#뾒Q!ۥB-.)B 7B$/}H[+tΦ#w6]9)Wx: Ql<8 4SmXP : QBa,ƹ{5}Uqvsf&Ge"Yqv ;8 @ûlWa||m/j7);> s&0#HKS-2՘p.|A%$* 1q<|".Ba1jlflC\<hq#MGR)vCQF!SDq?n kE.3p3J3Ac 8`Aci qBkT*J W`c׺[tTbcj§,AÐ zH{̌fQay]quבL@FJԇI#{dKz8)ɐ1rStF J 59XyQ##:NLH[)1C0+TEMX[WgB 9bG%pUHe/_ii=6`Bо%VZ^,rzص=Q ɓSeP,5r,6qTZN "%I 4Τ$̤)O1frlK֨[zA?]YW콠*Li'R1S!DI!yW̓\ŊhXPZ,Ʋ|N6!})m쟽AZn "fJְN_F +v/ULc{zB| B1Lk*RiN`hR >5M5iexD`[(éV,pDUHw.J D]Dcp* "i޳qM SGb$~^ PJN?Uy>K2\O\Q:v0m!B8e $ZY+eu˛*)Q\g˨kh ©u4I-!FQQiDhoX:P<Ug2yw,"CIexF=2$G,n,fs7wݽHS7'ױUN>ft0koJEvA}VUگV/~TUiSSU}OWKkңzFsj\ΉZ=lQPv,Dswђ T]\dmɒQ]]q F~ThFk`>HZ4]$8C-+X̊#ٶq3Уpޠ{֥3Æ_1[< uUc^S_,yR!{FPgwt,Fq3Toc][/$eǞ[8p| O ƥّΤp}4K<1ag5oQwA7'~Mq;Y}HM}& [|&O\%sOH f{u&(vz:/&(>rv%,8^pPK޴s~5G[==m(Lj1$ix'έz % %"yhSI0smj&Ɨ`(p-䎚X3D(MS q*D\e1In[>dR(L:+]i K &VsM{w ͽr!&!<>ۯ4?\Y U:1͏H>- v/4']<ÊDLOYڎZIi&UE̡/  52sDŒRNhҐٛ- HdsPL&1V-Z4S[Ds? 선 e'J;3զU<<ޕD)F?LMOJWTvZԻ~Q v-n߼͆ҏa"iZP!$7@taKC/māFࠞT#;EE&\׬wIg~`dl+:)Mi:̱A 9VM+ge=`A-^^OGY\. 30VP[Lg| `heEUf+<_ܵl:̿|{:W^KY/Yml,4fdQ/]H!p.$3ĒBޟ씆[SNߦ=0Y\^we_]SfHωeNE|:^  t|80N;Q6忏qZDP3N* =T[̋/P1ڼS ĵ9 h_-}e79RZ>I%-7mwƻܡ±sJ^_B,Xq_D8HIppR0gb01(>N˸@Zgyi DBR1B rNsId޻+w 49i-ck4%86 \& !-J өK \,l]@ խўP_i C:شV8ءa1kô:묾0^eiËN(8%o +|y3uLEQ\ذ9Z. [Nf8*%츏I [f3n.Sz'GA:4\Qq|CrBt]oZq^ Z˛B+X휐9YGl=Q`tr [+/}taatWחIyN݇3'+gš3a _=BZ_ %~JOqXGe@if@-sH"i9mݺ#ZyQSފo=j쮨UpW^׺?Xmդ#czfw'g7a.gVzvYE}ْ]j;'vNqԞ6=B8/hINW˰qݒ1CKƜ.f0>'sRjs>uwV{~ )3_f(O޲WXs#Zw՟t~Ww@wqOP֧s;֘C3t,&؏o۸r[Gs5n=bۺs&w+5Jڠ W]O KTBcȲg/|f-cK"DXǚ^I| y늼n Aϖ̟a4DZWܻO}0t3DP4psA¶zu6E&M zD_`>(&\tzH",818g GuEAGQ#Z4CF7)2 3ëۦop}}45~a F+\ihAiʈZF'vp%Yci '&:c:Y,q4NRӓW8FT!NL56'&@'BR B H,X阩4| Viՠ0C};5+k-ܚR.07զfZ;Y(T=z\4u=[MLygw ĸxIzRZ3GHw BѾ-CM:&R>{䦔"M-G妡Jni©T!wl:}Nt"V8L8T4E[Q485:t[F2X|*گ#X: jǭ1l|4{=k]w UՉ'D2NlE9%JPA Nto/];#/~N9%.}Vw  D,~0DdvЏJt ֡C^h|:?_G?!*Xf2:?U([^( 5{^I-ø<7 q{E@J7 k"K)}To,єRzO?Oр0"˝,[@g H.>HwN;RyC})Bw%iQKGXv7MpVlZ}ZbOd-K]55֬{m-zNO4?F/kd44OEXDP wS##E "b<'"RX $Ρ4ߏ3ֽWK8|L 8/ t[j[``/^N?ή%t1Xs7w Ffᆱ:/~L?/Ls׋WSg3sWᇗwO~z͋Oo߿Z|pn_;͟^o_=z6[LM̮>m\+*f_h>\Ifjj/Ifj\ odII6}$%ӺL,ODB@ t7h?ǔ/l~U6"~d7/bsc/yo&70`I5^/Ӆɿ./EE?/L>Ζo_~_~~:n RJGj7 a%K*:z7aҋJ?}є3 ))6ELQ9?;wϷ|N)w@/G+x)~,'gP^BnLEðlXN~ -V̳uK:PUEB+x8ϟ"Gg;,ӛl_P TcAw1`М@y3fOc |@*/p|yn&&͆< ~3nAa)U6Z|z=%~^H];*s<lUbTq up8 &;0~AϿ CwqV:;rqO&Oo곯`3_|~ vVݻ)?f㧷~նбw8wz%6&bRe,e CO{o+a:n4ײܿ&p VʰS1K70x:- \8]tozG' fGQr=:if=âV-MwL_?vCڽ*s?-;rfZ.WO:8.g2[P9{6(j1syGvr`QEP8ط4穑Z)90~i 0R G!ecBiC3x]ȉ\v~QG .|{n)d9ѭ; m@SP3v̛c쑙7{$ʪi0twrmZ6||>ID5'ƧN6ΥG`ֶHPC[^/YymW84#, Uco^ˁr^L/g?8Eu片+m:o^Ĕ*gq?q, )mbO !hp+Dpr* ?z8rJ:\e6ݢŃnwgKC6f|^1xq(ۄN0`` c.iǔҥ)Nl Di&se# JV͟L]@C;|:865ɩզȪbb:p5w}"^ 4 ޢ! 
oS {Tg G F*8)< )bamXQ9#(āu7};k18M r9n#& fF ԌӘXx1VATPHMMK*m  V)) AV)}JQ \<Ɣ#LQ̦tLmdWCu*}p:XCE`eה0'5u6:xD+N`?Jh쪷l8{lHc=j "!Din$HZD0@X,R [rhrsYP[^jD@ Z/ђJ;o:l?&x[,Ґo] UعD\Is1+^#:AE0Ҳ-,{W |%ybH1}Y^sPݻ&齙@!DDpRS^B3&9E6`scw]zps &Ź@C3PZȻ+HA0Z'9QJL=Kx^FPWض"8G,xO~%[+3y y#\ՄwI/{af "%-(F`c q1`T%㎣{MZPd̎ZJzd ;, in:]JHe5#?td:;@+OTh0~5Ia|B@/ٵgH_Z!l7ܤ"lYfƩn.(M4P"aBSMI) _c< ex*.}<жl~c hm9Fs5֍,\jI&ILD Im^air-a% U϶@fECZb;#LXDD ^Eki1b>Y̯~ˋ ?RI.>x3r23dpD>4vJ8$^BQn5cp ^b޸{q8ΖT}c͝)&43KpS"$zkNy#Ѷⰿ:ZsI~sVagΎVxpōR;aP.aVpW>DCEvܯX(q(ܷz\g `N.7- qrB*-<΀ @^Qb'!pV2ȒܲWw`4Uzd{3ø<3eb+ %Vq?—[Sf?fi\B`>h-E} u!!'MoN6P"<.pWkq8pskH)W'HB8'<\KlhM'e {oB%K]0jLq40A`\qZC]pAX ,5>_C;u^+Dqf `Ɯa/B##-RgfV'(5ڧh|&Ԍa!{[_v5Kg4h$:I3u0]ɞ֡bGv$kQG (F :Z0-ⶇA**5epϕcv.b4bdұI&RSұLj qұ&sp/v,kq:"hR4;'0Bq)1w[Z(%T-;%8} n+jđ"7K]H-]VUM}QLsc ڏ悳q[0Wd3)8fSLpSKR]Sw^K?m#OAc$8FLX\ dJ8zGuǸpZ|-9yFET2p4Iᔃ1^y Kcc,T NJD|*'pU6@`aKjElaÀu{/Xڂ'+&$U<z@JiÂ,0)ݮ +hCt0Y)Ki**Up!B5GC%n{x+w91ƁDEWz7O7Bb~p-wqʐW1~CTdKpXNP.;QUn?4ਊim79ǹQB>.wB<-68~{LV=xOvpT0y(Qj &G:|yҰf9D|ĥIT*M\pr"CY -NSha#Xh?PiI*RЂMApј2xdP8p(CML;+ҳ{v׫Hx^5fmYֶ[Mxy3-:rzYYp~GY{f@Z0w/xV>ڤqsU?*c,[?:ϖ w܀CR\?ffsEkdv+Ip`f, f..:h$)xobi*7ڬ6bΨCš]wD?]08ɚџ5b5M/Œ#OXwV4/*=uf(~df|,J4A}0f'$-U' Rp458.{ȣ, xh|(oݹe׮-[plajǽD((ząR9EO0ф>\UcC(nZ܅VBE+©^뜦ݖRD6k[U$kX9: ziGqk6hL%Qn-D l#sJNpE\[AO- -̃!ls8'z7?8m14hdd傣@kswJWBy#/i3pbhYtpޮAdpF?H8QS$iju͈$"#CAV*sN.aVy}mE/R1Q NfrjOy ҫz$`_!"gޜ}q%=?X&A=&Dfb0e3T8|mprQo΢\IPQ%iOfr lPąv0 Ne#>fʰ8@}[spq^iED:Ov/5fF2ڦUͺGu\<6 MngOtf3}swOnt雭},;\:qjL|Ћ3B|6SŃՍBnW Ϩ]_l9խyvc\m4xjP:>a&\(ӓ3DUQaO3c΋u,!:@Iy*ɳI%{o+_ Ql+Ny?SI3&=U2 %$ EjŒ@5b aUΛ;{b?='F5@`_0.9_lB$G x$p&$YrEj*<: z:4![_'No1+^eq'RZA u2X{E:hXI>sf'@p'˥Ǘ8͒gRL3MKG]~V '6J)2hQ0ԵT5 Ґ+&JRLc+hT;Aá* \9@QK9IVL;̴* yNЮ 3X^GG9FdH/zlVq󴪼zo+o)R rJ3 A(ٲ+ƠJ;^ݎeS_2Q3^tV{TWˍr 'y4 89y=ptSr8AeöTU(_~J^{ݕ⪮˧CyEV5ԡ:aSz||_:bctoFL=cG$){y1V/!%l:СJYIVɥV" rW`D(&Wsm6[ Kq}TuSؖy~Ξ>-mQZ_xc7_ vhrws<{=zL^:Fɽ^?swͻ^zHF.RH4$iF0zh0s7ثÌ. C{0L@KIʸȁQǙIn6$GU$i8iJ%ՂOȂsqqʕH2cIlvTIr&ҘD+(FH$166ӲJ>bO穻GbAl0z$aHl6.rMqZsjAc3d9 % ''f$K5C-rL<ӔPRN$E rRq1*BflhufL6BgC\3FfZHq`Ep4z9N5$G99JHy!;`d9c $+[{ kf>]=`Yy,T6)qM7J pO'*I̮aJRdF]eq XeTf|<6"K/aֶ3OgߊP`%lYvއRLǤWϏXa0VYqX Bܭuxk Dpu\KXNE倴, 6N?D mFeܗQpe/ԬV4Ց9DFl8'vK%G++2<4)lLl]_': {ȁ|EIdrn'JcfNמ|}x'^SJ./=Ég"E;[jىݸB);Sz6sC]BU ^6m)bÞ%(y A[t:_Q։X)NV}y*Ja[UVt4iJJ.Z :r(#8p Am?̓Ey &K^MWK&^L$5 RQ@l y{M_q$8SWsr%„'.NHm:i2m@j1$@GIRH45 Q)R|xD qڃ ~bw?Iرw&_8H`mA wJ%;nw]mDBN1NTNN$CSי[.OQu--6[-6 q+I` Ғ㶬sGjATv"ry"r"*‹ITE!4_4VWNf4j0*I1V&Fxq7BWP((j`_F p[T禁ެa0~N um',x(y<*toN씡vue_I΋.}0ueF^jsS=d&d2e㺊1T'f Bp4d6$u-DWԍ߮I:UOGLVtb|3I=9c4`WOI3\3f 3RRLX i2ɨQCd&rZ*d&6}/H!DםMdw`HIJ22efG$6#T%c};6N ̬DJ3B$Ήbm8S7y[@9NxQ\X1e8G< 7ַPQF>b,6 vG$ʵf!pAZ7̔3[˰fif8ADfg@),<2`zRcY6ViMBЌ5l~M'鷏fk}WmCw\>_oϦ5/߽YMdۗo ̗xc8㧻[6/0}M8/Cqpwk>~\)~g?=Nܿ3]cdbȶ$BKL~eS/OocM9Ld(`O7 @sD34G?P!O$~XfO?k3F(j7VݐB6_nCUBlL^c=^MO֒`f_?F&%l{}5-s- q4F[ --˷ԧ9ݩ^-o$.J_KVHxY|%dAo}fn ĀX_6 ' ڃ hTh1F!ci~ıf11D\R`0̜.呙ѿN!YrV\*CNkHL '.GE8~Cf8&/Ku>r_>gg!'v*cm7 $, AyE@U}Ưe:3gā RyDf% /uR+"8Q62oß|]| A^Źb@GNE)>O7.tSo,c܇~ap#j^'h KpV~c4^2gȖ: '>8 څapcϩ+ho+LJ볽8AJM)B*#(o7F|ar` 3d}o▒X`ԍVצ+ Wt(rP7bكG}C #O5*9Ua 1 \\ 5̣)VK*FOgТKy&ylǚަ3p_',:<ڬO羘yve[KO7w[0KG8x73{6W6LJ 9klYW>?Ճ$Ckrv}S+3S>$s)YXCf_E9foZ@vL68eM(|ڨ"ÏS&ԦC X4V\ `ԭŞP?7C7M#/ lUsw2 {;BXn;3<su"ga>cEs5mv3;dok_A4ul]-ρ9{MQ/,,z)F /|j֧ueM8\/8&b.OvfI8?č/l ؛s@[?$XݘNz򾱗7 (7NY}hGvh%go+&|O`&BQóp|4+ mi IGJE/9kπN׊Rb8z744~D<*jtv"bb|ʑ Ιا}egjZy\ׯ^ə4c*A6HcL"WB[B9F0}3sHH+2RlI˯X݅\\/' '](3+T/\7?_bnŀEm~kABi }pO0N1Dń{_~_Qz8N[@__ 7W). 
JwBGןgK2Ou3Ⱦ>3to 'oʡBt!tT8R8&*]5-!q ciSʑ=d1ve]t%@ 0@~R,*+^&J(p)ؼ2@듛(sH YpXqR sGoZdg8zQ  aa;y8u=0=W{}OF9ʠ .|̗|G*-)0ʞtUH_ZlKnf_v) ـY$#/NB]D:#(5x:do%E RNkm搻mUn2u[N6Adg' - &m McYړרb BF+t>%NlH>AImD(Sml2MDK)Q.hUwZxJIdn0JΏczۻuZ8eb{=6wZ[Iք=亖(ʚl]Cv sRFx!Le1[`DX!7o{.K\M~|$A\?vu%Ǘ]Rn qCjߏt/쾹9V(}$oAsDs< ^;`Q+{?D0Vhh+]5GJ(2 #iٹ f ԩs ;Xs\)u!zNr6[t@ !DR$B95na7ޕ8vwCtҙ/Lll)c6?yV IRq]CNj&H8Dr!MΆoHQOQJZKv=9谅F]&I_&9.X":덋$ A戲98u,"l􂄉Y[y˚ΐHF'J8Fa3 >%fZҊhfgF\yxpvRT_^T *9n<ns/߁gs'ۿwpf4 h+cH_=<+qVrѺۈqC9`U,V[C gdΗ)Uf[g1S%RFusClܜK6ҋgLlt>+IU{@aՕwRwgnX|E-WZ(o#kB., GVXwP>\,ef[5 8Ok5U.GQqı|Q0kfguI"_fpe-Dߣu]]%Dɩtе)^x-5)]lE$-W;k6Uy%%Phm_\P@*>TnmnvWڶ8;JpU)UW\5[kzsqIːNu&"d2^5 R|apʼn1]z_ -J'V9,0V.?ShUfMΜ= ǫ ,$FڲuL$VYD (+/c֗I$FF2uVC Hh,~Sǘ<|E {vY, jø `:JV5̩ +E9T.FUի⌻;O#95?sz #\HpEm<+Y rgE_By=kJ|Q?B2ڠ]*lQSeu$q"dVlrV\+k>J)9魽WY K9ANWXz :ɰ<\fJ^1 NZ\X9ei4FMiZ +r.Z- J6-,ᨇ[>Tp a@ݥՌހC!w(5ʲ)CCXבPH'Vb25{b1#mQC8`rsl0A$1cY|dF@ MYEcP]+Uޚ.Vىʺ}l$D-"$%Y!i1Z2ˠn"aֳf(.y"B}~x2*훋K6 8HۀN^}WnWɽѢWwcoF:\A4*sP1z/g?! !/e2rZ[=]62b9͚fzVqXa(* +;.?Ȕ ,-X`3 v2cb ]("(O YB{&s͢sȜy̥'^%msĞP8M-+ wmzYŀgs' f&ٗݴ%CfX俟jJiH">lIs.=qA/]8Nͽ36Z@*z@ D !?aQoǡC"]Dy{ O-تMڦ:&uu雞UcNA3Tͣ^ P]o02ZE<x-=e /=5AܔaXd;V2u^B R/>URv(JUi_[r)1%Sw)j*PZp#4|sf"W2D'Q`$($ёxD2q*!T,q5z^bO^Ei{0m2 겥2mh*D Bڢl;3see̯2 }^ Sh ֊{Rv) 5=]b" Fvyk)|xA&2<$ l` jTyO5Һ=Me;$Թ3֔O[΄_Z]K].r9j ` XL\Kk0)aYO.Qi:Gʅ9#sh%P>0g/aΙGD\8S\\yՅ{sn=O5(D5{ZkjkcrMTDQi7R3Vjܡt}j A$=3?,GUh5vhMf%,S["Kv=eS#[e~l^'bZ6C *]Y:`\ڐ ^˵/s^? QHo> 3,[yӨeZ0Vp.u}<զ\OڔI:ݝ.OW6~ƕ_jpӁ%ݟ(Q'f aG1Fc&ULc$u$ԁbIIXkcíT\!+VjhI*۽uY2mfe39+5)7 & Sem)-鑿"n4"r¬XJ'5F"Fb=j0+Z&+kH8Q|=-~e dIĝ+-:6؈~h&6qV?)WXLp(ikQX?@>ӽ|(((lblFT W`Q̩aG1 DC񰖉e }aŭnTk`8i۰Z:77eïhֆZ cM1ordM<=c^NH8'gll#dVqmR9M8%ZTsO'\a~.3״kŹ`|M.4#lTav([53W8@n=OcDnK)XP00|jUG=S{ӻW y DQ159| O1Ad@4cYNZԇV`Z}}N1Z%q@{pH!3JERB1%ZX&p:D ,L5m@\0|꒭Ll}ōWU/ɋq^[̡y\>ԆbQd/yO^܈={ԂWfQ)w^JxaM' |t7yH{|1d~rP-d;a"s+jqŊLX!9PbB>ݾ6CTűF=BeJt(ҭ{x:{O9M{4e2Ա #ܛsTohyhz81 cgVA?*w 2 ^ij`PBrryARV)|1{c"cgL| X`f`XbDe_>M=o8%@ ҜǖTעQkZVM;i?t%)`[BpN)9N*d , ~|ϟ=BHy$~uQCW~C qo .d 4Q1TY)c[k*7K3%̑ޗh= 85p҅0Viʛs4Pnh_enc93J/X Q.(\)%m^5rqoom^ن \F2d:5b1)"@"d5ǰ)-cBe@uX LuTFJ]dcF`dR(}g!7Jq`O.T{#&=w{U{xL;}Ȟìtc(ƙ'YK^JkL_TYxLՖ7`/H[&Ӧ:ou QO̠eߠS8{t6卟U5G=Sbgm93Lq*P_ު/R/9uo_RZN8[+%gYϩ)tKHkT/ڑIA= )mJk!cD8vD 0"7zzQA"(h#k)ɘ:p,C,)"kMxl+deÁ6JMFu D R}j"cF;!uY` 084TJlH-2Wm:XewGTYaʱ12Fs B,2#ѐh *s] /n<=ӱfxU^ûo|^Ug-;j/zP Kq*R*rB--nF&j~6ۋ}ՅlPwo崘,f^\:؋K9ɽLuF/6@Y\k"PcGkyӭwntuk i17`*}=sjS($NE d8ƴ;Q5Uwcav)'Q+L#uH#b i a7LK4+#"v;*P3Ov:͕R3WjgH> @iy̗cə`$6&V<ƀTe"2$F!0q̉ApX=Pީoog1+s{f5-:)i62a2huҘKvFulأ4E'aYE.B3Sud4hwO ڦВP ρe SS,L(;sT2>-C jaUKM:@.ϐZI{)sO{jKB\|G^%u1sOøQ^㼯X gIp$|X)33l_ hӻV+ NL~T|eBF,B,bF@}raq* >/0m";[%#6~Wb|yqr.UjD8X$WL`‚rtMi0]4xtnVde7w*:/J!X_i.LNӨR,)~^ӣ3KVd`w b@{chOqd Y[qkreDzܒ)6pR 9?s.擱kT(EB>9˞'Ť6!i,+ٺe.Cp}O6g22CG|{-jQem#VvN'j ҉K sz7A }kd}֌ nnA]Eԧ_\ pk$7˒㮇*|0P$"(ᓗ ' ` ce)DPfIHٷܔ@VN*4/d4C*G, KuڇJSm ug{L4}oƹn܅WRjr&/l^J/2V(uQ4'%dK\(RLTjHJ{܃WR^Վ|B#@mBР1*po6szo# c\f>V'qN(~UL.o—6{٦4!\_#k%?m* -[ke0$R2)+'`*gCZsb[eb[e4:pށ+<4q`F1aϯsn>No~'S~6/=Xv(JI 4z4}[Bj0y+e+QWw0Y<؇,Iڡʑw&_7IC:vz ;_V)I 8cډ fT#I@šF?/([.wY1BgX/\Ze]92^<;H,pFh( t`K;gdR4܈>K9>'vUIrvbeLGW:%iB+ Vi 4:(9)9kB U$HmtxM ㆷ1%=;4YXL!'!'q"Lf]"E󕉻Er6?bc/kw|s2NH&"ջ@, rosr.qUҊg`g<3$YAu^R(;;ly挝ܦ­~7 mtNJQr8[DA u$x (wЦ}֝z|,q~jyW[^+D::x6o(:'QZyn@1E<4􉉠:|]:λhGpEGеh ۳oa1w+(;BpkMxtLi{)GG4>&#T 1U'-B J( Wi.q3E* HΉ*) gOEVRP͌MV_WBVZFŔ3Z ED(8$L-! !FmeSs C`0FUt$1CW*渏o4H%%:%zϋ7g{/*>Y(CwcKt=Q/*;k!BK WߘD]C.4ɉ+ s̀]{(ήr]Mԃ<FmJz>_IQ} q ǤN[_OT(9Mö-FW4l 4Ѵ֐6*4PM_mB-sOڍdPA h!hVݧşJNB԰B߽͒GK14&Q gRiYmT3;\yoHH9Sd̍.zN? BGp&2 ^2/sK`% 'cdHm.9/XY-m@a00P%! 
:WZZhɭRy) -h23­@a.:9Q|3*!'>A@9KA1g5T@laiP#:'ǢStZR&T J)).Yĕ[F'g4EՒ]?u2H O@`T}5A& mvgwX (iw` $i+@]<6e`t;+M%:"4;*SQkI)K}Xej% KMd"]ڬ^,Ђc(5FeÖX=bze0[]bM¼}q[1MvmpA-1^v_ɪכ2}tѡBlYa@g]djFuIsny/ϟt>/>$NpNB^'A9jayU8&$wvhjt"ghypkCF],ݛo ?VB=ث"}.R$ 2Hd5M:ERF 7s_%* C6n.;՝ڑՊe0t.֔qyiT8.v WO^.l46Fx>lyl9x_?jjxWq6\KW`Z %}Yi4u~󀩦3>ğQם;|CH"9?hүj6^yd׶@)ޭ̉cnaM=w8>'DjxnpWO_b-Z ,١*RT9>;~KSSɯd S*ɷO&`KZ>B,͂5;"Y?51Ia )pyu mf N! US\RP^zkO _geQDz6EgU0ǐrb(V3O~%RG(CIi %% .Rh QJmIYRuRo.5@Kb"TBKGBjcPG?xF2 3[ |EC'rɜD MƔ`L@S8%q[PH" "/p0¡qE2Qu6y9eb ^OtԨ{dBJiW]ܜ&Rh3]yRR唕1E*(]i5Zc/#[*n#{0\%'#RZWK| 5FR]'cC^. ZKћFصQ "a/tX@C9nx).hB1@*ڌTpP  퍅X.cSL+͙ b3bJ q!npg"6<SO@[S%PȌ5 (@=Fj$02AL)~$ԑ:-UW74J)C2-fm01EDW3/ʲ3T8L7.Np?\Xk~ mJNtj"2iqCZJ(VcX,(X޾rπy^HaRR EM!%u,] 3Q9NHa1wGYQ4߹EI\Fʂ'%)bՎcnrY]ZjJf3%:>؉Om=0U)u^5.[8_Ee-$8ȸ.3(3`rL:IlK`˜CBQZEȥ+ 0sP($ S3[2mogNhVv/hk%:'ʶi)L1<z wW@p9Y`UM{%.\d\h֍-]ޅ_wocȰ"%lQ5LcT:fX91u]r;F `!Ir_u:60^ 70`!ӫq[UG_Q]vN0|%}WNq._>X&nVWt]ޟ ]aV2Z" Y½ If#xu*8ll$Y 5<]|% GQuߓĚzè&5dO;RhA,GçR8JJ$+ _hYAvh2NKÚS}N%JJ3}d81NѝҪB] N:g9B tZr)#_8iqԞⴓL\wc͚7GW{ڍ; =|_c==\1iem|0V.ׯLjո5ᕇ1dd[~Vi{$J~څΰY x2pNIr6FoZC%:zCs@7i~(.I jT%%q'6Fr9vөmL}GvӇ;:軵hUdk.,䕛hM XL`bc:hwtZ9ۻ%4ֻua!D[۔WUċ andb)-+^7% dJcmVA34[bv; }W0[;az7>S $#nx|J(7`:&; }P( 5B" O)15H xA2Rvۿuz7>ss Y`ATS*$A(&J9 ̥$' %z|s}78M)f7:6ø~O:se&dL=t)uNz8N{31,}r@iӵvK?KnmH 2j㤡)Ű5^xq֮lWAwuKm( NԊǧk1`]q;lj[ԔN'-bGo?|Ʒ>]tse Ҩ~LW/<2,ʉ5f\֚,WdBQ璫݌]!.Eء.0QW͟xlؾrٗ_FqgypLiŎCf  O"yQcy $뢣qNK18Y A:Kx}*/dg_}(5=4q=CPG_ܫJeW_|(ȰoE-+į?ѶiZRHdX_M;_#0߲ARZKg!-x!#K_?_N-eQ^= [oegݯe_XrݦV)AkY?Fo6vsͮtZexc~gokBxƓFI(Pɂ%UC`PFrFQn("P8v w:abVN[J0)RIk a8C;sD؂PMdpN&aFa7qbK)q!JƇ]_oS>`>z[,駅%v`O HJ=zY9[G߹MJ=r>"Fѧr:|:/4=͕߁#^ߙe<,= GiFۣ7k.(PJ~}2\z#D?w 0 ۉs8:5gzZκ٠Q,9Bi~yÛCwgz >wƂqϝ}1_&H~0:Vvn'ʾϝ;O䖲9AIEWN.W 60IT+ RG,b7ž.A#.+:v/h_.<]rJ{{n:9mka}gzf/lDA_9qMۍ.~-Du-~ Lwf(qFSU{x_BDH*Y1CO2$/H&Y*rK9"vbf0TaSg 2a`#Sׯ^!&  rMIn6Jνόu*R`u؇r^o0HnCy!-8ύp!5`J y $pI33A\cn6 [OM)#45isn(9uc(F Bs`.D ȩ7{>K s4NO/;ʝw 7ӧUH}ǂr?~p`f|0^:y'Mʊ](]w`nڜӿ2y vk+Q-I  h7fPb#:﨣Nq{;V-="R!!/\DdJ$x-I}Gv9SDw/ #[E4E?n Bb#:﨣Nxxzwҳ#[EL9K8ERgWH  KF;mW0&PƩҽ!,Ip\Y0?,x;1.!Eג& +A I8Aq9X&QMmp "ij^ BΠ)K g#ګJv;Qɴi\9NB;.SvU`k EF@g\uINFĊB@ -fH&-Z.0 T#>T I!$ʁÕjSadr BD.da5n=%FUY*he~{dI'-unJӖS@4O7w^.ֈЕ6c,Ʋ,bsZ xwܙu]Go?|\ 5oMeS1_O<O꓾AөXVKb`C3wYS19뭖'm7אc=[3`sÁ(fnm pԞV EǥIRwۉ)'˘9?뚹?>ũ_~S}C5uW6#4LT1ϩͩr^mn$K^d(,Hp&p+ks]QP&f-ZI#ޠD{j/ pyE%DaSxpHnPO& =ۆ 8E*b ۙl_ɄgEqgG03/NWW&pԈ=v~ѷNJRnނ:dc|d3d%U?L3p팾V$Z% Zr,}^u7ͬe~+l!t|V"(U[`eT/Z?(byweI|xǤ>h_`wֆ3~!U݂Ւ R>}#UUŢFjud~hjѐ|S!"rM9SV kR-@N9&GRXQ VVcKG,Z:IE5އĵgd <@Ćjf7p^jX ʜ& Oɔ &{, Ҥk;|W*搠լo/j AÌWB6z ȲQ eu_ԁ~5u[\ރ]l8Q**5PF GRU8^`Z p#AWMdZus fa4Ƙf]lI9 ETonU`?>|(_85ϖ>釫'xq{c/毾=Mv1SƙGR^%C *UQiᢕ/Rҏ#E-V*tKD`]J)@7^ 70QŨE lQ=&'Q]Z{eLT^1[0^b 4fLa09V9{e'X9-sϷGN/ѠtKxr{>`m>>эnutcn}ǧ y zLत9kpPF+P3boޕ }eF"c*EyAX|r\k\6/\:V~3\jى6YkS48.1(PsXo5p?8@JS8O:haf6]S[vgRּUE4PMg3BjnSaʀQK̶y%M=8YxPL$IH|ԚK6{c RK\#0P[|YIdp %d&KQ: p l9EF<-%N -3I +^5g:iD̬RpE5Zچ&h1sBZF_J}?/#oco?Tla/<=iG~Fl,1c&.5C0Cp|Н= WK* ڽKm}\/&>;au_0;5a̝.āU3L7̽[N@!0Y_{Rh9὾l`':a.`z4 sw񁛸Qx U\BLԁnѐV${-<{织fyGsdX3\.j<껫}zs7̞sWHiz&slTe2*z vSҶ㡭 މ"y;=pN;/5IgmvrŁC* Um&lʭm՝54;l$/ҵUZ@16NG Va6V+^ ¢_ Ӛz^.Ix VEBx|>>YZs?jofӭts61[`%}̜]"]Ƈ0)~L{u2x&qѲkwofܽQV.dܧ?^ʣ ïg6ҳS,ӊ(g'!|fqZAMLi[.)C'u-[MH\t-}vkb!zRCy᭖K;7c7]Q\ Pn5G˻cD>Ik}v6\ڹ6sG{·JpEQe:mȲY 懯Ii-~Q`(j8FRJQsZGFat 4MA*6uC/}]?)!q{{KMgo3is.۲"6q<$jq9i1snFFc7HfDN*)܌;A$ -NX$DR/$\', 8x$=$- w8y!HJ:d_/4E6=e.1 &f!SPB$IhSUJ7K,w^I6#y[te#M&Q[dCq^hoA8GNq[94?-C^A:rjf %Wg])J1Eyء`TluyD4y=ޏsy] TjkH^gjBC9P_ ȐB\7nm֐f*~yp^vL];I- 'E8KĐ&RwqI9}!cεZ V!onps{61[NG"1bБ )C[-- @ B -i q Ӱ xƘ.^.9.y˜ ?7;& ;E9! 
BmG9cA˚|Ky4%Al([^xIלiVj"fV`b)t}_늢,8(9#K< ʟWø ݚ|\ wFNg(>>/ٷYӪA:d'F} P*)wF0d` ix  j{L r ELNQuGCԕ6N$qb 0sԃ]c `5c!`AF8LqS0aBEVUWNZd%W L`cl N8A) CsA^H5z%gHl_:d6_pCv*(C^Ȃ< |Kj]X@UN-V N9wNjX;\ْXI~ u1Fyy'cx=?ꊎ  ߿%oy~K5xǯ~ZjH`-!,|;fԈ0ppk&o1!zyl̩ Xi KYcM("<>ֈ2P(,k1IY5y ꁣh-}ծk1KRDv{Gz&&#* 8dtf*{5ĩ5S^y$AdnlT‘8 ]Q V[KkP0X1-yl74)+Px ?#ęԒ+ɶ`J$d]uƇJV9cMU3k,̩$05QJF Nk8*BE(DFH91n~nƸ(V ^i5Xe1nTw’e)4Jxӳ(s1H4}nJD@SEˮ}  B+gGs 򣞳S,ӊ(gr W>DTeϚvՄ#j\ RN;h# S;nMnMW>DSG)Vޮlrx;mBc^ !4ۜxq9|HMq$5 1»z0ZAEYBS1IRV?jWĘn53[-.J;@[Ţ@?5TRh{]N>FT)G3V[5$CRa \C\_UNASgudA罦M$TIIbpR|$l/t!L̿N>rݖ$-+jo~agT)IS `pqR2HͅBO[rtoXg_,Ф _ αx)e܁}I܁lN0B6#$[\mK[B) pGlF-(ښwpbCξ-m[5 !2 H*N Cp)d}φ/<&`jEp&V 2k+"̥8G,wY Ra)Xe%2 H Hh hC@>^xZP1O)!UT<*EƸ@{MHl ME* to B= 'OBK^R Jz~xMՈMu蟖GVJU[qrM9k|6S J'"4v?Yj(`~N3.utt]몋4{9ft#p:pFR !-i#\`PAfF2;)]yV}-"/+@B^cI釰zg` ycO"bt 9rFa #txK&0ApMI z W!lKO+KR}a-xs˫˯{V4ϙP.:}0nܔgGKjz)1Vب2T Cb2<݇sKXKr,@S ]H.EF)TX)$ [^mx鮶Ai3 hRƪk1Rj&/4pkBȉ\T{")|oUNF( SMPgOaW2? 1 9DL$+0Π>tmCjÑ|=gYlU5!L s@~)tHS;Dig"[<Twa|_5[]|ĢxZxumŗCOyƻ[7Z[F-Oέ S˨{g/1i>V9Ss׆T˲OdgT(Au~m<: )4綒MCD*7=nA9CXVt ͗^$A?"!.mFj-m/v JF:$Ns`EẹL0 W-ZM-{8}m2;]O7,G#fr2zrP! ,`оt"iT`ܨ?'ӯv׬oқRJѲW1 ֖!.1oM(}(E)  9O{mUl`"- 4ԦD&vu\$8(#A<k}ٷr bX]E|[rx(|O˽δ?Մ(oSܛES-$"I89\gnBD0Ĭ;4^89{/_$RMƈ`0'ֽmF5x[bҷ3w&e{$k-9lE4K*yh7v It2h$JB{nU R[ E4K$D)Y|@햋AR1hya\ $j6$䕋hL?4C>v Etrh#޴[NvkCB^-SQH+E7~ +?! W7ƆV _}MkP咃R(aK{^⿬,&X̳*a2D־4梺z~m_>87z??j)'o&;uI$V.Zm/w~~5 1A{=ӥW@}٢y52fτJɔ+8^Q3Ɠ"Yiogji(oJ9 { -I=:G1JoZxC` V[a{g%ѽf;o ): Wt9] ;DXWd*V# ^ݕ%q ԣDGO, qYTHzr 6VzW5mzf WzuQrӽkdUz G)]:9>ҲeXO*Č 6įG)طnA1B0v P_mm-uԄZ/!ezXgqb PS iw3,+k'4B(s2+k+:71J&՛W &a0m6da7%UViJI VTd^ cgBa+[٬ϡZ(-$iJٳx%L`me%Epsvj F JcAe WWQtzAj&@? 1Q@\2D.xmae KȗqTDDNJ4 ]Bo G%dnr1H%;Fr7=&O+%.wo%iCB^Rj{nݔTB9v EtrhzMVuݲ'ڐW.92Ź4F\ RD'w&^&]k쉦j6$䕋2%RJeeig3ѯ僽\1}+4A#+PoϿ?naE(gZe(f02V;#3eRPReauN#iu)J %fhH:oYQՈrE*\"Uӕ8ҿ =L'QcLU ?l'~#G ݄Q)'(mHE` 14,-e"58xC95B6Kٞ,c1o0|yZZ?a&;|2K.z1;xs~'l^o)ʁYGB۱ζ[Ě%jafeG$Ok5"l ئr5ca_B7(: ߛx཈2,MݘvO7ⱈQ =}36#3gox{z|.#@(LǙhlN1Vs$s +Nl1p;7'DZѢ@`.N%a"b&"&d&*9s|U{= j AydA ȶZk}ʲzuϠ`#<͠sA #@7HVQ r(x8LK9#șF3.^:}V*-Cp#\"oPnÞFY2'ӯϘ汤$G?-,d0dY3)r3 Zlxux쳛[d m` q{.w˻sI !¡&&( ,J\/~aW2`KEjtV2K/<ě1AsqƫWWB/]U$/;!YW-'NJ5XhrI DPv=A(E; 6GkpI߿_ܲ߶DrDrvVZֆ*zT2PH CoҦ↛d ZhNy8Q*EE+zfJh5M[0Iz@g$Ba SJx*pe*UQ3fK+p,!%XD_(,N -f\s-I0.͚ݵMWx2.֝pO^)_{Τ5U3Ʒr0ەRN mNhpu>R2FB?K~@Eر;hB2 ζ5d#N)z2 OIK߄NV=xk3}R+J韴n++K.Q(" [c IY0+CqqB4'A9D¼Aw,m4<)2VjI[f#J4dK(Ct8Q_I-3)s69ꏢ&95`vlB+RG"D 1qa$r >8=HDr.µtu,tj5)#ȩl@5aXavYHN)MҵH, 2P@#pW,. #kͼ^#!rx :@hY؛ȰY` ^kjX*q)̼sxr+YTAP@ia7Y K$N8"4`#0`*,.֩ʂrI[rf\.>;C#J,G# p떗>J s&m/l׫إ'lCcLN@J5ؠƊ#KZ)liPYG1>L8=bBJ_OM8bLed[%)J(0]w>:5zQ:w1iML6 /An2^|)M0D"_xQ=joocTiLy;Năegz8_!'!?=1$p=}HBRv T)ix3(aغ BeA( ,i8 t$4#`'*`duH.EJj+Ƭq,-Ec{n{l.X$L"pĚ^o?6lo6F(BU[Z]d{tv.0)-il'z`U?/_2{?(FO7~QOc޹ͳX~?yF{Gы8^Q"^v`@P.0*6^R02 <,gҟ}ڄyc>Wm]nrEۨrN8-V_rxMOK!"iL-#"ʖ5E5=`W{N ٙcqF.۹0h뛋_8*Y)#JI뢔$JJt.Le' QHWbprd'%eCvrȎˆF.=F S)ӿ #$s`A#/#* a1uQ9+LQkEWz/̤P$\k88Yʑ{ :~3A3}?5ɎTP5]Z :sA˒%T$.PyRf"kŁ@AQ $$.l2ҕ[Fe`gcuaQPf3%8F[jX A 4dFˠ6fǚ %k_Iʎ>ESC^_oIIڐ&pSZpI#0MǝU$ A=_9  Z$ S6y¤Fr<#jT""\Ǒ 2'B!+ 8C8Lu|\?qrs(yv7\-Pv]|E12>0sT´ǿsU )"of5H|wr̷.Ȳg%i1}3l]z{s}n~rjU)m2ի|NqYXQ#nmb:tu2mn{;v,B!?gca 7KbL.;ƒ8n31a\UzlbRsqG3eq->?L_r'Rz#E#!fT%^-Fv Etr8F_ <nMnuHg.Y2UIf[q-$(v&4j\N4Y !fV~lk7M:r1H1nghO+>[~]vCB>s=Y(Dks7vP$kػP`K'Ϯusqk LG}d*Y_统Z#IM^#뫋hڰ8W?ۻэwڙ ,dsR|4jOy_MYul{2\kZԏnͲ|]cMbfEfNq*̩S$οVY<"O'}㳳v6*Z: sQe-KVA9Gf";sK r~/AFgEMKJnU3K)>%bz. e^~齎9 ~{,C|o4-bGgPnjjkRSQFpBȁl?ܕuM#_-'zsԄ1!P zݺMFP R'Y$"Z2_^qHp~=A @3E(+FU oN$H9`q@RJ j}z& Μ!V0(`mq>oK_^.c~cmeoU!Q _\\tI5۴L1? 
F;($./S^N>7FTWV MZOfRH?p3K,"|OwpGO?$stts7¸ŋ-d.nMF]C0|)ykђ1 C77^_GVb7#ݚիfz\X "މ&UR~;c ֞z 9ePx k7(7R,RF q\JyG帔q(%t$AEwmguyv8[V#oiOv6 MBrIHb/V] Ѩ79i??C2Á!=4ǰPF;nYOdÄ#r|7eW y忿x]M-^loCqp"֥~ٷN2P!V!GcLE!,"j='Vr8AZ3Ey Z_9H N*SڑUo<1wш-{vS5#&RH*\+nM3 r%RFoy\͚Y4.u b)^Cgq5|򎰑ekpz@s ԰PŅd0V k%Wxhʫc;4&2iGƿw8j;[)3߉iv\$mx\aLakIe0po1q`!V<-5qH5š$R3^ݕ{wZ?CcXҟ9&Tt M'@HrY4E^Iĕow/(((LV[yya^: 5(O31 qn,񼰁 f{j܁Z~05T'Y| :*%,<Ͱp0xHDjI(҈{}LZ7PP` "+ p 9'U\W6`v@`:&Z]=lA";CARqU,TݖyN~c?$ Y @["=$A*rŐGs-f_ՐҜ{ ~! D3*.a*å=5hÒs%s#\)  -kU v  )Y# $g{9L )xR(qY˼8$/ *wdO7;7Sͧc;6"OՀ{;w^7}?UdzV}F~C$YiQg` P64obYm$j:?]]IcBV]+BZ[2 F桛&XR Xkd(uXڏ՛]s7@#ҧ^ AO۠wn޹ z綮wS\ [J׈A.aմT%G锵~„cQgGON>:P~\mU{.I4cRٽq7Nx+ u|u_/X DgN"8QP}t ^(/% "-7y#M:TJK +%htNQ ӵG3~׻oײ.CvY=To֩V?yRibVY}ҊCo"|f.썠 7*~p3n'"$"oӺ]-d Y/Rx /ݬf,!SqO7 o!?;EUnWrب]a!='k\$!'.{˔% >bl/V1cԎLjƚky F',^FY˨cMdlQE7Z}u48hDgt yđGI8plۅ* A>O qtWVQ16=C}򼿾jWq~*q]-HxyEi)r617Ijl[')pK)eHZ DcEMHf mm": n:϶P;j/ȕ rP:̖1LB &7υ/i-f] َ9^mZbx2-zoE*؋gyG1p`H8trf9qqq 2 ͤ!C<889u.BϻACF YK8+X!8=l@ 8#U)$HR40[$1XNKk~ƙJ @0<2C2AɴȠrꅴَ9^WvX`f#l&ay#6Oش1?8AAAc7k5q)R >kIjswsTq򞑏B(@A g Ԑ @S?iՇ AZ,zr3GEOy }}={+dﱦPwoN eՇ\)%D:؟y2_Tt|aY=f"Rpz)8&8hc81y{oNiߨ[ Rvm&Q֚SJEoCVc@1Ǡ!]=#5D!"tw cW¸6C]6웹74;9\w!µq - 躄,IJTsqβTsûJ5M^y7 ݧf3yو|E;B-p)HW=3OU.Tp,OԌ2u`_ҼP)9^`,:;/+&/}"J }dl_$ QAnD%j93pR'ER"/%P#˙%L BINu.gĜ~ s'ǧM 2V"G}-k9VA0em4sB"y6~j蒘VXmm#@?Eb&% ]%`y*. \pkZ,RPkDBbnCv;zazi,z_ZWz~x^l{T~x]RZJD||TB.Y~{Eg/w2Bz/^Lg%C~|L}#Q٥&Sp1eywJC /&Hq O~^!_R[C) AnT H׷i=$|0fAXȟ'qtWDo(iIdwW_j2ίS_=2Tr_)=ܫ"JZXMr)jk~\‚&ʫw-E#$g^7kHeB W:1Gd4^Q``47@ UZTVl#җL-Z(aafpTSK B59RuQENHulh%!LjWu7|xjO=pZT<M'dIଥ:qO;?=m!L4!R"a|igĻ]$^8n˺ɺ$ds%Pw%هAq&[l$ߜ ʬdްm%bV»c$\َ9^Ξޅv3ègbxޡ1sHfDe~y) 9&rly;V(%жm:"$셊{N(F+bUtOg _46Azҧe=5 &MY @('8<5ѭZnnrIr:kOVPQ'4;3U615 %t?ͩ:?{WȍP/ c&y~rzam_f!0dfZIm{0}RYTSZmR"l~X.^UHuQV3:* )/TJT.XCКLV"pɷlx%( '@(c >q;YEKJX;e_U/q bfui_q35Kߜ7iC @er2{I\me]yjݧ<&}؉"5mHp =Q׋j?_uZ !Tp^>^BZU A&9q NrFz˯i;r^ώ^,3j)Q׬e bo /?/.LR*k8bao5f:eCf-]XCܒcy̌b/썹}itGfͿZ4X󚮯ŝ`_J.9w.T-}IFx آ1Uk`NyqQ6.gnYGt5.$jk$\j)0@R`-9݆vkɶ(:r-joR% l =Rwm]ɼ* /:JLfYyQsȣ.y <ko& 0on;Qu(F8ss VP3;rІZ3GǮq^& /.4 zE.[X_|yV-#K/ApoUU -RüAWw?c{k|ș]1mJ-?.3;3:` {ݠ˪fNO_ql2|*B} T1!>(w=g_` 6ji}W푾W CJ<H[|o:QB\:]S\|cc#~(4zLj4:,y`D vxw!`g ]|6-~GS G>hci[2jr 0\A_ o:.+>3Zk;ڂPPEm$O͢#}v>Oar~E"Z>'߿?y{ e+p% ='qkhmL?.#1}rbQ AΕӥJ;Q (9 O-_.[)Ik\FNF]'z;`H15 jbSMe24T'Xv 1hP7Ԥ1OIoe)BPOK?} >,ȔZ0-P ~w"0C5 __㧛s}wFΤ=xN؟"ԇѹ6>=;- ͓׫e'[Lҋ5ƹJ'!?B\}Ucw6 JBOVl(mB#Ey]qj2NM^Ʃ˸䥮&fchJo2 @ӕUART$x\2~TfɮuR]'Xͩ|) DuR|.gSqj3˩K1I=ڠ|&9c+Wب+)+4U +$o+]'$]'X|XH HrN iq:-8NӂueNVSe5=s$b;bdQCeKn׉:In7\cۻ2ZF>EXPT 5NPT 5WBE׊7*%H9ȅ "7%A\K4Xmtvv|s%Jö|֟螣;NW7֟*QL:Q?x߶uo o*`"R=5}=uW )F;le1 0h--S2 ঄B9 'dUĔ Hu_lBھM4/52}w-g0],Vu1 n?\YYWǫ['|=\1feni?lVMvO5U[)i{ \_Yrױpn[_l.T\=rv߿uarF0Hs(mBVad1c N.RXJ9WUxǺ@ ObO.<1$co׿x^d !QBˑ3&(NϖڐL0(';IͥuLv '"kixSZu$2RHKg]#}IdN-P/aTt}]d";2%HfI2OK 3UC%$TVŽAڨ<""#|.o҄ ^t#ѫ +U˒qrp)ǘʳ2ҏRL. 
l?xF$K4l7  {*$>=)IO ZKU(5H2sia~˲r"J(Qrj/ +mbf$;֢scàFBWeGfq!ChhԤ>}7sBp|L/@a*ߓW-0i&W̤/rRfB,zc hyִ;?ekdd ʈk{,|>V1d[JB:Kڌ~;l?~bâ~g0WE 9Ms^.a{Rl@@^(~x .3OB"rD{WZCߌ:@k3E8J }`^3>Cέ rOz,o{'3Fظ2%P=%/%R=BǛy.x=L5oNG_^]tr5o|r& =م[VԎĭ>4Z5?u.6GQ镣iwoE}s[*_ -_'~k<9rtIF~deއ a8GG^ev la khm (ބYVFӎƠ +oxޱ@/)8}tCJs}&.'jNyԇIuQN+eUqCJ4@ڴ%DC&@3+SEV+K-NRj$OI"X(+cD(uT#A$DM.;hqtvt e_w9\\}TDe fU#CRY%Ҷ:H\(ME(ՕP=qIG2$ׁE^A*S%UJ)\UVǒU F=qJFF E'=]%<l(&i rE R Μ6̱ %-jiAr9gL+4mTŠ'GC%@F4oÂbQ^7ÛX 4md--B*qP kFFTpްM!$jm;'f_̏xsI Gdr5]Djr.X ZULڬba6b-V1j4,M+[/;xBh)Ye ~/Q+[pK2#C ۖ$'ά\v =L)WLZť=4'c_d`y4hȵV!Ze4̎8'=N`-d' e%kJ܏v~u& ٳ~> bB|58wpF=I)Z <a+3ֆGoULskiJc,ʹrqòG20al"mM]]4CމdPɉ (ͭmDpMN6uX$J˟Zv6 )ussΒՊE$aqCbI9Ë``rҪ1"ݦD!,k#bbq$ Y`\"vo}Rm3E{%U?^be)N&Ywon C(yۨ7b5{MuF6HBg,|i=&2@{V<ݖPP}A_ś~8o{zXۏ5tr˭{)kO'ݷ!Jv3-$ :C݉j.J.{/qmV=KwhޔZg7nIʰ޼Z,~S ȐRZ'fR1YV=T%q~t=?ymN4QLϽ:D"\,7O_?2{bqzwGکXgER'lƙݼIj7xRU^'Ԍx%sU~O5ZoNl5ڢ2 lAu@.+h"jG+R<0ٻ6$WLa@ {1δjS7<$P"l`fVQbbY æ:"(=%Fe @ӨhﰗB'gFի@fO!t_~֑MF<85ggs:olݫq6ЍHtAsjRbqաog$~ q.b9/ټ: 1 Vynlg3:|̂Oa&k,ft?Yѿ5og6aC9j+~%a@O+/H6&!&H^9 ZirӿaqW',K)E)hkT֬,K4$!o\Dɔpn~vkA4}G#O_H$;k< ڐ7.d*ώƪB:Z[ οrZNkB{3P|nGq٭Dl6[~Xx"r4GLi tIXK) Atz%afl#؛75JMBGy쨟ā;}g6p"o΅]}1sy|6YQcG(U(o-YSEžu 2G|Z=BvR |H#ý++#ϯti7DRLpˈ#nUܓZ5qv>s,1j]*桘 re "3l2 x)rRh8fBKB ]( ͅɁ0HZ!'c"BR ; \?jI0VR'U42&kȁˬYg_^c]:O 93#dn( Tڛ# i:+0m9(kC7Ҕ2QJ܈Њ"T,d;ej _`UOg`dJ34qCPf9sAsɽirH,3,g@IF~Fjjs!``LTZ)\ZL!. {r=RXcbbHBL~(wUW.Hw,8]7gkMs?;<(=RJT~/D73(/)(e ԧzuQ{( ; umWWe`NM/Rl})Oi] n1 ܔx-#!L$;׋tv2nȃBuVyQ}|AŠ ʹw4KIpbVW!VLJ!KMgKR/Μ)V `d)BKuۏZ[ŷ!zyvz"ɇ 0*r]F֙Pf %MN8ZŦȌ oΗEYc9gr1]_]xn黋 Bspx'8'.ׅ2cz?\Əzۢ ƍ5Z`2dC?֒QIV$6# L:)3GP䞧Ftc$`NHn(SF,P1MÎxm p ı}%M2}Ao3+nȑٮ<(vBbxāndy=18% g40R99ͥ %WH @!ֶsi`FPl5Nm۠%R <8Wj@H) k.̃(j%43= xp)g6øu 5R^Ȕ@EJ3rH6HJZ!E&Gkc/9B24P;K"^y IsmXԔ) xeѫAT6E͉YɳL6N/ ]گ1'ϱDZV{ vUjj4 m!==1rWNe|r%ӣI WcCNS u bgD ivh/1fUS^9VScJ2JPPg%e/~I@!q(/zwN/[1"ŭ{AC,dU0wǔY !='L^@K֮NQkbB1!û-פIKx&Mjqlk$!3 Thu*ΠA%b&D&U[vLCWcӨCeOn`fpJ @OH#ބxIŗx 9ӌ GQJq8R N tXaa, `Xб[H9)J?rbPǁ_ӂ8!ӆrΞ2|nqE0g3C & ?{yL M 4e:# f;ŐЙ_?S9B`l1bsA] S~ˊ0fIG+=YZ^J[(c`G!IuRvw3Sc>/O ַܸO[8cRP"+"t]Dz 2$@}3QQ{0(soK"7Pߙ ]_i 0q!";te1Oy \(BvJreuWVwUVw!jjK!77]ս3p;%jsc#Kf`X,'Zv8Dn!q\s9GثJ%Z@8JᬣJq`B,Sb4g9"gEn<&!)hOa`TڡGAFj`7-,ZZ`KZ-آ' 9[G?eތ <؏- fhfWD6TfVUCL7MT j?| <ٷ,53D(MHb?R7a)i X{C;N c z@DіAC;^,~E_c'UA,/J)#7܍VewF'^7lÇ,E]>YTdm;g=YMq֘c梊?PEwPޅ_oI`$;ŦV:ffY,Z0) }_pK)E)H7O֬, $!o\DtڍHP5 Etv;KD5ʺnSd[EL#41`:L2X80'PF^ӚpZZ ¸y[#!cm~WE~EiQN򐊓g~xRb1C6nvv s$*r >}C2v%ݺoA&c2gN1ӻ$/Ƶ eįe:Ͱ˕ x c)O/w )ǔǎZs&%}&AH)u3)e}K՚fyI˳_ֽ["k QĮzPl+~?AƜtR >!#O3q8tv\8Ͱ3*# L2(:rqx3v#+-sĠ]9I^ v!5enD;W}#Ȋ3WviGP`;TuԜɖ>vǴu> pKi*AdO.Ӟs'v5`6r!6ohk{e.-゙cބBܱUsZY_!Fuv{Z BY"e@B4Lt[Γ,lzS|.FM5+aLA.ƷaIu/V>b[-̖WF#u .!V rM8 Z(A(g0NX zQ805H ?2;neMX8$a HR Ns+c*G\ taZXqcqgN[₰ $Z+Ɣsk :!n7J3܊V?guOy4leܯǦ "DV#rc#MYF$#yQ#r رƠBVk`&h'H_%w%JR*- 1X{PngV [k9ϋG"N9+{bu,m@f@\Ϗ^"$O~YtAOկްXk ,ѧ_ߟlXr<<8]7g拟VRPeynȒx]R ~7?>t㖇/b{g^=R^/ / mELj[J)/"Q^G `B⯴6q`ޢ!1k]$r#c\̫w鬝Ic(ϝLI/ Lۼilerf7b0qYEO\I.k#ŞjM5yj:W@זنiۯvϘ_Piט1ҵF;XwUOa˗nY[1=,mlj{qt!C*xt ΦĴQh}C"(D%Q=ry2`)^ kv`hΜq-䧧7șZVI7*dofFpXF<ᎆM ya A #;Jn|Aom:Z R֥VU#m4 cug!8%*5wM۰߹nS]S(BiJM$鍅2D6$`ZeKWORe d-x MR{Sp2qQO-Metڷbj漨/Vjʀ4dӋ֕~ckaP[w,7Q=-k{i18V RBT*K.߬,d .mNo$vJ%ZBa`+G9@QW?틧OI,hF`z*p8пPП"*pB)TxPEQJk(][\C'^X߂O 2c}1MH!uyPV樔p70T.$AĬOyt~szg [ bнˮWCCSz1]k"8o.Yuuՠ6Ht}s[ժcB0v"(@<[Y:#%F飶'K)NG6!ӡu~^8rP[-HJ]^Gorf- ZKZ>[9!bt .sZ)vvFhu1JjeIS_Ӈ+2C4!P-5;OC/$5A{轝DKE`NDXȍ)D|JkkC{Ĩ#^jἠR6wDQUnq0"8yQ&פ[hMXs=@=f=%T|NKt7LVL-.pvSJ6ç9FTDWjm7( Ff<~EGȶ$$e-sn-<݈<T[b_6&gAϣU ,X\Au_pWmQ6/0+X=wV=|q?0_tFـ\*y9%E9<p'EqܨkRVG? 
|&4V_jL(]DA\_5[U#2*,a}u0΍k`x|P -HrOz۞n ]k\}jĖ.>^|(ܕuqɔZtaCH],$ۏRD;0{go~yϱ ~7pBG+Srf<Ð?4^_|7n/uGm+o'Ue7;W/V'|W#KgHg-7%9[e{S|5݈ڏ"E8&큺GȻu>X}Ku U%o[CM[Uӝ+p/f5|.7|sEnA%P W/pP  oCRd`Y%˔RLf cjiN3,!_;-z,h!kG/Y'yvy S@&d:TsEZ.S"pcTȳ!A+1xu-8wLC@ip<77?gqn#lOyar=~ 8m_MDrSWkCPk5Qɹ RdiΨ"3@` ,M,h`NRZb*ƵUN38l$ Rz\^q<(f01&qRP':|EV<_|R7=,jH-=: T Bk5b I[O(ojcnAO/M?h@{0<ѽ2<2cuy~P?fSaZ~z/\~폟/?×/ =2woĀ\I4 n/ޣ@G?_~~3ĕt _^N?mNgđ_$83I%ξ]޷_iWԠDY5^*>צŁHz=2\j^EtQ|` \Y Eq";\|f&0 ^2Xi߭JMxrė7qhLs,1A]$T1.wncV6.s36.Qqr:_8," &T|vybXH<^1z\Lm0WJ0?.F7ͣzcM'?+rLېreSܶT}w{@V2SU!Y Tm{Zֻ5a!D)F٭ҎVϭRPH2 t-λbGW|_jZLXM\VGֻimʊQ艬Ftk6PsŔT@LClHmIH)IYBSWLM!OVLl8|wRXŝsFb{gb1r3۶dz*UۣLcJx1 ?f>V"+X&+Sx寞F>x̷ "IH!! eQF-(ע.t&3.(r0uB*;Ie prIM PxxHU  ,x渿QF'Q;_* u_o|)sލȈkGF+Y fzih=V8K\%BPt]ep^G.]޻ ;8tӃ^{Y3D.7o;ut2oc1Mi%[*fVt1H_s.8I1Ρg#rxl&[׀R/1L T!TN45"|R ur yu\0 k%q?5LJ&tP)B9aM@Եl3f>#ljW0I3*&I.7^v0mޤRS۔<.fx'iW_'z%I=N%LSZq:{qAS1oSt.7oSaGSw?ZČ6^/H(>>dgbeoZx} f);Wwo[6ȶK%5=/R Dd4r$@hPҵO~x47CfQP|f𒉳77F J 'ׅG?9 p6dE D`/CZZ %lUF~4ˌ_]eZwt7ϙ:$9$GÕuJz0.$SKH )QIщῈ4w_z75՜o(w G_guAw.RT/3` Y%ޏ_禕f3e_d&޻FR7pYfq "cQZYѴP(U xY: u+VtkI1 $lyU*rbō&}+sс W/O Pt2f% X wPB~*A 1& KSO aݘR5{j3 ɎX E3`w9[<䔀pk@Bm 8~7(ˈX /O=Y%hτ˓O3cDZ5͋ 8Y$g/"s4(8lו $F5i9_c_CPTH9t .D6~ח-<͵!A-Hgls>L\E\~Ƞcs϶i{`;kFҏ8H z}%)c%N2we-ͥ?MgM?+{d w[YhK޼3YS  3@JB8#2o4ӱ )w[N0c\Ln\[kp{EJ"{k3[É^I! 6%|>~8Tc;cG-% ,ؠdhayC{N) ҉lrPӌwW@V [|\`=sk̭iX[@HȖHVNK ANUq\PΕ֨74[K-|2 !rZz5KB}jĝ)?Twx*j*⮊e +iQ̉_>ek`=tl^ mq7;7EQO!qnvϾy }5P2v]A`=uUPGKS]NYntr| DMdԴV81âAH=A۳hA&toG]P>*0|Ȅ*=I65)/Q1(/*XFxBЂ Z痜v4JåSˉ+_B'(sYi'AO1UOnᵸ1lXnJB- HwVZ1R8(IÌ!l pH솥t=FfPrȼH|9lȐ:ϫ{3ނ{^KǝwP| tX+A׫.Qw ]P)ԆbiqD1>&) YK߽KKaįm|6y%hN"2J+"3"O<}}/&~0m%/P.:9ȗr9ӗ G[ɸhH ՁNLÓ[iR>6ZJK+V8u3z]ʯ%ZM BEivmq[Jϓ4QGWyQ :9]Y7F TZjI&aHX]u1^<-S)tR.f4ɨE niʹ1:g= )+ =N=c@-7EP LS띦,FND. NR!aы6)"O3tA$@)؀&ңEs2 2$xZ!DžE./Dc%ȴC;dB@Pxi0DI`JvI Jo$i$/8%y%N+3Uw-31J&#وQZiLI,'ߖlY:͵$=DRx"pLbb~`iq]G@gR!'ik jh@GLԮ^c[(0ь p\P:JS[!\"(}S+!H# ճj*Rhl?/0{hX"lq(U(9͚7ɵ;G%7,+k!sb}'ԚsI@S4]򠌤2hbcQR[˫6Mi gdZO㖀iuN|"ր7H¸qa:Uϣp\i=J3<@ Q̍x֦3 RH,DRw}9|-2u00-w#5cjD bCŭ"VYw)MVzⓙ\\_ًW_dtQ򄋧wi1xFe׃7[BTX]qm\eJ0G _Պw]aȮs\,k~92L m {`,GP|Uz߿G ۻϷϭ/i_}ps5 9enoEX#x㇫o͟G\?Oœ+ p8Ӄ+s4JՒ켿WX0/i:ݦ%7 ¿ \.Gj IkNr& da%gϙ݁ۻ<"]_×ai 8x e1WAeoZQ>7d-݆wiy~|IHGڝ9ϨJӜgR]7@ j%N&W{X!n" 1w" U_ߟ!d? )I  fwf#6#h@2RtCEOc;(IjW6; vs&ŪCC,HQ:`sEBɋm4oo11Im$B"xsI#f"qK-~[4 $5I3S!PH0AIG2mEjuVc s+ahIRƂOL_"SB=')@2$9V".D*DJ' YlfqѰZ SRP)GV>c^j0tQhX-j54ˡ9<h*h$rmmJAMM"B =jIʠ'YY$I2YkxJZXSF #j.I1hw02FAf} xbޣrxZQk9%JxA(ә ZL=h)4B᠐{f!qe5:C:W4:ƨR 9)H5?Ch# }zXt^c8yb3:qh[%!!+*t- xuINތF'I" rX"K |1h5uw{\ss$40?O(gSAn=SZ|vs?ᡁ3~+ CEg=|~BDl"Mx_.p۫[\Z&+B نLĵL,ۊo~ SpufҎ͖CgGj^R{ OLi'ﮯܱU!O4҈dCq3tW3'K*"q"s˛OH=ZI d6l.FɞWyK.7TN3] PO܎`eF@Q+oVGB|ǪDѺAJ@򮟢B?]hԪOZ2tc8}uX%?R>_UyQfsUBh?G8ߋ=f ٛt?X!T {v-!oܹr.6cW߀)7U`EvN@b)mDU.\IƆ L9xǘ~x{muVUN4Dwm 1:(]5Rw?従2DTVx[xt"sje3"sf]/ŎF574߆٨(hZlČ6iow*Q_}"[)A^6T5_i)9 DιꓰSןi]6 ˱g'9KnWO.1&=^UB{ZK2)B*U]lr&8dn|XWuI&zÒ(oF*1J<T(E.IMu7C$NR振[}t{6A6aӝO#xeW2Hʞ+v+GU~:-+EX܆pU7 zk"ޝLc˯)Jk=6 do˅fܟ9]@'fZ^ Dtq;f#>^lp 9ȶ-q9/9 ( @v^X )a:e/˻XȬ$fd.*BTܛ`} Ɂ":h0N@(eN;QBfu`*>edH,jHIjRBDsϙ8MN"96HH²Ya>Boey6|g}>oʦKT$%P$_Gf5u]N&lB-/Eʦ7|ֺL\5¯Uj`sa݆. 
@f;5Z5D4R+dov#9bo6婷)Jd*ÀVq%9p_J]h"V_@oI}{ف\DI% EP;kRq:{˨|nхS\j"p)و(ál%SB T mk1T:T;^ p]JmGCtkmٷcU;.U[K\͘nC ۃ+}_B]ے̒3i})_0=o/m>E3U?`#,.ktb> 4Tsɵ7SD ezA4d~&G 7H +g-p1BX=kvaHA]_Q̵Jhm/Y<^5x tiwr2HWq|Fj8C<کRjq IEqyeNȉ!}|ؾ:Wy\ބxF@(\pλyjdR̬"G,:N|(S"|X+deG%F !d#lNkjC<"E_j6˿Y =NÛ~ ۇ.Ju3m4YIԹV7ɳބ{?ItG .кEFAG%@;HɨB(Mzj4ɮF5hۣb-_szkź?H;Z @^fyDŽvlrSeBJ=)(]VP~jim֞*#l `iaϽl҄{/yzmEGw8<gSLto~z˻O߮_"^ᓍx⼻A%qhpC6}TGxEs }6gưg632ȃQ{L_ᙝppf6 4V!NOv4\od'q-һxO_ Xx ^v@(ҖFtڌ5^f+ݨ%F˶M팲$ўtocOj1^bVH!v6ܭf_pnCOFɽ r{ Ŝe\c `cOy!gMVYNʜ- "i11z35il9vIL@C6!фL TS.: Ɖ 5Kd+C$IviH(;C,j4􂆨xN{t . I@iўz.jRvhc4f 1|6Egh'OŨ4T&8XI),UQF0g,XK|PF4yX>,GeO#ZKߍhd5]^B$` *ߟ &?Nr22OH ZSp(?oڲz ?:N ͗-z>]Ǎ&'2Z w߳fD_Sɛ{;|sy^&whf;OaԴveL/ڰݮzhu[50 8Z$TRE#`9T)8\З2!?!h+Z-6')j#. )P;' @@<:+g Naƈs6@YC/7-zΌes]ݞk|<92o(gAlR|j2~W`"1R?~Ioсʟr^⒜0YZO?_y{b8Owן'& d+7}Ҏ xw}ge{Ÿۃ$ST FANu>7:CqUci) UN;w[+(dK[82]Wꧻ4$ts>⻱t|w x=_|jv_(^w"4j0kq#E n'vc3FUnu$=F}ɒU6K{d;9$υh=ex0SӶSl3 uUsՍY=9h*&Q)Jӣj:35C%j!4=S*4Tө] gz(xTA=`m.yt)}3<(}W %q*[m(2 , r~~eNf*xoEO\QȵyS%| 4UJּOlzPJޢ-7Sfg_MOQ= %}DYC^YS-}I. &GϨ#:8.P`5}y1rh}"? g̻rޘ1ŻNG3[XƧ(g [n\_1e_ެpOP Wff叼KX>NUDKxQLo=wP5$r{L%gJ*} K4䙫hN![MT{\R1QoYfQDRʟ2Nb*'3]mx&̈́c( \ VB2#ϼ8<ӹəD\\ ;WjWh1#N)L}ˬQy. j%h:WP)6W+qD׺}(u#>$( wbRDLi訂bu4sugzXכX0H}SQҶꜵ7[9Z%`ve]A7c ηRKNnlJ$V*UcARg2)P암C8@I%/q8&JI@У.5;ojG xJu(I>xH߂4Lc!RJdZq3f~_0V'c?Crb~ NbCΊܳ*\l(;`V@―K=INT+n$foHH*0ēGY狮Ӷ{XWm֜sm퉨-{J%G 釁 /|9yS{b Ūu]XVBTx9Q( /]|G/0y_٬.gy(7ވ{< EZuPX6.6F 35, (ՏR_<w;os*d%s{*#̌7cfr=VV23Gzɮ}OW L'D5voIСȗM}|hUX_d|\; )H7‚f7i:TxA=Q;;ksyxO^GMSr萟O񉲲m\ټ'p%YMΒ^ \_ˈp=G#M_ERvu>eJ< >ok"VޛZT0_^"x|wѝax^gU|xKLϊmJ/PG@J>4L<Kikk&EG*23]r{*΋Oz ȵje$3Ҟ޿xAu㠿:5I@IB<30/Nʆa[">{ŵ rg]`hS\T^RV T/(eEk))e"ń hP IRRʈh:jvY{;8, 3֙K Vts (RX%|:2#ڙGҕjT%krrc/tx:4PPN3,@MPOKiURm 1uSH/=$V;s7ɢ0F λ$~ Ĝj#\ zLWv6~/oHeN6tQphtL E !P,+PdQ`23JMaT DX]& sRDNzELO+;Pյw¢|S|UJR Gnũ9%7.eo<(_ܖ^߾sX 럈g?\tXxwW6{}߁!UsX;5V=~]'H%}͖ŧb_KCoFJ]%(2!/#;ӏVc"I1ft3 ]%9voWNC4Њ ;=N.5N go. /{KF1(B\ |ĉ(H*pB{_r}Vh{>Rdb@{OaP~tZSdud@Q Ǥ?9l|@L+j(QѤ"Jy` $/ D?ARB*&ѱ0Lhr4.GYPÙ"`xUx! +tY @#9Qïov>V׿ud m;5[2'Q`ok9DՐTIO֐ViZmHS%erT;" $|̊XqUzGS1pUlm̫Of&_`˰~'}3_F\P]. b"x|76|>j|C G~VCiB)Gth*dA}: 5L^O(R(@o<䌐%]035I@Ex3mH$hD{j4lkOr-f+^PUmiDMӎNҕk& ?G?f~o_طY(·oÿȨ4߅'>q:w-}FC0rP5r7xOe6@) LJjrUhN qJ)*'Ќtffsbͥ#}+FoY9sce.~2[-F=;:-m7GOѯ$lՀ{l&HY\MA-9Q eȔEo&X9 )ȳ, s_#8-F__mO)yy}R(^a{k&:ƭ XL#)D3Z[tr*[JP`٢Ol' * 2eEUS;3G)vv*z ?"{,p}F+\p 5Py{+ҢrwW5~J]v{y"ޥۂrof_o6⧻>+Z*ϛ urќDnlaNapI[,rtt*o6'bBMSjU h LSI2c) 9yaPNB&l4P.JU_IW* o52U"O/le.xj-Jŕ-i%_fX{rڸ~3nV(UVܫ_0}m:x{{2\?F+Oj:y33{>-&OT<@Mp' M*.c.owYZx)IG>VӋUq$[?Sr6 X*("caxPEBpa\L?MoOG^)ܝ#'9kx E2jϭC-,Ωbf.fTOr j/d?BB2kjv 'p_ 0efUkPu_ (w?쵛v^xW sխ jGPj++-x?}<`ܲzoSrATů~6}XPayuwV;^4%> 艟?n.ɣGiyt5+hZ;B a8&}w>/08N!94p!Ք1A'|#[U yIk'v"]^=K֮m)]ҦNm4BoeeR\Sc@C/ LD(QHs6u `bSA5{l^yRWwee;ATѵ n)n {3aylt*gͬbFG˗gOɗ7H@U(!-v^‡hojlnϫtSZ{}7]2Eyg*N>_2L5|Ӵ?\,8|B,Ф;Fy q1Oe༴:h J"ӑ 2DJc B3MXL`ʅ}.HKn``H^EσVxXQYQK QG0c ~*zZ2Ja&xk0#b(D^ vskԞKt_m:xh%^V* *:C]2}n/@7X~)PF:Z㼡÷5·Z"- hY``eJw6q ))QBq1ґkʙԺ "'-¤gF[_%@Avyd616 +_!Wfc͒91iD_Ε|ShO n9էem'R!8Ad+"(?'-=s=1snHv7R,Rk|*^%aI'e$ƔZE(M`)dT&X\M]_ޮ/Y_R;Bm['B7_*Rbaa&O)"2_]?N/|\~;\ GEs0"*#q`k6<͜4Ȍz luh;a3r BjKuU). P8֮4ABV]qI=թ\-v2wtWOr~.iyHxOg}B9c€̘ >^KsW4{%}?2,F='p FƭdK2p4v+ʡý+P֧x0Ş٣ڗ*8t0Rm\w/h` d%#o齭j~zL9S5dstUp8#|LvkM rz{\GB%>db3лǗ[Z*nIMt=O旾ڪ.÷Y>/uz톋]~ (&(-;pBA1t3ҿhsPphw2AApJa11AHqЩ 穷*'tyl[75sxѪlP`\[;ܼ8X Zi|v%LfᜑmѤ7G B)5/}PVNjYy28Cs6jF)u .VOE8/ AS<)+!W_) E8{8tgMh~vږaBxîe|szYsXM-'/EF;Kn(+V+;3iq'qNE @q0Ek$׊8N.1W*BI;Y1%aX OmD>o|L$TIyQ Q %gL:-М 6"i("ΓɹeO'3S0LL)c/~Ow9&0^b". 
.:E,`>{{xnuX @S| TK"]:8>;:!5rNt̖l|~oD~ RU;]x<,mxNdeƆ]G{wسp}bZw!xs3<0o2C;1ue>o`F5#mubE6E;Qkz`̈ZCDWM}yG\@/YQw 1Hg++q*.ADDxss9ޤLsxS% T7&Lp0f#L  ҍJ&')$yxxSq)1m 5h/d{x7SBI0 p"an+($ԕxeq ɻTfq\%Y\PPL7}䞜+y.dvu-BD H wv&C-fC50[ n,ܶH&ӶaZDH:XAx0ɼᆴ1 Kъ7YDm*5Nyqf:(FlNu##!_#dl?#sX BZhMݘ6?ۖo/@oI1o&^xf{}H3"nlV ! XB+Cһm)bA1riIN|Qf(mel[.Pᘞ]:jc9;7VCCp\rb*(ЦlB̆ch ]?+f龍1ڞZڑ=`VN;%\2؋ x#^Eh*[{9ok7NJԁh|̨WU`υ=(E^5YB-k^# \\u)gc/ ߒG1F9>t\_7ӓl䋫h|F㻋ٸ<<8°m};;xrюϮ.gYfz=}a^ɗ]77t|nW@t5*X2BiÕ .q1dIOd &ŋ&36|]NT^`iQqϣ# yNxTA[f ' &#mޡeZEʎiyu:m+ ` [g'ٿOjʏOޟ\D{5T^յ FlEvLL ߚ^^H/ϙJfhz_4ښ^+e|Fc~leC}G $9ڈg7 :ԓ;lRiZ-=9ϵ܎ߍqk03zQIߏ~zw?}u>lY;;L賹t{527OD*1Op3mm ȴviyCfKA&y@]A5q^ ^*ã\.SƿhbԠ9s t#|zčpkm?\76Ͳ8,>m2C Phf=̦:*#p9*ׁ =3hw :Z.H'Hc?{'rB|-1tyŬ 疽 a~iƮ杅c$dk^H DmF&][rjlBSoǿ#$/  a)1Uԛٵ3;f8=ڻc820o`"7b@-b Yj#c$V.Ia\>hK DS9YT jo?U{%Oo3tDȻ4>R8#`B2E'8%*iGSK5Jވ[ ΅hyU6 Q&+%*yz#=Z&F#J3/_[)^Vx…;J{$m] "|R^hLrN '*[.́Үe=*؁Eynm!R;!J@5ҔpqD (<f1{b9b\hiɀ1D́Ud1tŎkSFi"9 vrn8ϗ7gu嗴Of˛1~oW):YAPBwZUԑ8E>!`鶈:z(7R-IP}Lgi6?f{2>HTGd!2[ꯖfuw[pʤ)ˋ&O8*RL=6 +2(t\|me~ ]H:b7*jVT4/z1Q6"LJ5x<Ѐ#Y%_@/n/."6-rm~uN!.`@]g􌣴,GNR%$>Pnt(M.9vO%f8y}4կSa"l޾HK&-{юwwD|w93[ xfs4t77>z:˛VL ?ګ{| {#ocs !3tC ka_Ə>~gL=V> JQm~zL:a~B_aƙTJ^60;Npa1C dnՍ\fooR Tp?` ċ XBZ>*Bl>3- YކH|mE=?5-9܄3'dQݖ8uxK!{h,D |4eKZK+oHTīf +;9e !UI1<Zf5*"1.}@(@kPQL*rLȁOD @F +9(9)D#CJRf J$B.&PK+@ .+"!'X x IRHު\p*a6e+ljPfhܠ@y26(< RjÄZ%)PA3[5({`dž2V 1I!da1s :j&&Q hEI^M??b]uQv&7Nt*`;s,>3sqϏ?5Z8gOΞr .$CF76wIf-TTaUVTI%!갉ƟVg8 S_DHC vLru peLǷ=#|29yYr<,qN؎-'aąֲ6EZvɢrcQ-uXZ;ߺ '/guXTikEnmֻ=р&[q(pcnN B-0D/j,9_(!"imMA7P6YHuD]ŭrŪїo on^gB6[NJ3E+ O cĹ?#_X`Q="j'/Glբ[I^Trp1b驻ߡSRĻB jN~j'yX9/=XJR@ %˛cs.IJ D1CI Lu)VPScW Bfutǘ՝A*\ T5v9aFTs&T ߐ \Os ܣ ^aUl$[J^:HC;b f)TdW5drukhe PG{$/("н_-xՒKA֔^Q&}-IcxCpW2{592HA;QjPb˫eJZފܐ)^ޡZĕ%zy&s6ų4cZ9-C=|Ӷxs&2}DA#evvw٥R|S%[|9CV ElpxH Z*Ⱦ6{z~-p/e={mFF'qTSۿnAGw4x/ 9L7&|DJk-;G06ώPo7]/Dwi:Fj)-Y$ZP.5ҐXEsliJj̮'5vs˚w`/'s|qy GojWs8/.mBK8H"e4{d B͡^7!Q`>{ >烐*ifDC-yn5JuhnL#'Vkx'7J2UG6/l*9he+M-G8u/RK*rsis ֝O#B2Ar@SaP !2{ 'IX)! YBi pA8je$(hju%6o%I@'E$J[e>PŬVPiIڨ@[}U9w1\ÿ-rqf@-ِ 4lkNPLtd*o0aE&5_]68Րw2ChRPW0$lQumyw.w YP SЄ=hVo2ukBdCjxt"N4`Z3mn[3͞9/WUk|Bl斶Ikq/;2Z~}s^ݽx|ev,?!j >+[wzUGNCLpt"j\vSuߋCIN꠴RYKA:)syS*pVҁRpZv6ԗSO*9IgҩKڹg~M> onvszJ8!Npr$IV rr\ObLc1WdV#xr*7⳽208-Y;iDIu!)0<"D:e?5<4yi )@$ߤzCy.g&I3 J*[AY,Ͷ[씫6 H噠u[5KGr{]khۺ@HKEb*< E 4Q)6ЛK&%=30njq ^ZWԏఊ# 9p\0pɩJKԨfC%hmT6 ҊL&^A H6P&u RSku0yެcrIV鞿%bI>[1𗿾UD1MxII5[/}Urnn_҉-zd:o//>;I1!oտsYNA͡z{}'Wц|H۽?I/ (%ǃ*VPnu?^MJib(Pʙ0k@3E0Gi+o>:߈DAkCgYv?1Tp-H e#ګ ;}/@,W0Ԓ8ԡ|xXޜ<t ǦT}eh&Ֆ'I;qUqQ5iqeQ^0j7Y(Cw[ϘܗoJ*IX#{%2^_i©ӣ8)ϗN{QwYbs%_d0K&Ә~SɇY_>LFw=Ύ>ׇcKӏgQ&& Ghҟt<}$\QS6Kn֪Oo^W\aki;?ôʙHB^ZCWi7 !hNw4nALOgj.$䅋LL}xo#3C)B@c p-3hjVV3V,MMㄭlU3R#97 NJ:9 ˀ?3 x @URtOVFr;(IҵNu^3(|u&eQɚjrX Zɂ?/Yg!FǓ1pn{h<P*eg"1wwW2'SH*6?Vx!AEO0"DUBDHh$ziTR_B)Txb0<4;xW*FaϣQ)zK!Oudhj\7%v -q"sk>\-3TS>Y!˔{p ?E2h,>w@J֌U;wTĪcZq_Q22I:=Nv'|Z,J{zW̬]>XJ%| Z(ufST֐b)f@5&O:l:)8褱F p$$T.Xx|>7 g$Qkh"J\NSto.l݀&`04o{7hϠƒ npъb蝅F7 "~|l\dW/uDǒqSb?>Őnҁ/|=Øk/J2GOOBوR(WWIZ#-]7%^*[T6&];5XwVn1I+J6Wfc3 @hW35]-9ꤾYk&W,d-u/Oy|Hg.l4L> E G»GoF,[Ήb܈`4h/KK6\Z񕔄aXoK,%{(c\ftfobzG]N}!Ek0e_]Ō>DdƽOްt|ͩF&{*ح^w5|EGu?u7LJ2Z%]FwQFk[%ۗ-^(L2Jka Э(V36U GD}Ȫ xJunnh79nu!r뺹yX>OH]7 2lCc!]7Co03 p0us>F9'0v; ]TJ5y˧\SLIdtwKْz/]iH?v,N6q98.' *} awIbId RV:2lΈ 7?,t޲\*U9W^om[JӲ_J'izil|c=wTs뫳zí2):Ɛzۧ%LW[|NYoXe14W&UUԺ^ E*icu'(Jʹ[juCCq\#v70AH͂TnRE;8?T&eՓI#ۅ3@TV  ߪoDC.亮qQz1A*,^RPŸ}XjB/Ūiam6r1NEϪ$ZYo_(o|"lQ)lqĸ|쪎u/*yÚBe#\ -F*_ (yL.-}q:E&i "6,Y$c7c֚e:*KJydF}Kf͗ZbJ#xS`db(#R/+g]ѹ$ޡ3(|I5_5GKRqeMW¥Ԧ ]eOֲ?ʬ6XB2ڮ2dv!ВZ}IrlXDbǦ]1Uk\~8;^QpR*F~;kX4sp8fLLʾOjL:ElЯ!go>$.X.?4Jo;3vbo ѲQ>Hy8|s'>sZM~hwYҪ\]\\? 
Wې ~{[p If[b xD]`;XIi!9 2DOw3Ɲj lYcճyO ҇s'?wk\\ 7g -/s-linaWغ驅LR樔zN hwP\2?4"m(nxl`LEqPLe2!]QNZC<{_ƯπC< B254qLpz/ko\g HHzn(>B9l-˽~ܐQ0;hBgc'e6 r$faT=kƟ!4ry_*ns qݎ"jK1`V bͭ|Vɱ3f`6tqwʦ4S?٢9^ȧp@T8ͶH0N1!#Bŀ /c\b&aE$iPJBdSҚ1 yajh#rw\}a| &Hp>Ӻ>⨅Ƹ4WmaQhwb /\pi 7E J77cA,Fi K" W9i`f*kjPmFLRu/á)!0zc7 9OI!dW 5ge-((5z"cFnowן?[9 08- :ɜ⹳a6+p[3Ɲڽ+{Eᦋ :>s!JW!_pJs%9+V¨ݽh p*a42co P5]6ns|[_tJE@(*r|Γ)xɿޕ/[vTek,xE`~s\<*q* Exia 4sL yb W""(3X )I~,-M|!>*}:ۃR@2I#X:h^y@@]6D=̓hV61fm/mrE,Nך1΢1iIN[Dt,#MY(eY6yPvV}}jntoU>d zP7V=L׺=5O60ᓙ0D 1 lYpdJ[ B:/Ȕ1 1 +\BCXXҵv`u\~qُ?+ͬs IVV'!*T%9"iS *UV5Vo%w#VmP9C ؄\I1J r69'"YyLN#(w2k>J-e  efݱT_U@bF S^FC6|M$ r &4k)KA$Kq\X%efлYYǐSMLk5x2`.^0e|"e(FU^zEG5mu%*2^#i]E oBs]6cVAc2IM79$1rB!Zg3ZLW"}FԁSnj0ͥc ïKځϛ8`AQz??^^Omw#sRg~:ЧK[u/~k˝P@ }B`W/5OOܪSw:oJI&C t)+?>y}o^^bUU*=myw y\se.]J:u/c\oaDRL)TFLQ_5m4Ne\)_x;?ǫ&ZL^,p-LaU-q]I>b\|bώon|.gɕE˃˓i1&%ciEE}t&>'wԿM7y@#3Cy n'ҳ踴r,ėyK9ƍ[$)Hmg`(C[3#^RkI^o'z@!*֌Ut*i6¹v[gkL{5 v}h4^#o/*Kr;o7l;d{0!hhؽXl_0Ѷg>X{X}BaNBQ7;q*VV{?Oeh kiGLZm=t&Ia';\d(F^l `4ow%r\!p@v؝=.~$V̼y }H J&Ok)IuJe#Fo#{%;ɜ*jj4᠌K2EoŔ> @?X͵"eDRu%. Ð9%j AgtIfX \(P?{Wr}T߇-++Al =}X|+<RP3Ri>aO.9)ީ57t+YIخ vcE}W E+T a̍Dye SN3H 8I h#v|l !ڭK Xa91tyٮrHSV<N:P )CXKA '_$AdCAL&'kC yv GCtE$K98E<˭G#[([gsXInj~*YE8`+V8C2Qb0.9PM qchӯ- ĸ|Cg 1{6aFw u c1QlK(!@ ة%\`6%!<-)RZRE҅y%& KHblo n ӒJluYI\3$g,|}k]$ 9xQksYR|A]J/5@Y{`Va?d2F$oWt*+KBa ۷5M3KJ4>#0f vD"`-ohgM&V s.8Xc@YRPD* }zY?rǻH]IE8~sa"?=\¡Jv hTZ;Jء:ݚ,le|кXU# ᚼT.S#٤gTE*8x:I }X׋*kt%v D1ɵϩдd4h :k*%б׏ ~qdN}p9DgDkΈ>Zs~=E6ه󬞟~~wjʴYO4fuJ .H_~u8sB۬/Ȏ<ձT}+ryRQHȧ3$4{F 3-F韀ٵysqVgSI$oISE~-ip/!%PxQmi/JGN]H ´A:cJi'Dn9m>Yt*̀l8 )ǦXA (WT}vjӿO} ҞV]7t.t7;3Mf΢Ti>Χ:@vB•hg츘W"srBI5ڕ¯ _yBˤRvD~NWm{Z3kJBϫչdނF4Vv@OESug)1^P؂4U~vjfW/[EUΎB6e=MKb}B뼫sN,pD5v_'zs RWgTWI*NcpkDɱM "٪f 4i,'{k+kz&6q~?׿^ZQl>>?v|V{s,)P ",H|L.bybܟ wXKυTc`43 %Ƽp'K+< S~kipkwĐd;_7q=􇛏׿{=-tdoC}1_zyۄO> d`IB&0ℍ93,%S: $ 0)$kphmST;VBo؛pt_A*A8A>`ιcEH{8$FZsD(a$!-ϸ~79J':[7PR9jg Pb@Kj-i&ÉtH.1դ^`Zr4. 
ÜӎrVȀ$Zx.t =H,U<pǷ>&&8 bާd'`9N^>r!|"/].Xk\xy`q5G_z "p'.'Z_\A/L%7~NLgZꓢ-u><`FE\$8%]cKm}73{+sBmpcq!jTN@/ ZrEJHM9S Aq{JD1׹_f{?1;%{\*}}+`40tE!wCGeKZ"Mno&F߇m |dco_?K F$ =p]dt[7idL*[~鹿=:%3cp4/Nٝܺȥm [$oߪ0&Cr\QtputN7cZgU88{T՝=j;S *bƥUc.(=UXKJt8Р*Hh<<;Ўd gd5Xk8ſ/Q`6i;x6^j^VAqgd&\c%Pl}A5y#8F% H$!F;*KPB*Uc9c"ibz@1⠉vql`AXᝅvHJ5Ug]i qoA\$HEko'^Zݭ p*JՎ~^Bbb4oh~ ]/:OZ3F?XxwfY!:zR+}YxP"/ e--"Y\D{f YMw鿣X>rɚB˜Z .~:yQpK ruX7q|wwfG@MUhBҼ)=aoMakr pѧI~H Ȃ]Nc̟gՁ |xl8?\5qovsBaoӎp IL3P]N5fc9V^ajNNzNqB+HofeZםivtFXcJJ/':PXǜJAN4pR >馺̇3}nWb)fKcLgZ7~޵u#`{,ފ4AY Meݶrߧ-ɧoOsZX/U_UdU1Mh=w{O'E3h !lB(h$D˷/'e~8~k!܆?;m.߼~u#74cHKs*Zj "1sQZÂot[X~vֽ7zF8E\,-Gm/j41]Fl75:ANy tvOtѡ0 яf@sYt=0qrv9rjþgmCFuP /S?pqEBiIlٛѳ7a˯.d V/Q,ΰS\~ERl&4qqPcE'pݠ1R h (aەj};Ka}}#s#JtbܧJ6}ޞ}zk} ?ʻ-K[y'Rd }C𨅒GI:aSO[͔;]ETk %Y"ћlc+&xG`eZF6yށjD_t גb 2x, 3$8gsIAT>J" ٸ$N,!QeC5®{7v?=5Z3lt}djh$pე[@IY{{?)ڥ%2-6!dv_.g_!^Rj;\ޜErcZMެ>ds,)/ )tyC󣱩Ef5瞲۾Cec}Hֺ5]YIXr~ mz DҼ%La4+W>:%Z}u޴ntuAu{:Kp0֛Z6|*K=zyúql=M֭/5w.邞"S%fz3Zk݆А\E)Z bywH>\W)J8]tvw`mrmšγCX`\7PZs&U}|D.ҒBU_\(QDQ9뙨z8)A~+Ozݹz ~ Ac |P#8P L aeT4Dȁ94>`$z}YC."iS4QqB7 Qtx90{6=LJd/!Ց`%G-HŜpX1c9zxꁿ&O;+Ѭw8ZݢVȌv٠2Y력5gޓvXbr1Yy!T Nq~TSQ&[0!IL֚<:bi*N Ru B==2P6kQ| 5 杄HP<-YWA *90ʜ7jQZy/0 z&1Ƥ0%23)'<&۴M-5YmX(d6VAt$nEgNX8Yi\M 1*m8p2Y YriGO =SVzOQb6\{ #80/LY)qB1=,lafewI;[; [D]%{A0H ɁY""pJ=H5\\^%lƨ\'0!Z9-|̂Sz%C؂-sry{ŃA|m染oK5M~uSBe6~˞57_{U0;Rv ̎LantѢ}~q[Oiv+笳3?ϗu^͝=Vaqk5dt#!+ - -Ղj| gm/`t[<$+?VsOIlk<4,9ZVoo;̠VnȣG]2:ҥ|~K n{ s@׋ ϭ,Vcp\qp4p0[F"_'X#՛kBW0>/ԥ C\:Kj$_fm[E1;"EׂwiH/ƘPRQ}qیQֶ /6wd+ZKرmx/ǮV=0LZ_[m]N>=@UkdYpc+dJ]ePQz0'$$$xO?LQ@Zn 6#&h6U-= ^hѸsQ2Z/΀(Ko 1k"Ŵ24PhX@7©BZUf#8 o" +x5*^ꇖyEy%c5*Ĺ]ƏMM(*5^]ϫCGkm%\[iޟdEG z Vj;I c9m* wT\?|֋X.[ FLAJ.9K$08dnm<[*(:ˇNKxocnA1rξ}ߘ_[X ^1/g)Գ:x5hP,c*+0+yW끅ɣI}Oh_iI߽ͮIdBaJ\W^tƧyʤZ߭e/k}Y;kv "At!*Eb4ʊ%GIh50w !a,UE\i5;ћɻEBD2IfV,Lhis2Z.aOcT곙%yR*] RF+R7x% LQDh\V)C `~r]!C-՜*{\7pgs#p3 Z#Mݺk"hme69F- 8f(Z"B#ò~J۸!V#A`*NT-9@&ͭ2wL(D{XykH>b$[Ɏ l%jv~|' p<OG.@٭H?^_]|=4-р>  9BymSw3"C)h#\u^r`jX@|3ORXiC8WSK>$l`:MiɝЗD(P<=u1hj--1-|\PQjXg{>ΐe;8֐T}|ԆDiN9nv2=*CKԢ:D7]~}幝KԔC59z+;j#P=oRO,!r` @bgA4@1:syևHQw՚]5=Т;C~kDh}BJ&\ɋ܅uʄ0SP٩#J/K-z .teOMV>"йOPGƐc㭕ЛcHyA0"1}i _^W5t C?bЋն~y}IIG$ʮv;h5ѸnYy٩\ g!M*a٢Osoϒ+57WʿpO[P>'h/5^9Y[Ͻ?~'7{}v_Nf׷s<)f^&bؠd5b^ff!U0onHz)m#-=׌0jD016,%q9&LE3_3+l`f>eMh<&T""a7{WG {VfD%@/^>Bg)Q&/դbdVE6"< iF| Q@^ccC)_P(q*v4Um ;3i7^Ucr@ Z~7rz׌@4.-L{=Ķ̺}U->G?0ۣm,iE!=1Ugc:N wFLN u%`v)fb2}4X87joϮbG8=ȻW;:"w0$НEbs=|fm9iփIYYcSꇂn_L `L'=fͫ{Sy ҁl(HԐҾitUL <]6P=e=M9wj{qcLwgLԯEoI-2 񗂮{|~d_%"P`iՕ/C3ș(hwl{jS--*E8Z% ԏ$@ eب Z>z򞳥o08;sRHRh͙0+ 2NW ,aXJuh̓S~8;jAuaV`]Q>\5;}oYmtd)5ۯ5/?] 
7[{˝G̋K5N7o$# 딷2hZ( T!%+x(z\P YKq20/i^tʁAicDNJ#֩$dPJ5m CYcNр?_^NOZhD1P4rŔp: eb9KbNQVt2X||:CʡJ(pOIɤI+4WW]*)FCQF=!6nLd 5լv:h"?B;CXqMi0I͎5L.MT[!x"8) WD6Ț?d/ DrVr l%QȪup2RsڭŁ X ,!AA0< ܇8ǡۨ i <ņAp:P Pae6G~ǤxJ7֦SpJSq4=Yv^㵞yWԸ0nG9T/7s -".( N(nKtn5O/gj2/>L,:]4N/KDl`{qO$PxO$P}D~Fg-xbẓ_et/lMHsz۳n1FbQ݆@pB#^->|cv=N|3}]iLή>"L~9ֶD[hQD ]=T(ЄRA ]s*m\^"-LDnLOo$U}'2ǣ-#9)&z8&iM 3@v'ُ Rө%@>`s :Q( BXLiE@Q֩vPJ+8KrCAd-tؚmLPT(qDSrdFο̉]9FxtmdhnF׭W\uԃo.5[%NSN#gB#g ,ݿ\[щpR͙%zOWGq%hVȍi+W kkUڲ6l>V-W{PENkGCvZyRe5X.dh,>Xj%zӅϛ2ngHC-U I*贐; ƃQ CcAĘC]N-I]h8o,܍DŽaNfta{$c$df:n^H5187J"8;]tGݤT4{IjGdOiHNO^ eKm,.> IvqgD0gE_b=H\i$W_/]ϝB֖l =^:{GJ3Y޶OW<__.fuo;ϰzݶ+Z+mzMs.= 6+?|P],7A69?]ws#zԘwn;+iujlm8IwB~p`S(@,~˻!u.|ӻ Ġtw;1봳:w|wֻ`!?)i^w3J/ Štw;  m8'ZwB~p=ئӚg;/j 8jh4EMbbKkRyŤڡ#ylʷ9x5 'M8Ti[ia*"޽}5{t^ =X".gߤ[,JhUjZ:/4g Neʯ&NMi3VbJ&ԝYD䗔0lGn$1v/׸ {F//)gQ*,sz>/vf6FBݎ_=h[Tj6-5z'˦X[ܩ1R>lOaQMG{8[ˋ 64oEo-}עDy-8)PĨs}.!Rm "ϟX[{q/z{v}m\.i Y~̛W#.l%5 Q/3DK$+z?^fj뗳/ן[uųgˋoVEݛTެȶ͙OTLS)ؐB2l$xIҮNye*$)([RٟJ2BG7|y)d)}mK@dU[QǣΙTXr8);Za& 4$xB`\ S*hbe2ZЊ!jZ-$Ŧ|ʖf!M^[[,b@x%-L12{&JXkp$TŻ B x>hANu'ZZN w-)*>+>2DjJV:-PΪܒz-[?dvb&[-E2A%J# +QosAc&RrK Ph<)VVFԋ[ z('F1q,/@͉ VAd P Y'tY^ R[nd/)SH} /V"66BIBP<:x +]ev}aR#!iԖ]eJ^%8ӕ r *NV1B չ%Kj0kʁ=wT=B*+v:bnׅ28tі>z`siԖĉ_!sbGPXH{nwlJLP2(Merg#˜ A*mJENf?f9f[eBJ}aޔZ>.z5ǥ E m v󇳛^2_H9_)#\]o|lcplRR'qsvDݿ;v>z0w-|r:~(Ʒ*Y;nϊ9G Ѹ/r i:{KX)Z=ʅq EA+te Qʥ uZDpfHT2{T{8CMJq!K)By%c9\j)۵^N ^vd\ѷ~iG !9x.+^# K ;ݒװyYactCO-%Fb :\j$9?݄mFyg涖}vZӜp 0)訒f\-Ǿq79~Ip=D\`_ 4_}̷T7x-7kRDcy#PǬhvkJNW/6W=8E("3%郤 QIFR,H9 W~F|~KzlѕX@&P&Lf;6ywU֧5]8g5v௴=tj<1i.yy,n1S_(~OipbFHpq1աv5@ϔοfu6~A3@Pb:{ȵ|QZ_h^@]s6W4rRȌ#)# k{vpe#2F<ɽLWN@Hʴ`rkzF[ ZR\Gd߫Ax;qBdXz׆1=m#TRf8eyRi)HsNol.n\Sg6hX:IMʨ2s) PNs=1` I{"kh ̼Te) C 6BQōɜ2.3#BҌHRVa"`i Tfa\8b¨;BM&졝|v2q989GnnEstїv2QNCɃ&+#Ě4uƗZjsUH 9!$O6UۄQ} 58c^G\0(h-IC׾(gavP{v5&'}ykVLuenfӛ7R V?fzqRA}}Dž ,pO]Pk^2+;XV]y#K=~|寖͇_>z|izJ#dFU1Ej1 x]Fuf*ȏ}+sKKL*&d47pLSǎ^5HVYw^\)q(nRkT i#iy',Ļ lhA|ڨJi2F,cy@tE,M,GP)փQ,x79[Wzv&-qac.p"xFD(&L׺ZUә:vgvqRfVx{) ^jxOͳb߄HtgB XUgE}F1]G6wB+uzӷc8rX9YYnsGAn`@y2OJXnT?2J6wu_T]ݗǏkAu =N`5<`RjFA1&.K ddAdvzq;>߭SMML>Ned-蜢 v[u]_: ;o0=&ig"@pơ Wxk_e>;(*&d˝㸭Vz2pCFV2mt|#+Y34tXGJ\p(P Πo4b8{\$ Qs1DϜJVBQuP =2JB7tM \kQwB>()kHG@cP>@8NN\MZ+>fD2. S=hS[%ۻa.blkQ~OuI5= 'vr5=^A.]>*3VQǖ.hܷ GѡѲmP;YN>歚49(ts1\;)N-M=Lx۠,B+\jjhjIc6 u.[CCOgAfjڳ393q:IQfkaߢjHƹ<)\cMWUwVQ|H&J;@bN(89mP`4ը4M*l^E[D0Z]+d QTTj7ulw/5Ӕ#D?fZ6Lh#krȫVGKT%>S*Ģ r9w*-ҁB̐ٿF~3(6xG.=qUb"Ѓ;"yq5T]}C >a|b" qJ<|*oQGf0"z`^Fq#ƼG stnznωj-J쌼@pNp[/#:ss Ԃ>G vb~ɘN >fNtqŕLT i6=wdc7Bų_CZ1v?pR_L->{q!zQ;u=i7nw9^80p.1 +,װ0eN~4EvKhZeJ1ܧ%5H5Fx"\AI[,H(r;sh+ʔBh*iZZCw5B"U#N{*Ɂ'hLJ4ͽv@Jp߂iĵX"‹#1A?_,%d'!;/FO(wh1|4 8Z޺9Q+96SzQi uO0.zp_iڥ:'#E -c|ҹy۪<,Wl\|y\Q)F8e<ϻ3t$mE%a?=GC@'g+(I}PAmr3-ǽs)1gou?Vr!x E9*ptŜ@IB'M޾$v4ܗb8sS1T1Z?'FsCkh3QC)(ی42љl疶N_htUsN.y'3SJ.!3MD `Z3: `wb"٧=| ɚ2ŚnXg.z-W*\9 ]哞<'!/PAx?#H׳wB[Q=-VRs#JSx5֥T8 I(7E *]O ƿ&B/ $LR?_h8;Ꭳqo\'$%:/b"N+#:rV*&x?3cy.9ysO2k|,M|=x7i6~~Cg/z=@l6P<(9X|~,>&Y?Yx91RS$X颽Ӌ8 8 Ge16J g_S̓,4ӖKr'S!Sb\Eĸ˟.Ƹ#yNbc\ViYB~YY`u1VǸlu<ўp*peqUZMB9LdQ `ݡ5_d .~zë+Xe v:~7 ҆O2y|=hqq֔cbr#b`8*c<ϼB1 s4cќ<˝tuJ@Fl`+Ϗ1f3Jjmu0㊓]WxH\ d?,?GA^D%9 _ꕃ95"p%RQrŹ{crySҔ+d:5'6{-٦6GP'6^]Ks=iIFQRa5 ENR,\Z j&Mm w:!/[)#Lb824B!顿4+uPVrx5ZCBp"fJ#BMP;JaKE fo aX9$WDY[==n֊+..XAZ߱wo~?,=0/x~ ={u Af;lX1%:C--C.-_ȱ߹^anжd.mx8Q:påG>z{ j8'"e`B>4O=7GX6%$Wk) @3/˸zܡp oYP+ Z2-֥h4x{EQrVe=Pe +5RLQ=RzլWLb&2'sŹNтPf[AKEs#: y J*KL].ٶ Rcs5~D p"5c⺓+&tߞnɁ +ԦJd9і$n6*TR\gߧ9\h#iʈg"%ijh͔Ol6`gT"L,>Sk1w8dNVء"T+auP d;MzEAvء6>:A0Q9\{%rmXh+3h8YY%AVB&l. 
gӔj,(㉣gLoZp"ah٢xR奴G;\.#\4nT+.>Y؄tK]ʅBvƆ H!;PKZ1{]1*=jP V-Ek>(tw]In 䪷VF~keA唐@]/J}Ջi|K;iȕ9,]V@7k#LG?{;{{uIET.l7-9}x=j4q:]cO3[P~7X$'kpٻ7$W~I}ЃWk=+S!Y3H]\xgOJ‘Ɂ:ZqS~aVHYPi ATy8j_+@ΌtNsE:@1cH+8}SJEcܳLYP!!Y3J/5"R,%Lw 15zA:G$fDGE.)Tbr^)ji%3N>H1X ΊZ1s&b2X̱4\uHc8tX|yכ>~Nl`ϏfE3PRiW7/>ًȾ4"ܔn!f&baly?ܼ @Wc!32io4݂_9 @tt/e]%9LiW$&S&N_)?~-APF=@@JA T[whIndjZ(=t,(@UzˁP܅*Y-,x8U@`8۽_aG'4^&) MeH&jXyq=.]U>J>U)H k-rsDNIlu}Z\|6b.b*k Z&{ ⴱ,O6: ֝,v#v4_MXz2c)cM{q"l)%tz,Gz|qn;|p}`t,<'|0&vFxy7|uGeapڍzsd4J3,Jg7Lm,{i7C ѕ'C7 ,(j bgmTvui_im(,T6t¤X`9x"[ cWooݛ7uvG&pf-{UF4_qg#ju<ª }yR6ܮ+f&}ݯx!vuxP!P5r)ENNiC6ȁh$+1d ʽ> X+3mpd'*XRdyڱXIr' y), ,Ph'`s~WgW5a]+o 8 G:!ՠ?m>k`wI3TGNs ^7H ?UA_SjIqMP_59IM>Ue'm`eu@W?U{Ž0`sQxLBJ)bu7}XzwJpPUz!0U5؁Xs]5=ÂdۮΫr}3^|!L⣱?YXxP\2< Hpϑy7{/1S(VFP(@Ђ#38%,""eͺVGN8p9 Tkw R4L#RQ OS; E9":itBu"B?%u6B':E"SF_bNsz-7oLbH~t&D<>\ ƱbRʇޒo!:G9!"a ~&E&_@Q_^֍fƈ[Ǡ92Ŵ˽WZpMol77AAx&Lq48cP^N&ԯC'L˜}.e7뵄rtktw_*ӜJ*ePp۾Kzw7 zfr%2{Qw>O`f,#NWkUXҺjH1MV.`aHsT QW**^L)pDqr,bއȑ9 E"_j0:2S kwtP!z.p !}MY`xnr1A|3K0sQ-+ Wqt`A+l=2y4c"Yw^lݮ[PDv}œÛ,FbN_zT IgW1ol28U.j 1!VӹøcRWM9JA$jŐ:PIC <ΡlCs(j,7[لϯtwIM0a2dBYUb,8(.P#X(ARD3O"6A[˔$$e*e?[u:3Gn5T?<!=nj49ɓSB:c{iȕHى*B EOJA h2{.+e˲2eV^t{ENcT28;c5<2,jMGQ@zH@,]D#'UނpbV\=5JCXJwd޸+;aGsbGD0^:pZDl!@}}_'9ϛ/!xx XR6X[:E eƙ.LI}N*Q#i.̸ ໲'O+~ƒ_~^ׯD@" nMaZ7\Xo&]91/?&@mx O(E1T7\\>8Lon$&oIMZ!#uGYqn'չnQ67ҦHRٻFr#W @kb|NK/ce FsJVh'P E:0Γb1CZ' 6G 0΢VAJ?Yd ļZr8;كY̭2qBmqZ_856JW&O˦yEKjѵFz>;{OhM!S)#-XИ  Z/BchC}vִZSP A(*1j; %m e`4tSQnAzS>jv_PQZأiuwJ5ʿtyB`$GU#[>}b[iݟ=\#[1 8a)Hx {yYIS/IvW&ܻû]Wt+>Lz<@Ù{_]r:=?VO|ћ_}RܡxW/G3EϤGC<}f-!d+I_ _sO ^f֓ ~K6@h/ڕ"F 3W%]fz-)n;Z: a!߹T!Ip[s _07S),06PXp}O9r֥gO}_ߦpn v9{\hƿ[D17H=9٢ԉ֮QeyC?ha/4|KZW5o[P~C)_AOC5oM^jq %O ~}k !q O`4>;\O2jEG>.}2!)Iޛ|py|f\pzӻ.*.UG^2E޿DO~{ͅ*\ >}>]oȚ=#&2$1E28O^b7i~fh*5ǠڋL㞘"-t9B"۟U̮a&-CjSGL}e{'$3P:Wn}ƫ<<=7t'T+)AT84v$pOj`#a{ ފ l hX)$ 6-@ u>$٘pI60\'m@z=ը6maPOaA15 o>Piu'Z'G¹yKc#2}e4C`M38_Dx/6R~MAs~tSVJ Q8{ De \j>uqzU#S;LjV'qط倛̮~7: zQozuN7w\aH>BKs2^W>'ٻV^Mru˗>獽.U?hڬnG"QLGnHimX0Gf6w[Q]US2&HG`ђA 2Z=pB8?kH19k/u IN~'܌珆f`q)쓙 mݑDLpD<y6lmcn'bbZDpXTѹ3pDqi%mT&H}q"/ꕈwo$keM;S(K滒0}EI겅jSM-*mgLmZ]r>`NFl g:x9)e\) ]K>jE.[[#QH0> j(OӬܯ WH3U`#DYr=ɵ\X=`]\gڞbSxb:Eu,af~L+"O:KADU62]Is U{XI LL`a oblݛؚ .EJ8޺H+ dXcqXKh1"BS ^!!Y+&\EY mhz@h͖M÷K)hwY/jbeJ_WX'_|I :_}Z`zJ孮"R\MH_ ǰͽ*8r+OMI{||,-'ۿ׷g?([S<͍fwqr}[|xq1bqs=ό&۸rq3]Nf&UP*s&lSU__?PYH5Oa?6K͎ܻ9Ah5͓i*K-+]LʊhIޓږ). 
{d0 %{88NQ4D IcXsg`4\l!Bv 95JLy ) 1@Agm&fGvr 厁r^Lm S"\d-]J^ ]ڪCju ԎBT^FQRĘbJ ayށkO*B71Y)y.khsfX|\).6ո&"$e^@9@I yAh%0^@I 뼀)QfbO^@G/y  sS%r?Y=2YǼwTNMx:{=L|DXE.)wsCP( eF5()%Yüh[W,ηA,~)(IS^j108/Z imFI)6|4&vi (COT+5ݖ y6MC^{DAzy:H$IӊF$4eA?y6LqlmlSܭ P9R9%RJ8 €72y-W<(/EqP֬r5)X2鬜Z#2jC2)$%Ҥ%"JkRevC*r:z~^m{k}t(x:|A;_L;1\E[x)^*]$U {|Cv1Bq CnR*wsDFHe 4(Jê xps㖼d;UPZ7[Ƹ͂c9G,k6L& n \(jM69+:09[ؤ-3$=!xN'3#B+}B\}lE+$rfJ ⻴{cQ#?փR4Tnv棓+ 8ԮroW|r'owWy փQɤbF(B:jOFYYolp'#X'mH2Jz~ɌUu(kSt> J$F8ݎm2Ed3);Hxz\ 0) 뜖JJIJ7eR5wqŽ㌸H<2ieI@И@e*Hd$nA"U" ɨ.dT 6E dV)aFL"eCi( %:4:݅ZrB䪉poje uL3̦VUSVՕ,'ik9R謦V"U !Ĭ7 :-'ҸGiN<*]T0|y#%5=n/ pC#-Y3侙#sI< ڣ*g \NQ'6 |?<9@ǣ Z%SNE'-/Ѵ(8 WשWd Q$,?z;p縿 2_88_P@>S%iʀ3q˳;p)ߣ1fg2|h5u Zbq|B ,yW 0և46W2Rc ޣW2bAy)94gC˱ڤ甞IkY[Zx)F^@P6Ρ Q"ţQˈNM r1dCʗFDQs37O6ϗnmp@8ϐ]O*cAj?؆Jn6К`[-NG<պ (jZRG#p9*ׁ Xih=( ΅ 5]`F+9hVN=ԀFrNT}C<m&ƒrBI3jE AAcP:%$W҂uʉEe(3t"Vs/DtB[gFjIu,V  Av@c ȮB.KgEMq]=&A\W![MJrHvp+WB(i FDi&N, th;7{[SɲN"7 2GJ i9CC ~ jNF7o[XqUf?&b&GORX }zW^݆t#'nE1!;gE CT`3Ѿ([la·-ҙ6a!kMu qWw*lr@'Zزi-р~V26AW3i#ĄS0H-fm-ˋWyDڽ$kW؁{ttxd֒£e3i@/ю_]}\_BГJ@Jt3os<+=p#K﮿z8x}~atY~'k~7d{nQBMvĭ>4Fέz\\CJjj0ߒve#u*<|$[|~XURIK_蕾;yב8Jt&Dw'NLA9x}wpjFCQOog!h;k[{2;^%yO1w/i5Ttu%-IIg>4m)g ӖDŖC:xGE9arވ MҴۆҷ&^Ӓx&@JܦTX#9-3̂҇H,1"q"nyd<8aJGD8wD!X`8z.Ntd<`~2T.}R*D4f]4z҇>[y'.gPC ڍx>&W4k:IOPH)hndr3 ̿8cկl.׋_w/iNG)L1r"-/\ZwX:#60o $kEznHMl!WKerq>Mşb#_G?-%J `Be0g^koIN Af!ձuZG 7Ku'}үJ3ot`'чˋYPXe Pq0,Ѕ)Pݹ{Ab [D;ARR;1e(a?KT^Yy5,ի%=)uҏt|K}j0EZ +L!}@7?ΐT! oL=U!M4*C'48j+G,tN;YAr`;!™nțǨz"\Y־P|g ]N~%٦zuH}ǫvK.!:u/k'k/)0NVe8YT??VY=0T溈* %,o >BZOz N@qnvǶJrj,nxWGgBy2oV_<.9ǁ2xLuRsmUzdtKl;vi˦nht"g559?}{q %jFb?9n~x#L9 b(kJOp(P04ZJ)Kbv~3/KĎD*JE)D8JHgQОJӈ?h2iˆ(JD/YXL" i_`!zNh8UBH?! bi)igݬ$`<ZLL34ϴq*Κ>]_a`E4UPȖO,: 5Qo\zGZ4pVP)iaIi 7 9庩BU3r^kmW)Vܷpe&Y8I3Ȑ= L?M/dϑŦ x5Q[ rY&V4s+j5Pq4您 s",hk GT C3Qzzy>hJVGʑŜ*wx$2W$ p49. 3DdiYJL˨^ zS@q;`߅B>zx> ۛOἋZ]H,;x7H_I΁m6Yxj>.'k>.'r^а;Ӑs@>,gzɠ>`[:w />s^|C2\=ٳSثw`2G%\tߵ;;hkHRU׬Z, ā!eg{P#2I}}.d)/3v#䉹MYd+'6HIMih-/VS:;Xʈ^ Xf{5"\31hmj'p%ϱGά7Oc@rWC,լz*l~=5=01\Bm'F]DB )gZ6{ AG_2Y*u7< Uc`x~;FSie&ejF)?fC} 5\>fͤ* !,\4RڋX#cICsVX;$Y~ nl_5 /}GEob),ps9~͸}b+g{~7k$Ue ѷkd6s}GJh 7J{4ѵ邽fk R"4"? iDJIQ̏#.p@TQ@ ︯n@ei{MFt9M֤>qGO&9sn4y'JctQKI%ȴ1IGKցaSiڮs0ڀڐInL@oߧf:m0?YW?=76hIWҌ,oە^!/D x8KP}9~]5dֿ)&V- 9|No_tN5\;4lGu%G)oé 8T[}|0IkE\qu<X:0[>(F9%xD>R[AOmݽ'i:K% f=JX'rXtP䭍hO>$9_GWE45ZK9G4,$GSGVO:X3[͌Au&-pApL8No1ކi5l{\? _$:S*a`K5yy18 \o/Is¢J'e !p28bH-$ D'Qτeq&&x(Sn?{WF/{VH_ 9n&~E@6ى'd2Wlv-lurllŮ7dSIx|~6n7Au+uLzHA))A-l镽3<څTQ_gdWc#;Bo>ݯN+qrN2"NXh5rܳs>=M- 0+N 8O 2Wu4`Df"8Ҭ;D w!Ooڦ2IT G3 dfcЍ?0†4F̃c 8uY\2N id-Vxɹ}&0eUBlR.ύv9'F%(pD o& L) -]wc\tQEcREÍ"Ƥޮ \ bgEtJ3|tKͩ4&J1jRhGk႙FWyĻnP(:cOb{1$n\٨7v<>/O^u;Ԑ -П}5e(dk,H, KNH,ͥ0J6U6(b@+&VQ+x6ýeQj<O(a9.\i]A>n_jmkеS,'X#a0Ӑ#x(rD#Eod]MLBBIzxësݻדP@6jfXskN@3R|whfE s8^à N/ބDsT3uN3r*Fn&BYnZmby F~ZDqvPG 5ENJfv0lN5番f UWj:SmVJ+d4_xףYP"E0v1G! ~9;{ m}y7a^ b0T|(S(6k)JhTlv]DH+6k vZlց|&ZDP"ҧmAN'!۴ g辧BL).t4 N`[K:e|L&u\*2dZzt{&I9a f29mT_ԩ)S|y"2VWLü>̣/y ;D1_Wl}l [Éϸ&yWI2,zOa~;Y,vF7-?1bb1iL GϕPsn /p~㚸7pGDt/b/K;ĉ纴}cy>YѶWB[ sf$>S/#1C vad>Lqg PR@!2&5)~SX$HFL}V:%qz$҄@E_7Jbܬ q7v}Pl S$q gR""F5#VʸԪMp=-228#7Ɏg^Q// l)e F+*HD06ådIMBE^0vSFHd^ c(U]O?g7'IRf||(Iu;q|QFVZߺtRFXk:Md`Kv/Pmȩ6"mjV곢= 9YQ[)$R(fV *u:=n+M;(ET~H+NҒj#T@ k'#ðmT_ԩfl$8&ͩ!Xv/T N4 GVЎ4ձJJY)xqﳝ:Ք4-a?>Wh=+f%ձJJN|s1+M;_Q͘<ťGnq0ceƵQ, YT 3 f"pETf\$giFP[X#G4%:-R۠ Ӎ8 E`HG=ˤr: 7Y,oY' W)UÐO^C>|SOɗ_c['y;,r-x"L?vi,MsҬ5ԡYn3su {|1.ſ9'TT{ք7'|uA0ބA@6XäorY9 @wo]1J_݃/˧{z2:iL !M-F><ѻ^ ӆHb6=߻7ҷFҕ][ߤJS `7uWphߗgO[-=M^e&p:so V9'JqN S" C<+\A3RXjԞ| 8GZ\MhO[y5wh(I㌒X1Xm'rT_ԩ撟rŎѨ@meppbUR}V>/q|Q'؈ c>}e%':Ʌv7utm%qvo#"t܏޼]@7_~*[m5* 6tKrϝՙg4dDIt;^nJ>nJ靉jD F-wmW(EV,QI%P*yJUҥcC'Β*JIڤ4Lj[pJeckT`fY7! 
&7Cof_hA1%ۃ ܝxm5f;n N=9<(}3䙮~wCeK]+G$CEHx=G⦷{It<.ݸxϓ=̆P" ן[Șq0rzߕ).%oOaӊdQdov:!fjb/a ?:X-avY|' ~ƦI&o_~~JB!!)uȟMPM32IZԈ$="xѴYΩIZMN@&kQ8 }*SбI'"]‰Ia3մ lUQ]X<]_primQ֍2NňձNkN3D*85{U'nTvڟv'~PJ!SM>UYHrO4p6q.*_b9_،<6_uIhjMӲ([9o ⩰Ei3Bn Jf3T9ѲuFm_t> ',YoʷIiQs*/-k.}lFrt~\<[ʞD`)"Pf(s,QLO8 YhaOc8x9lkmD%MJ;}cJ٧'#1GW&0-[ eHFR6pX`]|-y`J >/x3 G@Wg0UԿ9Cw1/VZy{;Z%P=zWO.v6z$Sf<͸F7+t{}cE֏gR,x|̴6R-Ai-%D d6(KkCdʆ>&Rwv{䌒Q{$V[h۩fy 4BAX_F 1C}JOa1Xof{γ}{SJq# S9؝AN"KSN*$L|+WxnO2E ]ɸ.LxȃɝeOFۍ[e2|}wK1pu⳺I*W?E!t<*BU|AO>&d_y8'@+fwzVVJ):);yFgF'N|uZRa em0cJ5#tbz4JF5EHJɱmH$ PCBI?I" {rf(gj{ɍ_%J&,ppHrYd:Fk,y6EݲjK6)_,{{E)(bޚ&PR$<@&.zqHpY Hpcp𧕉e0"Y.n빯lEU}B#ON70؁IC̣owQK"ry.9M_wgߗg[;]֗g7E\O%5` ! alf>Np%l}eM\pw aZkܺf{{\b1َ׈pSd2+fgHҚwy3kʘZA(d{"njKA5I*^\<WgaOW]cp' D@5pcM{Ǐ^?,,>,O N=_o/σCz?T.(scΞ֥ts`c^SUnew]Jr55x4BRy]NUV`'Ϳܖm/5}v_O#~/Zf#xvX?;ň=QP7^ŅyWWmzP:S͞GVƮ}y~[_h͂d0ͮ> ~}\C+ vOpn'4o"Y7nA6hT35wClBub:n] yd@GBq )l$5<~LLBub:nny +#[ MeShS |uu5ߟAnv{w]n٫Dds}yv5?_-$|ۖtOP[źHS?5IwUF_ X>nS?3?̨gכ9 zVu|fncF|X[_/!JV oD49zJRn479ԍ Sɚ 3w+:Oqh@[b(y T_(,jZ7&:ߣν7i,':.~72w7GOSRu14h΅UP 5b*WP+G_)^? u \NfbDGF4VrR(Kz 0J)Ș!n{j (:'zhqŨp:EcxLULnv| )|c9zTےNdr@tȩ˩WzDWvu>9n8n Z!/gQyQpz"<]lycoֻ{u%xtA;@b< PͤV]*ys1n릌7xAJZ,U[jg9çFNNSޏdeoc|Ͻ7P O"i3oX?TP3هcaf^~*5l`:&,MPR5a=Y;z,NL!@Cf. Y?$1 $Fxei@eJ51q<1:kT:n FrUdH ~~_,qtcɪ+nS{J1v~*)I~{gۣa[);.NIQ̷dkJ. *<|Ȍ :^xcM<th  k)q)Д΅B々R'D(-'$s`P V]N`gckõ_=8\;oL쳹s=&_49J-HG=;.E𴜝 e9ՕN _gmoԪ.4rIG<ܪ0Z*gҭ/v *P֪zk+ﳜÜ 2V5fFY^̾£S铙T=R[;Ԍ=;8x)Y7^&,1~X c4\=қӇ?lPZwY׷eFJLrV'X$fB DrYuhP$/ QRN&8Ww;Fa71\9g?/W+X!FFGLG%9 66X#kUP*`=#S&w ;iptF5i]>3IfsIeiB˂RJRe좒fcJKEcf@6L~̙.pUd)5[ascPhRjLE&"3 7C(!z~AzW*Hn wTm컖'1FIQt!'4.N-hNp;?ѫu#T9#w…I/΍ucԉq->zpnjK`}OuӽwX(u^J01MLA7\}=?b5kÁOXqs>0ZhT٩v0͸QRc 0T :% ]}HZ2U_W<s3ܟu W-A)rח8 Ĉ *BueôIlOWfddh_$2ʫOXNR;]Z[LFIa\TynB-Qε,Tz[sss,[kN7o WoLpeL+Ah^K1cZQ7cZ\r$4Ov $a(&p#gLQE$ e2F=1FĀ__0e ѯRen9%Y_V:qfTPhR:g)HKͅR2,)R YiUA,pGrKB? LV%ªLBeƤQEʂ+H0'*%hJH%P)#Or1P&> G+J[=PA#Pf-81Tc\ Á U*7&U$d^Z{>L(ORiKԄ -Wë́|<zbB /9kl2xT ;g F`bk?58:R՛`sFFOQt'7l5*&V5>9YK%3ꕓzSIi5~)о Jԩ-Jږ:=q570?}sƯbDCT*嘦Zj Q!cBjPYD(>)-ր}{]6p}hةvRS9x@ᱺ7{vW-]~'ua*VMd̓w_禼,U[n D,<6< - xc)Ďۼ'ap9h͸Ҭbq_a ʬOd@DNBTf&:9Ltϥy$r^RL D~ WtOpi2p }(ess7@( 2ۿ1|X5WFΥ߫Q.z7L|3U4<,G D0E'Sg,Bg(; FßJɛJcŸ;f :"Ȩ⚲_R8O1[)x_-ѕ|7;}dڨط5Ժv jZۏ-֫eV<8O( en~~Oz?YdSzoT !+3e&d)ϰe1C #5w@@)eD@v8Y$^¯@O9f>pmkS+Ztܒ^W{j R(+!:9aӺх؊Mӹ!H".Wop}߾~I⫶Č6rx"zuèz15xu1")% FP'3#0|髟Ȃ Z# SeC g9O0dsŌDMi}7]s0wc3 pe?_t}'̎|]x+Nz7.Q( ہwHn6r-#)ٜl#( EEO5S WoZcu.yZNC5eHs9ǠpS`AЌc RBI0ڋ^ Ȁb!TFl!-TIPh"+ˋ4͐eMsN f,O lyftPvZ5T)RH~ž4]Xi}hkIyS83^'47ڹтɕKg%6{4n-*;ͺNqvpwRԈngwh26/䰷xUn;˳[K<{GKisHxfB %Pm̯xw[+Yc%50F"/fe3IFeibRJRefYBaJKEcD1'qcldvS\ ʭ7C̀Ƀ=['Kڈjkӆ$&6(f3j\^b//!X\c!Πfj0bS!kҘqutf9^jW>U̓-n z4qٟh*X+D֏vg~I Z pzX~b}<Zx;p`IJ@i7KL`?u-hwrk NFw&ْOyv{&Qf9+Hf6ER2" gi">HSF3["*sEt$B=9M`eK}Ֆz/[t9zaJ(%R΀a+\>W[)J_-5\LVٻHn# 3G2O޽qɁVVRFv=Z3{1#`жZMWbX%+4H M;_̜r}yVz>GfW]AnsUBԯa%Ro`+5e7Bx\t9~mNUHfDYL,3Mu0k|.܆V#~\<5歈٥WeAgy([g'Hh/@A6R< 3Q^HKӛL W`FThP휨Apb4rnЁ"%ip89ԴC)́i٨W0b;E^;(P}\}\}\}$tJLMIH Bfs`@ɍ"CI !,gPԫJu0F=kꧫ4*5L$F93e@`&FR r<0 ,Q"z͑$C --ؐ =]WA\7bm F&w_"VseU(2ЇPDBA<8R Q8+<괁DQ13k1> ק)ohL eZkPj_kPY)@1抛,:B!yX0ofM )D(Yjp5 o?kF, p{WՍIH_iQoiWeWA.tړn[hAOr[G™ `כJx8QԢCy}r**@VzȌz;:(kѕksԌ[otA0ୋ} i$ |=֒LW3NA@Y3e .:K[σ6˞(+? tKQQE1^ /ZEvHJMJ5J5) Sr"OțJ5iVۛ5)%x=`zd9щ& 8wL]ηyuɿwή:.ή:n:ɿJ.1y($#hS b0.J5 6i 11RD各g!]BP*x S`H;ƟȽ{.sRv%_Fq_$TʓFv):Xm KB֖^^X6_b@zV4!,Cc< b<6䳒}#xmy[:rQ`L D֡<=s,QH2EvM51ieN ơQ kz6?'A"y(@Vel 6ڰlGoIxryzl?5f˜/ᾑߏ"fR2dVݾ}63"OCLE|v*ƦrMK 1r`|4Oϋf|mH.n|pq`٢evW-z_nb5֏ѯ4`V+(^aoGpvsrzzm}t: g7>BN!F+=9O)rޣ2(EI٨v[^@0;/[CSJ'aD+tBv#0pq~t~Q>/>`?G?CVcc8XVШ^A8*vSzGXjn&t dⵄz4 "7G#Ct`(.h.*ep1$@~3ˮJ? #CN)ˠO? 
tzD,N3jܻ$NNF_A੼!̜-/dUff/+=5sV q.A3]gbQ<!Pol^Qas&ǟUY[(ʻ~1|uuzr9C_5t^Qh[@%D\DRd-"=Q%ƴ`I:*b\]ޔTΕ/|JNw5ebZ^!Arj> b{L+gD9L`h"lkՌ^jJhQ\TѭeWOH'lܣUt|6RRtꯢݖ|I-Nܐ&lj$~ "͛AKc3SfzVKog'BCI9x 7em,r \5ZRzyqcPi.ӴgcjU^<'tT)0\DY)x}t}-*f dдղ[{4?|*a:{*0"s|#Yeʣd's9ݹt]cUsON͖,4{R_2;^쯣}Zw@~ j܎S BAD,d퐧@2$R@X#"2l^I#̃1y ѡءRQP?z8R)&!NEh)ζҥP+5 =MlWTG]j > ȁIe6GwܵMHQɮ/N.E1V#g-cc5hauo\W# -=Z%>%vmWpk Y1-]JTKU/]"|k+~`'jIMRȕn7:鮢~rU_E_=9+o@WѻPh 5^bɩ> gh"*\ ~uR{4p$j)AY[ݶ|)۬/k7tY^90\XM B`Y< ԍTjcv5 Ac7<)zt 8T X[ͩR]uZ&Jyp 7aPJ8£ڥUGaj_?/ʎry Yf.O-AJ:  (B5G,Eaz䂰.u\>J5ݫT R^N0MW+-*[8dțu]WBk4cӹ'W#ӹn]U/O#B-j=Gڝ;@6|jYy4Sei{`Bt:*#@"hpपh⪀{t ~I щ %+v@ |]cCe^J.1y($#hS b0.J/ͼq}׳X !HE]լMCgpfe5WnkG%Dufump([=ߋ\ {zqɾ-7"o(W&'{*ҪCEY$5NY"Ԁ+MOuڮqW΢xPS:H7ɚj[_ jX]ۄ4~YL^htCrS(ҍF[_ jX]M>ɅI nCpW΢k<< #9 mn#֜{Wr5a5ܓ:哗W3NA@Y3e .:K;& eCkRdc8'RCBVj- =Bz:u҆>h?Bk@ )L(RshSCs&>2I+{tYIVM>RYwyx5hս r)%Vݚ{0|ĺ*d |‹݀$;o^.  .rw]Dz{r*nPIR4scLc%Qvhp$I9QrIA2@YI@ /j.E^xnJڠ`(ZFHRhhEJGb&zXTFyVkY xɷ`m֪r֪8kJ:&䄈!5I8 1,P:͹b=f \jojF'c E;qEIS"O"l < ( F\m5B k5|%YՎad͕f;m~_dYkL˘,f޿o Ųm\;4smg y{qؗۼqk_\32e&SPnRzyH2m7\X^X4,'+,=Y*~Bf&IZ_m q,`7K1L(F67wU2oi 8U2`~ZERLLu/rzD$ZyZvgPL)KQ'i1ERNa[&MQxºu6Z. w(.(?~߳įn1տ*WcSv eHJ>o\GG =u;j#_; Ҡ5t 9nԢ'`ʡ?VŮT/Jf.n_<-g1Sd#'_nhSmdmgVżE- G1z}Z揅^f78ϔjhq IrW߭gfouqZϓn>;^CB } <J 6=i} a-Uķ!fPoWH oWH8 ^? '$bpp)r:f;`Qh7/,uE=yV*Pd?dH{%i:˝[M2>baݡR ?Fwr/th|9kҖ{4C{h{4C8eZ/8.y#!۹m]_\x\x?%;|ZbV5 ^m>0Ql!,֦fPaM(d^/<.Dܤ#r%EʼnEVO!teolf_OCd_\i"3% Cr]w[n?LpgvbJ'XGu&7&du2VD\zj|mHB` K9o"C>1ڸ"T<\ӻۧJXsKсjx˝p16F,ǭ78@= q 2)D:DpKJUc!I7mݎO++%wbxV9CX%La!KAI5E#r?BW%oPNVT UEB JŰhD(ETRКcEs TJW(U̥j@ Ɛ|1oF&vXVW_Te]F>Mþ J?&>$KcR̗ˏ~3uz5 ᬽ_./|)̮D H}qp./no5{GMb츟avyqV?S}/3]yԫ`ˋE>wM~Mݼ#iўD*^R3&`UuҵD؄Q[i}jUY[Bg*7qj;J,WbVQDBs3~ I>ޣ55";Moh/?!;?Ή6RRu4EmbZg~=Xq?e* [7*fg䯫Y] 0I ?n1<]sA(p쩘qƿoJ+iCv<h/%3FIf\)@b/`)cc_m9$)ks]xNȺ. dR"+Ck#~xdp)A,A!TdeSռaԢZݢ:^Zd&- ƧL>d>ckd;|*4MFo,Z0:BE!=ew; p}W#]"v3\bH4U@);µ6%II9"e^z.kg4e.׬DqAT0"t^vldsFGeJ<[5C868J 'x$ ){NbT -M9m\;'[5踱8UT;DKfvAD8}n+!"7%"D!ޣ38Yɯ~yݎRhuϿBDOnLa}B"tVcH)T!N nW:LGGy1Ƞd]iLIL@ 7`%2Z,S\"9/"kV-͓wEva,aj'(W g1$yVpm] ,J:zOPmcDKUSLCIGK|'` (-Ǎ *A$źߚ]@ ">4 80u4D,Qw8L3f+wZvrx"^aq\"sBQS51JQYM,JTִESL\E.pU3ѻ:UVV^Q,S$<ES(DS`KTbjT Q`DPBrwsbvX3DJ{J,(wJr T| xRIJl9Zbr~`!nZrb`"&}?b\&Dʅ'v=CySN{Rnpb1'Ipbe.Jj٪1rbO, LLT0/XMcsj;9n*`rn6^Ԅa N/c-qpjzdpp;ϹT1qQ%sR`mJQ2㝠^90{j͌]]fٙNkŶؕ.J}u6]e f m˭(Q .I#@֊g,,f P*^]p'6ۙ1kT?z#pg%(ߧs*+mʮ'$ =ڻˬx[R$iVm%F#- mՋy \Zf7#{Yo]I[V>DQo[#@7"0e+|E T_Mx"qzĹcI,Tš@ $"5> (R:91B)Hm1  ,B}c4pLE*N:&C%c? l`eǠpdǂ1J@,$`l4@e-߯qFw99 } N(Y_~Ckʷ8æIu]{#Z63̶AaTFxO u]1 3$Σ̤HpOHB==q{R|}šFмdH\52#)dNXMiEq^0RsT1jX T' N^"*TZ+EU4碀jHYU9T*KJRP&cRc%;yȟxٹ4f)Vvt)i:ғr'xBi{ߌβ9{o"f\U:y,ﭙXZ:LG(O#RC_zHliwFt?RBAp($y1Wg~J1 ւɏ??Ixr@TV8Ip}҉t{;"(EmC [zh3$OQ2d'W O14E,ڹP>>=% R4/^jnfG]VAw\stM }{q'{r:2w}owKH %RB Dw,ѡXФ^ݾ/m n;Ўˣ!G|8`)i;Tzy;ntcZB]'Wm_ ս_^i" "<S˾vr'Tlԉ@A;Hw].Ab# @R>:1|":j/"?'ݭno _|a?f{g[4^+s3/w2{NW?doNDZJom(T R>?%9"*Z(^Ik7~k3=ZVbd-痢-1n>W󟭽}donOVkuJTNjLwPt^/! 
G2^mQ3NWPDf"UF3U"ÌTL!g ߑ BǗd9gf_oL} "enC;!%>:<O4#b}EHV}3vSa:9]"˧Wa*9PjEu[2փg J]s@w!ᔘģ; $AT+%E\V23jf5Ȟzj'Avl~urBOѩF_V8Mrnbc5\mf\xP&JkdG%-+Cnjj9 #ۗpA4DZuӒ1͍SGj}l'J!dZ>R v]Ȋ>ㄭD #ZoO{˪$wQIiJ xސVzRd9j ,$YŁmAv6qNc]V}y~.YA~<Zppf:bL1=0€N;-/5Ŝ[V:4 v ;AyKQ]:##I1j ]|!$8uȒcok 0Mꁁ\"bݪwp@R-TӰ1;p@Z )z\9BZCJ',?"zlf%.i&sYQeTP fʼBYY~  ĩ*s0Vxf Ҕ1jy/" {rkqE *4,T%JV`pyn-Zh/QfBcRnJ ׃J:,dGcog33/ݿʺbr彧nfoۏ: ŵGtpC)QA7'TVD n}1(&Q}k!B(np##޳?i r= "Ux >.c3y : A|DXʟ*?v)=>(%Sg`j }L@G_eDR }tLR 0m-v`eǃa%&%G>Ѽo: qNj?B`C$z*݂(SV/9jRn.ox%q>PL9zrZl7C}=)%vCjVY^f"/H)΁ˊ 1[B6\fȘT Pj+/%F8kda9Y3]fR$BK6(x* IFS4N,65*g 2)$ c!3EX)6k ad=A!Ck,Ț PŕXSa5A=r}|G{ʇ)tXMtΙt+r ט5A>leF& CmM2#('ĪGTnxpCRbn5$HC{bӞ5{{#}G:oV4o/'h;u'#v$M\F +a,r];<'_ H*j))& =}t3Fo#ʫmC޲ mZpȩ.36 %cV{ ;˓1: o 0l+r': 'ŔQ[[ #3%gTh!M{q$F #ZcWTp p ;S>CNFin$vb&-Cnxn ,6$d uI4*3Iy+V)XQit_Utv[r`fhJ_K+1eIЂ}c+|n؜k^ TTY@ĜnUK?&h!TƨA{ -,~W`7;zkՋF#[W/0UN"sz(!ܜyiq'T5&e zjwh8rfuK*3z.Qx[򨽢]ox~Uz R=f*6u֋+oVWwa-}kusy=k@NnFoOp]['fHְtl;Gy ]#<8c9X'Dwz0ݪri r֢QaoZ4,۞4Ύuڦ86QgkRB8l5u߸wM:k$jIF_p}H.fOW8Sɟ_]/f>?nrO|\{jGg[YM_|X>.yd|g6??h^^<=I>lROIJdXݲCღ\vq\CGY_ëcv`BC[ܸ;MZr@;%B'{!p|MP 9GJ]a%6mYNIx`ʉ->k=K9O8Wېysl}TD"K] >߆Wݱ[]@mQ$0  0n-'PPLn|f&uX$4BtX"11H|whw(Pk4_ h:ͅBڬĿO~w|ȗwIH-2.amA&*4WBYGճũ1V<MmYE )֨B5Ndb%/7N`ΓW]!q&-H -Z7.0 4 Z!\taʡG'eH!?ʰ2 VUfl)#md sTH4q2SVWCkdU"T)^"YS,2ͫLT\rFTn<73 D`Mi+"ƩvWC>|$#"8Zp7~!#Ͳ)C ISqW{~,Ԏ܇Z6qfZjlOOndhQeWPF=4ll" $f`=$ 4{LxLb*A:["c@x\U(Q:t+&R鬵%ls,L E!(=!QjdVUBl+`WUBli@9)gdeټ.2U^L8RẉgBꟛ gh~ 폲y駋duH?'{yyysOWwv<`ח~zW__ϯ]v7?V\?|nѻ+OWŽ}WO,^/Ca/z1GIs}-@*wQE[o2Ì尳啰?/吹u,aAlͶ<n[ޠgeqg?껼Lߟ_I*ko/ڃ; M=ۆmwେy8˱êyǤݰVbR%Lab,l6>v@[W󨐆7ϣZHyYOHt _R4wX@p`MK}]l}V*Z$8;!jAiY >B*|\ 7vx׍vJk@맶!6rjKU[rExY BE"d&p˔`2*6Om.sSͨФ]6֯ Bυ)Mf3+rʖ*YY*&eTQIYU34u*ƪlbCo۱4jV:x''_jnԧϝ:OW&?.,(_}!f9sej^ (%kV;/(_PP 4Z܄EQ`UiݜA*PQYV9/41˅MPz)qQ*}}LnC*PjZʭ_Q;'RJqW 9@HQ8'$;'*F7СqNZO-maVwŞ\..M"W-7")p6ZG708o|Ԥ[$aˑ-a)-"=n hL7Q~K w^]xYy Ww]>-=?o: 8~锽u! k%~Si#',KqV~,Lwvgpż7w_]ms#7r+,}$eRxQRʩo]%)eL I޸̌#yEʷ8~dWYy'W =^gp2 a@(jeɐPS_&$U|b$;݊滛QX<3G]==X1ol}*Y]r43˽HN=YͧK Q 5Vm] D| }L`awׇIzG33Qv޽[ipC=jpV +n1]79즊z瀾1baMoBYׅ#ycJ~:5F;=Q.?i%$yAXqƠN!b*EL+Hac1G W-=0TEIQFR)a#ȦVL Je; N-,*G*j CoHs?N>7C,.hh k,f$\@OʿTei~#Ԧt1Lô>M@ՙj9F"";{,P[[=pn<;vc޾kzWzYzVrx"3:gfG `|]d l).eSagAWyYJo+*Fg(b}e ~z/=7144wc4*:ݫ{F(`kqܞ70S4: :cML9&I$HS$Z2JƐ%H*Bfc|% H B!˚(a'_ֻﺒ]p*N#k),X5SI5*ĪX%ɯxFbY<> ‰ϛhaEWq A+Î"١Bn6&~6^3& J [52oJvA0h Rʘ C| h43:*9 *;PAhf|'Lth7 _O^6fv[6=>F_~.4k"~~=UII>z<"UzzXѢ R㞶o6ӏO.Gv14JBe|.l(2˜1V@9RY#|w ;3xhW=8Y-WmеTJ#$\DS%TKj_)PUYݽJiڧ7psamS=< w4@]͹kNMPh65$苍sw6ς~쩣) M߂'ܠ8M?O=@ ?+gjǑ̟==6Ǻ gX=G/W JF0)A_8j:{,usBFji8h;?/0ڲN;&AVfHic]Msb PG`0!q׮r: x/Н+GD)vgj9i::s4I,9`0G wX|x|.9fafÍv0e1;;bߦ΁EeNy_ gmGw>6C*B8vdSg v>>j2@ː@E+ہ})(bp$I2Qh1RL!> 1-A*8ebRҔ Fe#(bHj^.AKbcQ3NJ#Ҕ"4LRMHr.8DJl?=Zʇe٨Dg+y3rvͯfݧF3)4BC$t dǂu?tW9=3/ywtZo t~xzs(g3ĄP~1YDͷEWO BbN|1m.&B?>=X"j;_'eu҄V`DlSOC_JHԙcT7ƅV6T0F{nXQjv+;oL9YEo֠m*ac=<}?zN뱣 hD#fm_,i׎m_.qb+hE"T8M$̧/yR> }J?^oU_wL'jq"pcڄkK\*}IB'q/ G{QI)asϮpU9JiD84>MIj/LhǀfSc:vucVRg!\% !qRE*M4WHq%i$ƱLl%Wdhl? ax0];5fNdd֋m?J Nx=Ӥ=Ut~Q1 *ұ@Z3siFdKd}T@J(ߒ%[ fՌ J{=N#)A QQbDz S00N`WK0`mV͈9͸4*ԩƓSUi O*BE T0 L(NRLb " jĎH Sc"c{|Dj'dE+֤g_$w?=918h02=v|1 .+H0D8$dgIB?P`2V0O"!lɃe @Fn?mη#xMG)L%3 l{뜙73.q7_8mϛ/e .U j>]^]^/@y*Zw 1̌麕gs{w 3`@YÝONBT+Yw|j&ÖQ#8cʩQM]BmE(hp]Өݫ_Q~hA6LOa1^Ctve6XVŪH~3JPJ*=ANCArz ;S2κL D'hO={:16Youow"I]l$(=^ ZpQSx2GfZ@H9lqY/2nyвg-X̾r%Ń{@?huǗū/-pNey!H_*5$ry1[, ٝrV}鈉=7^/V4ظz[nMNiLP*u{ ܻ$IA;r퀮엫1Vכ0}amx+٭"?RBas2rL<1YDŽ/9 @p[OnJvIӛy% y&dSr$˾wCX1xPN;|[DDJ_nxޭ y&dS>~Dh@ }wp e0-%nޭ y&oSnxo08 5 S`grȅw"%JȅVZ}>._IkTe>zr!l/6ڸB^gԯNh}{wS!_?}6`! 
-ߔJB@A KXpyhX N@ArHP/1VᗫeT0%Pف z4 s P7kC༜my_U\IQa)4(Mi$St2"N iL#iTG:!ɀ/ ^P;N,LutK9T)T)/tM68Ra!DlѯUD ԁN;,G\JՂs )#Ηޭ,b@ }wpa07^ڰwnB6fDnίϻ|Ay~aF?W.Rqy7ԓvQ"ce0SJ RDaQ#0c`OblVb6 +n1]79즊z瀾1bak!!yH`ثecqf?  l ѻ6ΡX?>.km#GE˞9a,^gq$Xgih"^Iv6KKօi$+Ⱥly,ORY>m,>y?&.}֟ld(w=qvABu |xǝBJpAYo. ʱZ@{|9Η/8U8ؑdfY,%2SLnjXbSI!IUiLdm,xAVa*Hȳbj2(V4XlCP N's*Ђ/y= R$(j wr΄jf BCZ(g$' i bFCtH)PkNI߭8T憲cU֔w&Fu)N,n$V n4*CRe ^CFшdOIr,(P+F)F h2*bxMd'3 Y,[ݕJca 315iTTa#Z~0v[p6J-;[~* %k?~E%p@{ad"dxwg:Q8|<꓀ς`XL /t%J? gg n,ac&hbxVx - ob JBr5CGRN7%>/6Ňvx&R]FǢHY{wiJ@%3%}ih*SYjӰ2a:p,jV9c" [NL3~E婌Q;Y຤Dk2n?Υ%qTsO]$>L56H TT⡤FfPʭ' sۂBB+]WF9k1dȫ8<#;c+Q`y](ʩ.0aC牠[S-)ow9^䃻ndh.uͺ aW_UT82E3=V;X EL {vZ<|EޮR;s=SWhpy$;*26K& AUAT}G֟ǔִ[偆j:$;e *bcgbOZ?IO8o8Y 5i=vQ?i[%Pq QR zB Z[܅+ce 5jO_/~׫PG::ۥuXLfyɄB PZh"$%$_$%S@W `p*; >k x?*Jn%tJA`!ɑ3ZD!߂30>u}CZ3\6Oq[6Ƀ ̈́3<Ͻpޒ+{_>תMRg~:Kdz㨷ͽ{(d{m0o| $fDrd${VNzuauNW@4Jsdқە:Bbs܆6JmfΪxF*Mki֜sINŁ6nܖlOL-,W?Otb"K#gI#*`k-1$Å (]} @x{h8 Bm=5[EgZ|hܡwbTcyf+~.wf5Ą'&jTwרQU =TFLkI8Evk4*PP[1p:mB!S b7MM BNw`d!`Q !P&$6$4 6B~[X?n 4 t ]#6h fCÐl0JQ֮ZjS12I 0^mqbJ%O?[s}14ӱ[Ӽ$"[d/W[y8d11, ,KTXbMFn"qlVRJS&ؘ+Hr 3T2;G$4{9Xa8 +E5=w},,}^eY,8b?|n>L}?[w ""gl2_,Py7NfWzGq\Go~TtLQҳ, r$A&[DAh{q7gmF%j/Thx l,OmLp 2P ۡؤHzªAI)(7-V/RJQ}I5SRzRkl02& л}T_lR-aRo"z!ؤTR9]M)RzR _7n=T_lR X2H)J)aRJE=Jt)ؤqI)=i)e:LJ_;  )e:LJWTk)=i)E R̽}R&9@ yr3 ѥu9 3)=@&ՔaW0)~QAO0)ͩ)@2&z] z!IiN5#)=i) ϯ[٥RH5t؉:m)@4't)h")=a)]UsX*Ksl}K+֕>BQ d]>rS7{P,ׇZ7F:JC~8柣5?X3{eGqvzMVm\tWft"cF Rո^"۷*~r~3VDb㵙U=e)riS=sMPKwz]&}`~ b~rtKh\ͮ>7 gBcȷ ^Έ~PrnPnUP4sWʛ p+{_(\nbGtIhzˡ'Ut5t ^S3Xe"È$sWsicmMWw-p6˵ eB55QɈLzsB JMe}8[7v 3{n쫷7Sw/˕W/h}\G{FgEh[F4$hbf$fw=v!(mnZZT{nn3 6Bu;(e!q(KlQRuNG]P=A%35%[ն;Ztκ^iC +\G2 A:,g9 ΰv5 ފvV98l4B;>,h/@je0},krpˎBϨ4+a !,l /$G jP}I5gzy5>KH) ʔ[PUu~B a3'!xU@W<-s &ftClo|?Vʇv~ԭ_w+*9.oDI.VáTȟt>W&~u*"Q6ѷ-LbimC8}{tG%?{WF Co1E>h{6:vg  @Y!)O{6oHQţD.l"*_"H12M2/&fӺ ɔ\O+ca0Vء'8;c",_] +APT\ez^.p8K^"WXI)V|R@5aNmQ(E있P0 _ K%hoY%^bANi{+&Qe'lmajmՔOdG^=٭mK"[ \jN= I30*+A{?\?-ֆX,uQ`{77乤4`|B*{1rSP:~[Р{&Gt{qNJuݽcKTx"ʚF+RÁQD`$rL/kA;=J` \m1;..<ND"ǻ|K'^5T Dj\zAU 첪tN*aZs08Rs0y޺}>jPzPl頀ؼm; ^:=E!k&6f+~ö]+AQ4~F譎 ĺk;{߮}[2hf uY61aUf4VذJf5bG;5Q Kk]'vE]$*WL%V|꺶$sgR>\M'o"X(ubiJubޣw&T+%lXNQVY1C*L 2CȈc\;Jv cX]W!$~w@VcŵwDAZH, _FxȊ̩N(^(Mk)]ti Iى̗V;&Ll aʨ}W!=[C(@ΦgwIBN_lgVl$|F.4U0?[=¦%vc_AԸgQQUH<+LR;Oc}g~ǂ٬߂D0A.zvXtӄWFW6<7 \G@A Bjэ"{;D7FVcjC/B)DK;ǔ(ZPN#-4G+ (+њ` {NBʘAHs\"-ҹSY&U)-Dh!4cJ›"!!jxK'YrFE5qpVG -TN!}9ޤ-m=>)KR#dwknvr/JRHuފkPG|ȾvI>筏5 iTG"d$q$zCFOd%F GϨֳk%'LΓD'NL >|P߹73b7pQvy\xvy}8q$WM&Z,8r<%JH|\R5ncyRnMsXo.><1YO BV-b95Qv"P89~x\o#1O7w}uX%@= JP+@\LyM 4R( XUwY&ˌ@>HԔI_$|NH5zkw.N6}HʔL ok}}pMn Jxdl>́ a[K};w~=\]^{[roHcKH/sOrC1RDKۅ选nqSV-XK`E&slweqZkw6~G=\~~K͌;S|tQiTF.O8~5As7umsA* FM V1\W"n}j\$+P"fSK޻X%"oU8r/W?]mE>du AWap t&JmY0a_>Â,ܞ=Y/} X-[I'@&HBqm#SғJvSZAA}Gv}Io־MvBBqm%Snq6֖1)tGiշvk_2Ru!!߸6)FݸiVM,{ v$ȩ-%7n0oi!X/GtK3\9HTK5J2+9uQBrs4Q9UqsJD<nQԘ!t_+ɅF'\9fI8ӈ̲< gz rB> V BIcs= #rY)b.0o!H"Mi˹~eyVaYxnN;0\yAw\l-@D!c,Ә`Nvbfy;)P ұ`V+;Ŋ kf 1Y Yna ˼.Y+`SSk#ц`ፒ3`m΅•*ls©m_cKI;XadeqIBU)F+,Gb eXROs/6k uX /etƙipk yU6eڣ"+0]P uhzM30+,VR# ΌQ㬢^.,WpD#L^1> f*rHp`\jĹ&RsǔK K ~@Lcv@JaF[1AS+z+%EGC2xFfځsf?z",7;s6p_%C2OA^Voc;b~_> pYo6癿 (0ΫbPHDnI.72W?Ed%*D2X'E=ސ}HbcˢjnQ|:f^-%Mt>Z{kٓXPX`8i&Y5Zɍ52-Fq h0 "k@<93~(ϸ@<8g:/.kWלf ֜bTӏo;9#ʎU$'C7'?M%n+-W.nR4s/EƬ7YL&ϭUafU.Ob./Om@Aq:$ D`_&LJ]bs&kF!S@ |{%,X~VHHz}RSh%JV ]4;|{:ukm/f0α?߸ɗcr Bs;Y7,i&gNmmq5_.j3]X1b rY ~~| ?1G>V#Be8M~XMI]a6`yi7Y۰FSmr $6# UHaAdq1#ԕ,<a=(@$M$Y闰d%FSЮS{nRrB:|Քiؙ:wQ X$Z){@ |UZi\}ڼ4hZJ@fYsYR)4`ߊ-]jOx@b=~iR{"Y|2FV-_ r b늺$?T75/SpŴ6': &ήYgD=\S֊)0%pكe<:{p&xDu(Gkh4GȾR qA(tH:NM@T DhTܜhU;OVsfTfqU[LO[-6\1|t7FVF㣈dqb&t<.179e6p|MWrBn.e$aCx0ḵ6̣=X#STXLj3UZ,DžVN!lj1/1T۸4NK4ع_"w;r!Hn8}37FJGhT 
#-H¡htg+s(?wW.>u ]QrswDonDWOa2NpDHgz{qW1:K ※nܛF#%u_lwww(ۉǎX38IM≆HJ$c]tJHe'Հ~HEJIŏg&u['6v0h&m n>^w[SFjA`TodLBKZ|+3QWpkV: f.YeZPDL'neգ\ ;f: _VlvA'PN+gQh%պG!4^Z|nmA—"!FvB[_Hso8gXPά t?1ˏ3 !(u8}zDmX˔Jz%쒗$1L\)9L N;-j-@z -Bˀx/$4ed:kQ[J4 `;+[)&LV BP:T8iLbkXu^*͖go>O<8Tyfާ،?=|~Vz`^P=?}#|0 _~~xp?N~JgC{\/'/vs@pG7Y@d,.ݻ}W5Z)N͏}qӠR6lyX.kiyA,uKrwsk~e8xqdz:toBj*w6&~2f\6mwp鑋E)n~Rbx.~H+$:vT:-*:&FYiVnMH+$Rd?IZK%?ؐ1lv_=Z RHO5#R=~ 81Xs$M2ffG()&1?ebFG eʌzo`Fhf4Y4-rӱ:HFIç4c]Rɔ)o[N&+!mٌ /%'n_eʄњTK T RQ'}TÝ_nBB^&TDvKn>?ՊnfN5Fa{&^-ZfH]3a+*i܂@|Аf˜;Kz ,E~\.U X .M%PR*&jif3b4n!Ad 3%GTlʠ{6n@"8?^Bkz&n%6ly0Dq8Cgdup BU1*l&|4P4" 5sXal.Yh!v6:>v}-P*n;ZwW5ɇz*Ij].:-w8쮨W/;DSˆ%l@}:A=_U%h_nF<"]/S sq=\šNB^ SNr(%,(my 'MİI*1bR.tg|.Y3eϾoC⭵iI߂;D q~b={`wVH!ɮjv$·2@X8y>eA2͸[K! !}vΙ+2жiPJI$Kou4] ÿ#Ybtgb/3 sf:q\D|C'Gw^"`nu(eolFqٲi7Aa.yeU#JI՜Y) ww?}_<؋kV;ͻŠ_,^},.pxwѤW+]䫌A?cL|r$M铧LRx+(r鏗PiΨgœW`_5OL$zA>?~~<=I?Qb*QbFYw VV #%{-/X樢8pYiz4KǺ$^>|Y6J5^.{(YdpOnzuH~,F"-Q td)Js֛,6@`E-k,~*u8Y_ŘUY_ŘU9f]^Q y.T#s\0+?uNhF\)HT^/JZ/[rWykcJ01 b20 ?c2LJ0V}N{d^n4B9*+'~;tߘb,XdS %/!ӵsݘ9rYJ/cJ%g ˋ}{@?} Yμyc1Y+徱J%Q1~?c(6\eFÑT M}n:M'~ޯg72}ND n.nLnboVEM@4-e)?ҏѮc)@y>sYFPYz&Sf13;; Ii b`2/9;kx!+BF6nTL?G5]GV^nx@s\#, 0{`Yv-9Ӗ/?4$T…Q,ϋoƊx w=x~MDApTkUp i@Do j׈35lFH:'):Keaxme9hN{ѴQ{xJat逵ӧAx٧?6J mWǖD݇uÍn /rm9_мk{OWxܿᤳ3R:wlrJSJznyK4+"6{;I@)c2N !x"˙3,DYJ*o^AAGHn̩pA#6ۛ?;X\/W"4Lɻנ ji8)/ mbm~{2p81JvIRp#^]N$Pɒrj8v*UPy9)0!76 )$' ĞE@pR_T '{ 6|ӆnnYzT$ٜʤ+UW>W*U䔒C_GTfrNr@FA]<*s0Z]QO\n'[vHÍzU:ҌjSm!.Cf[6ճФn6 $s#F R/Δp0hkrxź"6,E|:gH}hc*+ٓdvC5& hAA:*@6B""0.Mp#Ģe%mP9?ŕul<Ǐ %sF9=mh-.bi1'nVXD[y9p_np=g(兿uO BwX A"A$hhj*sJͱ?ž W }JuFr %SFOXA1>[JTh&nV*4fdCl}u-P.\&N40\fZnWm@Z c5~:q^ZN?hõ98{2iuDJw˹{̢L b0%xC=jHtS÷vt!4bL2"2NlmeXhj"Bs{-9Smk|1͏wex/S:M%ny $#iz뛞lMRȖR@ז4`sl[ɶkKfRh=o!xzk]/08_B ӈ*Bu?\KfVE+_)e5ڐġ;/ipAHrŕ4ƀ ('s'9+LS $l-s\E#sSz&Cg @^O.G\@`u4La0ƣu(bK5Bjx =0[tM'> aϓcZG-g^9A$CgNT.-'KDZ>BL,.*K)gv:d~{Tஊ,-] dMIkpۿ{kx|}_(V E[v4C֊ќygښܶ_Q V왚ƕ)S] սډ ՠ[Tfb";88%κԊ+>f8prl8QDB{Dpb>PkP(F"Xb `?9QHe|t>vuGlLW=/K/99nBuWdo%zPއvk(gyygdA0|ޠogo;9?k[e׼I[ d#9w/zhcฃ >q[M<|j}#  wu0$Lp.]"Z OjqC3)q3KJ][.pXܸ/j6ڶMP,Kĺfyte-\h'Qq~DƉMYTLXfϒ$B% ٍ@2#(1JFIM(C *|39T86#x&Bp$K>Y?We ! !~.7_ʞ7X%O_Ӣ5E?=`gzl YL'_,?l~:0-*FI}29Fs=LO=*r.;Z? 
~9 8m[/V2T29d!kT3| OwNnsJ`ݢܚ>rHpܫ8Hk̀xZҵfIO:Ղey]lp'џVm vB'zwUd;)9۟>>) J(1[0cKj8U1rb$%B&XpZ+Ϧ|+Hެ% .ʬ|?+bg'ca6}_O[2eVOx{W$ E*wV׸\t݃}![?j͖lq{7 R*דJS&sZuSAV%;:`!1oXqr::G1{>:E Fm's_AKmCJ=ŒڭV=v޿{6fP} ߣ_]{OH1qSX\)p.*_>|̓ṫ$L(cu+'MPi-∧1vNK˯e^hG=PB\k 2D+N>.%T $v}ߺtm's7a5Jgks鈪$(14XF)$1EʠOކUȕ< }8Yн]y.+/97[̪ajJ}Slt^0C_~!/^UbYm%e7v!s3ݎk}m6l:AWT4>MLіc(OF A pC?.$R 9 r8iGBqƄFeI0 ktjHQ ,e<1ђ!$&q\uQRf=?`0#F_"GW!N A"M05&!Hvv(̸ǘam]M'?li,@x 6Ogb|dq~w>l;&&rWA62eY"tsK]$0T~I>"xHٗ xc+ƥ~1rpEN˦H"L5Ln_Oh5Х8{oQY3oWY>$練隷jrCoOM=EC?}m{ s:Z`07't>TbZ(C OGQ}Sz/4 q11B"i*&jZsN1MXN& ')pN*DpKf?V^iI?:0@f\81܎2eJ,L+NubevL UBDB-V"fgnjAHI:2 EP<@ hDHJb8t4:s ))DkiRJ" &)Dj%aL]t% g]hGV߁stI7}~"[8kdO ޗ68ڼ+=!w/.eCj(Oe^XXj.+r$:8>f] lfK,xL[+ a_Lh9+4D)U'&o0U%g.U efvOlJ-T2^W5}vϕ/g2l@Ż@Pt,F#j1J71;P_t?#FWu!xmN͠V!^k5@pYA4uӠv4ͦAŅ#JYnTjBG6 YB^MהB%^Q e9{-1RrzCC/5{Ivw] uUΝrοGfl'mR.}q;͖w]Xk.!)Qb o'sk^355=E)U竺j~p^Wde?mcW{yzoVGZS%mn!ywYtLSHObՐ|""ScC]b- mvr LJmk|Aڭ E4H.:8v GtBۨEVmS5!!߸):E7\Q\Ei7(-<n*> 񍰭7IE;mtqdg/-@4GHT1T|^K=J@B nsphYg\B w0 EG$KhQ?En}{ 14$KS r.Rm\-[Rчe}6W;}/_};hRB}c-o8٤ݞ;0[SWѧ`kH*Y)MY^ (tz|$"Sө`Waөjbԅ_k$8vVH5Վ'hU $TC]kawel3a)AbF-g޴uNn9:K5 !߸Ȕ j+vcD!hP |D':ڭ?rj@g-0P 6jn=";-8˷ր|"/S̳j9CS ?=syIWPy =7uj][F#D,H"-110-3b]WLcIM*F T:m A9^Z'_tGǵ_냓W/TMbOtaJg-e4c-: N _[Įq>"^}GuqŤt%yyM$g4-j)lFqlbȘbܚ}OrxN?Ԕ |%;R6][EnnS7G ZVw[м?ƠQܽ]X":+zaw7E/+2䜨 yԍ!?pqS 0 ]t;xzwKz7F#5UY1 9GsU[#!q\üsV&0ŭk5x~wܗsnAj`Jۏk8SgͺߙΧ\ G0=~x=ՓIx):h@:\ .hH:u>}b`"٪;g#3muWY4Uiʁjn=B*Y=9m;I"J o߇N@~mhxax۴s(*:5'Cz  O>}MD jBZC{[Va"-1ӢU|@4SVEèKVQᚺ-?&&SY$~a%Re*&,{P% qiHi 4)3-PbTlK|VR ](WJTEMf ?vI 3o8/lcuխ났[TZZ1){;RfHL׻wrLzY\161u;ˠc*yx }[H+,}dCP'3W[Į~T3FfCʱnkfHypH:$_wh@f_3;gSWZ_:&@QvonRowÍx:=]`O`kB3-@]B4Ou5kJ˰xuJ˰xCiK0v]iՀIp4h}I!Bq4sJPg6 ,GS%(7a a{9g^h| >k10Ȇث,-޽y]u2ɝ1r{=zl>g6[^ϊ^Nc L*IDbF1FqdbL(|rFy[BސoZ$XvQCYsة:{1><цw| p!Iukyo!ycS Ā/8۫ s0ˏ5 e)R.5„P(V~nkl!zϙÃ}ܘ5 )0D?\.ػȊc(ikgE#`Q̏rEMbq^}FJYGˇ{H0L8 @|jrk7hV‡#%; {!k`{o[q,7 w.xgmwGN yN>*L@Q]3WpEy=-u(#3BqsouX&{D z{3Wb@Iv]c]zQםXɷh;^[a۩j)FL5lKO[J1 R -:nzPZkzғR¤^xRJXT#YJOWJ);P6RtՃTKpO[J! RD)IiN5<㟸@)} e-qq6ub4y+B҆-̧nw1Rf`2%T&liǓ})0BSک0)sFڤ b(f86af!@ `8p-#N8p-#ѳu:i-.C> )FL5’J)!(_R0{>rAx][E{3ˆjJ7n2G'n6Bk/ǹX#L)c7@.JyMyS-<؝x.7`MBچ SJ+ߍ%N_X E9f2MAXi j5'9#QеX%AR07Ayj "fvC;ER:U…Hw/]';~4 ޘXs?h^C."aP]u#8[W\9h%[WQ_{BDS1:o`+YZU'$½f])$\Be'vvuh;ddՁpn22Эj#-JSZIFmMYfRXHLqdX&qB\L(@4Ԥ%’SX",TWѬFTcq`Vj\g5N ٩_іA^x&HKC KAT#8N:yG\ heI͛-bYtgW&?W7"*w;2BzLjTF xV++q#\Z9߬W=}T@t=Eng}ZKuop_Wl9׽wn̬Oû{0;/y4nה&Ѿeݡ|umfΘ[a5P/a $x q!lXUECRg/a:KM7D`c K8{qa C>ڣ͇qh_xݍ@8F/R].R]\^n *td<яػvY멥iw,7y-F|jZh.I,cex*)لxE- _M"UR"l<@mX ݛ[3XɕwkUfa xdoy]TJ|'_ L h)֘ Ji4E y!k;D$%H>Q2SYB^pH*kr˫!8"g ;wJ9\Si{SL=b[S kJUE*j4ֆ `# aɉ.Z@&>zU'Dbd¨P a?#fI!Rb`, h;^`b@7/ 'C1͟qm~{?w~ru"}xm01o:YW1k}@з.zx:q hxOrH_x> z\6rPQg䢾QF[I%#&yfe7:a/MEh&]KmwM60$)IÈ&ĺQʺo]uu'e1}"CAxlxiD)Xv"? ; i*#vUs1sxlUvAUsнjeL~ò(; _ Gyh( 'Z>6 oȷ[ *#PIvِ*.B2(R!%9CBqd6s/./+twj/͏#osbc,C.wk>D/cWfJc I_uʬy 7yWQn[LH&["95n5~bBwMq.H \?{n/.2cVVlb,hmmt,%\%{FRMC66٭W*U7|C5)]"QȕR9^Z V*k* *I^k %3â-TNI[J?Fqz1 +n$ :f Ec[dyYch|8::fX;rŌHzf!šy9b\F6xԟuh{e$](~\2:úuh~e . 
%&LXo#z34OYׅ7@&.![Zro fUYWiic^aFN4?%E7ΓAQEb k-GJ}0L%^cyJ .hgw˕^t/!4KkQ+)BJ^r+'TGmI(Ki9 aT-Qݞ@"\sk;ɐf.sJ~{W*b(RYF:Ʃ5ou]0!*e0ؕ(@HZ $oFO=LU<^ƒ::~Fij6?β2 W LŽs/YT=hE=kE>Ēp%fBD!n>I<7H هxF\{YIfA8E!~/ULhsȳ7˃D~8/nm_ײտY4}f#soͻ7w?U_/i PGX@Nq =Py D\qEJ .?78{N w`q˛ۯ.!/:K9^~}x`]bǹtͅdL:n>gy$86q#WmqZ٦MdJ tբy1|%P"m׻Aѭ lk?/;X|Z\v<$շN#z?4|;HŊ{8ib /˨Tױ|fAQuD"R;}R|s|PMB(5J2eLh&rIA>b 5g '6uqgMz?TÊOժqk;5x$ѹx^qTsLD-dRv;˱Q-tJ }28e3-%1?ߥZVV?F7nlj ^xn^&\$ws3ᡬn]]a#l ~|s?-kȊi2we.;:qKrl^?7.+$䅋hL1ƾvkOPb#:Hn=OFn>ڭ y"#SI[;{iB햋A~#EMx<%=vŎn}H 2ev'hs޾]*AlA\-ĔiI# N$VsGƸTGƘ7Ȩ"+[s+5| CCEܥгm1ZS~| l;$ky!"y!r |'L5=Ux|F@G |*Ȓjԥ1eF Xiw]h5æ _"')q\w۾{t.jK~UH(CcCfDK /D$/JaDԚn8.vHxy'y׮˩JCϱa4 taQ2 0QAY1`be-G^J( <Tr]89rܭZ\~Ui}K+ݬܝ_Bs };uy Xj?5K3N )]ժ-9LQ !#2%ZOnZu:57'ݗ ^!Q";Tu%[%_BKiRd=ZR1.٣$(}ZytB$xmX8y8._-VuJoMͿ퐩"T+lí?>(|)mJ&-/.9h{jBgɛ{m+j%jC2 r n U2gGqp? c'˳p9Apϛ- Ŝsf%nm^^-}.o Ȱl^~"nH :xKI<tޗt:7G*+EH,ZF#J+aU8cyAVh""mFwC@8 '$;cO%۸5";N#_1%Qq3+dѶ>RZ!0)1nޚLs>R(И0cA8މ}^'FM^iF] }&[F{v>' hSC%busuiqiP:Kmlˇ`[>m[6/p_/B1,4ʂ*+3^ښWXmQҳ [;Q=ndjf }tڮbO03-Xm~vKP/HEJ3Y?SNDJ,1)& J#P,:F ^0"H%A`y{|Co(NfýcM^܅QD18TZLg{\{,h ㏝4`Г m B#AA{̭J ]T=|k׽A=2t@)#;) 'sds"5C8.4=75ZVӦ?D,n;oF=<ԍ(7'2`a~5ڜ g)fXʉ 5.I3Oˌ$jaf5x\{H-~jXSQAG5Cĵj!yG ;dak^BT]:+[3JbeF ᤣ*LhʺH*dYJSRa=AUg+F)Uw쪳Ts 3@nHaAJn? d"*;F 35~j3LFʀL19ڀct>$䅋hLzNi7[gsnĈN7Rۄg/CѴ[~kHևp͒)s_^Nr1oh)oZa页[~Hևpݑ" &gٸSgcppl cHK S%V) +Aa*0UGe-Ğ.FZUc̸PF0"3`DBPf &LZ-2+!efz9p}ү(D,tlYJ č2{+  Ȥ2|4|^4b)~)vUG O[운c3 @䌁 z:rzŤ&G+^\< 4i%]AoxnlYQC-C 84(V2B-5wˢ`j %N)]+3Ѵ朘4<Xj%MtD!xe9+ IЖFO8~hv^l2cxvo҈I#Nf"Q#paIF~f3`>C8'MVs,&c8ᑊsҸŅA5^~w_?=:K~G SLlefAGPEEV?%)=#z> 6]}Gw"  @KnɋʾwNM@k=^T`sߊ Daw!9g#WF"rƄAP ^ghJ*@QvŲ&p21ch.s6IXd7RJNtddt+N -$>yⅪJ)$S %Q]J1fB*aRkɔ- eLQjQC )ƥ4sL ?{WFB/;3?z21==OU2&)(RR(PUTG˒BL$H'v5 QK([Ҡ(;J@?}F5!!o|B!HsdBɻ>9|~.G=]L_QmMogT,],($UlYG} "[Xy"/,?diV/ft-61pAqDPU˓&K냶ϫ%y1ؿp6ș}f VD0gP;JK@h:d8@35 Yܭ 20#H ;% %>5_EpM~7.@+!mЉYJnY0>O>Is-x Qn~'*7nFu@aKc-IҸLshL`Bh[dְϒNY-,%[{5btkOt#Glž4WК\ǓMp)p)5Q]q mk4&w|/p 0T*gr.\`_Cͷ!>',"8g_+ R) *NPq[%(%X~b\pڡB;4T_$o߳ypӚdk=*&%D"I"V+(55gJttF5e} 6ll[=0훞mqB/Ut\ߕMy[wWs @v۫lMlκdfG<:ۮݱbUrh w5_쑊 1{(c7gM+vI`rͽUǺ YJrgR=5 deHK!=s *.L*PJq*CLJF-N@uXWN*ʭIV!>qqLK8PFTK)J*c`"@> UgeeWH*S;ݺE 8B1BJy /<믈=/_QUMt9u=yn=H#m0uӞ~Tə[2 ۺ0g3節٦sk# ǀTZΆa;&X*fY ތ:Rt/)HM-冕Ý䂟5R3y?(&\Qk,tsa.gfZQ)+\/:o+eIV,qgY*8]nRZN΂³!NzCZ4O`Z/4@{ nE=Nmzs!Q?=筊|]q}~Bf"ϕ59~=G%J<~~ͮFTm6)i?S./d:[0:%~ 3pQZyf観[d>3^\76DČa/nZή;P_|$Nhuz#{vk[4[Orm%rRmGN^+$ЭrFW7ZU :T\nﯶ>c2ru#m_GraS5mTmT:iO/7$`*cW~K$t@; ( OwmHgn복Eplkse%XrlRGp ,F͡qL))cLJsA, `Ri׫f,'.pX,f)F}:j1YJOZJeN=k;JQ&MYJOXJYWn7ץpRyFm g)=E)S&k[Pq$R LJQù"ǩK)2)P[|ΏBJ7zu엞B/dХף 1iKeR*um72RJ.zԨKYJOXJ;ZvWTاhy8G.?_ܹ4?ȥ//xa՘mML/Nh_櫃irĺE( sӥ=|,4&=z/{;TIjrfA%~)8:.BL):8 DSFj"|+T3Ϊ("PݫۺAk,51Lg9-LiHh= x6Yc5P|n>mmeGQ`B( rDpWOU Kb=D[f1G}lʞWNܯBJ1mD?8I~EZI+}`ZaB$ЗM* B 68N4g^rm3߸ w!\g+AHRQMQF(cWNdQhqvS)5cl:גǿ5~ i8! v'V2bMDxc@I%@sYq`D5En7r4U; >/[[H2WB {KFg9u3cQk4DFiC zJ#$G8-IXي 7-]O( ڹ8tAngѷè]z4Uīh)9I2zlK ˎShz릿x8wy)XbVcE`lk$?.\[3]m ][Q 3~ u Jb$qT T\^imAݛag$(RԮAHFgrMVxԙ22fLpE/[@Tj8V/F"5MF`Pǒ$qtM>8) .fw_!+]v crPTWU ~+R 8 A}$I䈌Ug^تxkhy]ZFvjE ֒o}T(APp E`Ī*I.1zơW/n򞥌! (ng}*e\x"LztTTktTt,q<|E4TBJr+~UςF"40u6/B^+kq|`:5wLfBj$(&GFJ:zHWe[jhZHN'<$Ppᄥ!Е2-tEnUI_[dϕN+ `lS(bk(h#xOCFVD$ع@AR])+D#!B*:qSq~In!Sœ\}"uZomy QB/"gRRt=->No`i닉Qd.VȀ! 
ƝHʣCMjg)`5[7 w 0 I3ؙQ J@Pݠ@Y~I]]_fVVf_z@|Y9Ȓ)%GBSy1Ў7awi|t@'DHk' 쬲;LလRfE0{R)_'z(=k$k=qb2"t\B~ЬŶ`BIv&Y@e3Wy$u u`r)&!玈ed [m+n <2)Lb-\ш4 D"a l+ t 'ۻBLCp fP†4O6`kH <cVceÓaiZD$0:* gRg~Oj;Zb D!z$勮["?V*a5-dz @-s9a9/bK5{W꠸jE[)4˧Sne6>HEHt>qjwcA_c=dഡBXp 8D Ø@$p80:h^ =S,D5}7|>dgǷٶķnHʏIۚ.<_z웟zj${%_ڣP',"xcZa}.| q rzKpkl?tpO92 dw?'f|Uj#:=xB6q${DdLt3%a.b,D,vZ簞qRDFB;(|r[W)OE'9KQ:=r`J+uZ?]$<8( q"22&$>sM5D سQ]s`TU^Mh6eq2Tn?Vlf; /Wޏj䁶y156Lܔw@Ƥ|v[\¶ qmL(yfa縅2bSTVv|kRKf ˀ֍JApW[7WOA]}:YŚ7~MUx+?+m+1&PjuYЅ'Cg*=˟j,4Tmщ4%j\$im ݖIjNlV-(P:$0.y *4˥FĀi%98-&h )[bd8 vj/q߻Y{a:XSKzd&^)7C߼\ry\~pg/( T?W ~x`({f#ӽ˿LȖ: ȝ(ldAynO-zӵXL&XA X4c”WF[]a1WZ.?9f L!\p: 3J! <:i$Yׅo~< iV91c_f^55-|k`$3=G.͉Y!gER/4C,H62Gj;>$]#tNDR\BV^<$x{5'ˋT[\ΫN"Z{5",9Fچ\ܗ#3ꎋr;"6I =d*dhE{/!Ɏq)l6]c,?1iaB%y^Yāsi q멶 "9l`FB9 6Ѫę򒝬\`lbN̹-I<M A&"dOs[/c<-uk>ٽƨ @KEt*d5YD:t`J1i]]8 b:Ku@Ȼt׬8^^ LU恽\ lTKﴋ68<w&Ӱks/zo0_CJ3ln3T:5k a0'r{O<[RNZ/]ehZ8OM JQc{bNyAND`Fί$zgg$| S˟(Ú="bk#~|!uk:GZS '/U7 /?*ZS?R.~VYਓ3Tqo"k/g>0BO~n浞2fw{ھc Enn\wk+&x/E(,k7WTjݺ?>O r:ϬvUYKY "fLvy r*lz&i ]{c~~9Fz,We"/yY/˭+pY?D'IJ6E⟽7)aiUͰ ~\6=Vd\2W0Ŭ7Ϯ?{yCCkC7XzuI w7NxU-(L{12HCC&|ŹgS~$_u=_ʤVRt / ;#?{ֱ̢~TwW"dpOIIKSMDSdxhK6h_OuP?֗:(pz&i3z-c/iBV#'5N%'>ۈnybHCVr3-lD'&d> TW 2a %4ve?8M7+6lOcw2{r5sBQ)P gt|>fݘPśfD$+| 5Kv4xO,3(-fuBqϮn:o%7[) 6Rϒ!K!z'`vF}O{YrF߼=[$->!"n>`Y N&\gڮ,}!-Zf0!baYX#ERcjP#ڔcCN1:ew:qDE/%, ,]qliadྍǿ{5tlS@&]f!lɢF$L V}#L?ʷؼv;Vi<* `6m'ʌ[J]k[ 8=,ĦpbNO:u:@ T!G&//vXMg酨PmyHwYVeCk˞jLPPAM'VZo KI&oߵQ9&1618ɸ+݇7xij'Z)~v:hkmĞ=Q@_cIh+_l xʱY?Vr(TA\PX O}toI೅^,:hr$nu:[W;v,^CWʾ=fu`5"9aIw6@"3riɹ${IKF~r}#pZw1ֈ`s]j" )S)RI[d-^P3~Ħ]kex ÉFnӰZ;7Sћ aLHwWbTFT"kCL0fsdJktd<>_, E/2Ob'RkbZ<\{=$#N/sODkqy"dh0 fy+Hs8[G0n; EzL,1i P՟l֛k2!XXCPō "j[@/UlV_5![e=-D[]$]k?gOr(od (-ewۘsتy&wXgUV.9Vڤ3Ɲ:zh.2]Ӽu53k[N5[lmghj_=C>2fmsJ 4񩸮{Кe) bk D|!wu.*J *E$p;iuBkFJ9qx׍=`JIq\( ڕ'P*b)JڙMWv­_}[-Rb69SJ"8g@hW"FUy2,ӁGo1+ANdQ,PPFF0&+t H[B#`Ei@̖5Ɂ-9zIF=!ю3'ėF 1e.'4]ah"Lr~,}kb_NoŤܼNʚG=i\+$/x?~uZ50Ae,8#0FotLit򡾞tѿݭ_ː@^լ|y?_?&WoKD|nc1:񌉔'5d:R"fUEM^5޺q\ ,ܖ_!)s~ԟ`Ϝޔ3gu5 t)BٛX4O@cy3st7z̈+\+Ah>9y7N^8igڹA ً[&EG;ձ7{@\/XG+wq{2;# yyOmoo,ܛ7ICK.zzFDIPNHJE)Mrn}kMht&upU犢|6FP.Y%_-NwجRاΛ+H? XCd:R_:Ey~Tƌ, L{j/5B{̰beVsmb{-ld\ x+-RFrR.X Ě)e5ru^nϗNsYCHA=T>o6zu7)o/{~|=+RX)ea)` ,Qy$k$i,}pА>d:gsux[wmmJ%XGyY;sK 3I-/J?J^p.${5Ah4_w[a*sBrzpa9f6h;.5ekDp+UqWSe kl"PT0vz 2PT鳔/ FISJi]rMޢ'sg,/,O) .xlRa:̀+4^! %s+nCeHIVC4#ۂܺ,R+8˜6fIA5n , ɵ@["+cjm$J60RGD}sU¬Jjqs~rvyL2ysgsTb(B #|7$Xb?*? ?|"}jP?jڸ渦V˻v%a?rӒV]D}L.}ǿДJ?Q <<;7e,R8)@ҖY^'+cO07oBn?d~8Vσ^D[Hi ?W8TZը0'\3>D]{U.ȵͦB6o?uyoqCCp84ȥ3$V`2[V_jĽLjȄ[V0אF%ϻt1X,$׸y#|[ME ?_8;ECX fO忽/~6N׏Bp~YR__{pZ}ǹ͝be_s!P[os?-|{z9;(σ2M+dy42/?FXyH*./ Gvk~pAup| *° juaq `:SGPQY.Fo|܍30Aj-;DN%xP#tK&L=; yH c\^zZן1O_؋anF>~`λ߸ߏs(:[?O\]WÙry7L%#FxY5pV YȔV`={SFX*bhSCWHɥI3ew~yץj6uv6K\V+)HsCMJQaLa +N$ X &%%2Z rrGgagB Tyr7._\?s: BXA (3UU{,Kޮ l: Iϕ⿭߆h]Ɓ+]{z%nI~XnUZcޟD8V3>{%U^=e,K% Ǫ8JNaJ8YERj0O赍DyBjGU]N9NH5Z˹bR>::uP9t“M&&5j #HA͞,͹\@> ['TPꌵ,1DM"%6! :\^*Ep]o,Z]kJ[B4Ì&p0i&^^^R߼QFVBnu:ᚙZq skZMڛJkׇRhتy͈ J..^d19*&vy孷ۺ]odz!54UB(fiNB:tZ0$岠iRXj`D~4Ll{ Uv{+fe~1Kƈخ1ox"GM1964(D^46XRI! ?; 1uPl Z㎝=A`s4`nX.2v\۹(-BQA4wPI S\IJ I) l˘5X" D&Lj~($m _MFkH@&RdJdFfDrS! 
var/home/core/zuul-output/logs/kubelet.log
Jan 08 23:15:28 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 08 23:15:28 crc restorecon[4751]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 08 23:15:28 crc restorecon[4751]:
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 08 
23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 08 23:15:28 crc restorecon[4751]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:28 crc restorecon[4751]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:28 crc restorecon[4751]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c84,c419 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:28 crc restorecon[4751]: 
/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 08 23:15:28 crc restorecon[4751]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 08 23:15:28 crc 
restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 08 23:15:28 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 
23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 08 23:15:29 crc 
restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 
08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c466,c972 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
[several hundred further "Jan 08 23:15:29 crc restorecon[4751]:" entries elided; every entry has the same shape, "<path> not reset as customized by admin to system_u:object_r:container_file_t:s0:<cA>,<cB>". The elided paths, grouped by pod under /var/lib/kubelet/pods/:
  9d4552c7-cd75-42dd-8880-30dd377c49a4 (s0:c5,c25): configmap volumes config/ (controller-config.yaml, ..data) and trusted-ca/ (tls-ca-bundle.pem), etc-hosts, containers/console-operator/* scratch files
  1bf7eb37-55a3-4c65-b768-a94c82151e69 (s0:c336,c787): configmap volumes etcd-serving-ca/ (ca-bundle.crt), config/ (config.yaml), audit/ (policy.yaml), image-import-ca/ (image-registry.openshift-image-registry.svc..5000, image-registry.openshift-image-registry.svc.cluster.local..5000, default-route-openshift-image-registry.apps-crc.testing) and trusted-ca-bundle/ (tls-ca-bundle.pem), etc-hosts, containers/{fix-audit-permissions,openshift-apiserver,openshift-apiserver-check-endpoints}/*
  308be0ea-9f5f-4b29-aeb1-5abd31a0b17b (s0:c12,c18): empty-dir tmpfs/k8s-webhook-server/serving-certs, etc-hosts, containers/packageserver/*
  0b78653f-4ff9-4508-8672-245ed9b561e3 (s0:c5,c6): configmap volume service-ca/ (service-ca.crt), etc-hosts, containers/cluster-version-operator/*
  8f668bae-612b-4b75-9490-919e737c6a3b (s0:c10,c16): configmap volumes trusted-ca/ (anchors/ca-bundle.crt) and registry-certificates/ (same three image-registry hostnames as above), plus the empty-dir ca-trust-extracted/ tree: edk2/cacerts.bin, java/cacerts, openssl/ca-bundle.trust.crt, pem/{tls,email,objsign}-ca-bundle.pem, and the pem/directory-hash/ extracted system trust store (public root CA .pem files from AAA_Certificate_Services through GTS_Root_R4 with their *.0 hash links, plus the cluster-local openshift-service-serving-signer_1740288168.pem and ingress-operator_1740288202.pem)]
Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c318,c553 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc 
restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 08 23:15:29 crc restorecon[4751]: [the same message repeats for each path below: "<path> not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13"; all paths are under /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/, and each directory listed is immediately followed by its catalog.json file]
  egressip-ipam-operator
  ember-csi-community-operator
  etcd
  eventing-kogito
  external-secrets-operator
  falcon-operator
  fence-agents-remediation
  flink-kubernetes-operator
  flux
  k8gb
  fossul-operator
  github-arc-operator
  gitops-primer
  gitwebhook-operator
  global-load-balancer-operator
  grafana-operator
  group-sync-operator
  hawtio-operator
  hazelcast-platform-operator
  hedvig-operator
  hive-operator
  horreum-operator
  hyperfoil-bundle
  ibm-block-csi-operator-community
  ibm-security-verify-access-operator
  ibm-spectrum-scale-csi-operator
  ibmcloud-operator
  infinispan
  integrity-shield-operator
  ipfs-operator
  istio-workspace-operator
  jaeger
  kaoto-operator
  keda
  keepalived-operator
  keycloak-operator
  keycloak-permissions-operator
  klusterlet
  kogito-operator
  koku-metrics-operator
  konveyor-operator
  korrel8r
  kuadrant-operator
  kube-green
  kubecost
  kubernetes-imagepuller-operator
  kubeturbo
  l5-operator
  layer7-operator
  lbconfig-operator
  lib-bucket-provisioner
  limitador-operator
  logging-operator
  loki-helm-operator
  loki-operator
  machine-deletion-remediation
  mariadb-operator
  marin3r
  mercury-operator
  microcks
  mongodb-atlas-kubernetes
  mongodb-operator
  move2kube-operator
  multi-nic-cni-operator
  multicluster-global-hub-operator
  multicluster-operators-subscription
  must-gather-operator
  namespace-configuration-operator
  ncn-operator
  ndmspc-operator
  netobserv-operator
  neuvector-community-operator
  nexus-operator
  nexus-operator-m88i
  nfs-provisioner-operator
  nlp-server
  node-discovery-operator
  node-healthcheck-operator
  node-maintenance-operator
  nsm-operator
  oadp-operator
  observability-operator
  oci-ccm-operator
  ocm-operator
  odoo-operator
  opendatahub-operator
  openebs
  openshift-nfd-operator
  openshift-node-upgrade-mutex-operator
  openshift-qiskit-operator
  opentelemetry-operator
  patch-operator
  patterns-operator
  pcc-operator
  pelorus-operator
  percona-xtradb-cluster-operator
  portworx-essentials
  postgresql
  proactive-node-scaling-operator
  project-quay
  prometheus
  prometheus-exporter-operator
  prometurbo
  pubsubplus-eventbroker-operator
  pulp-operator
  rabbitmq-cluster-operator
  rabbitmq-messaging-topology-operator
  redis-operator
  reportportal-operator
  resource-locker-operator
  rhoas-operator
  ripsaw
  sailoperator
  sap-commerce-operator
  sap-data-intelligence-observer-operator
  sap-hana-express-operator
  seldon-operator
  self-node-remediation
  service-binding-operator
  shipwright-operator
  sigstore-helm-operator
  silicom-sts-operator
  skupper-operator
  snapscheduler
  snyk-operator
  socmmd
  sonar-operator
  sosivio
  sonataflow-operator
  sosreport-operator
  spark-helm-operator
  special-resource-operator
  stolostron
  stolostron-engine
  strimzi-kafka-operator
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 08 23:15:29 crc restorecon[4751]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 08 23:15:29 crc restorecon[4751]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 08 23:15:29 crc kubenswrapper[4945]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 08 23:15:29 crc kubenswrapper[4945]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 08 23:15:29 crc kubenswrapper[4945]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 08 23:15:29 crc kubenswrapper[4945]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
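The deprecation warnings above all point at the same remedy: carry the setting in the file passed via the kubelet's --config flag instead of on the command line. As a rough, illustrative sketch (not taken from this cluster's actual configuration), the Go program below emits the kind of KubeletConfiguration fragment that would replace the deprecated flags; the field names follow the kubelet.config.k8s.io/v1beta1 API, while every concrete value here is an assumption for illustration only (the runtime endpoint matches the value visible in the FLAG dump further down).

// Minimal sketch: emit a KubeletConfiguration fragment covering the settings
// the deprecated flags above would otherwise supply. Field names follow the
// kubelet.config.k8s.io/v1beta1 API; all values are illustrative assumptions.
package main

import (
	"encoding/json"
	"fmt"
)

// Taint mirrors the core/v1 Taint shape used by registerWithTaints.
type Taint struct {
	Key    string `json:"key"`
	Value  string `json:"value,omitempty"`
	Effect string `json:"effect"`
}

// KubeletConfiguration holds only the fields relevant to the warnings above.
type KubeletConfiguration struct {
	APIVersion               string  `json:"apiVersion"`
	Kind                     string  `json:"kind"`
	ContainerRuntimeEndpoint string  `json:"containerRuntimeEndpoint,omitempty"`
	VolumePluginDir          string  `json:"volumePluginDir,omitempty"`
	RegisterWithTaints       []Taint `json:"registerWithTaints,omitempty"`
}

func main() {
	cfg := KubeletConfiguration{
		APIVersion:               "kubelet.config.k8s.io/v1beta1",
		Kind:                     "KubeletConfiguration",
		ContainerRuntimeEndpoint: "/var/run/crio/crio.sock",                      // value seen in the FLAG dump below
		VolumePluginDir:          "/etc/kubernetes/kubelet-plugins/volume/exec", // illustrative path
		RegisterWithTaints: []Taint{
			{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}, // illustrative taint
		},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // place this JSON (or its YAML equivalent) in the file given to --config
}

A kubelet started with --config pointing at a file containing that fragment no longer needs the deprecated command-line flags, which is exactly what the warning text recommends.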
Jan 08 23:15:29 crc kubenswrapper[4945]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 08 23:15:29 crc kubenswrapper[4945]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.836349 4945 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840099 4945 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840121 4945 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840128 4945 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840134 4945 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840140 4945 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840145 4945 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840151 4945 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840158 4945 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840164 4945 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840169 4945 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840174 4945 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840179 4945 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840184 4945 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840190 4945 feature_gate.go:330] unrecognized feature gate: Example Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840195 4945 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840200 4945 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840205 4945 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840210 4945 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840215 4945 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840220 4945 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840228 4945 feature_gate.go:353] Setting GA feature gate 
DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840235 4945 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840241 4945 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840246 4945 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840253 4945 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840258 4945 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840263 4945 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840268 4945 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840273 4945 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840278 4945 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840284 4945 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840289 4945 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840294 4945 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840299 4945 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840304 4945 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840311 4945 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840316 4945 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840321 4945 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840327 4945 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840332 4945 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840338 4945 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840343 4945 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840348 4945 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840353 4945 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840359 4945 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840364 4945 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840465 4945 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840471 4945 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840479 4945 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840486 4945 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840491 4945 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840496 4945 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840501 4945 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840506 4945 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840511 4945 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840518 4945 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840525 4945 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840530 4945 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840536 4945 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840541 4945 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840547 4945 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840552 4945 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840557 4945 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840562 4945 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840567 4945 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840572 4945 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840577 4945 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840584 4945 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
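The run of feature_gate.go:330 warnings above (which continues below) is expected on OpenShift: the cluster passes its own gate names (GatewayAPI, InsightsConfig, and so on) to a kubelet that only registers upstream Kubernetes gates, so unknown names are skipped with a warning, while known GA or deprecated gates that are still set explicitly get the :353/:351 notices. A minimal Go sketch of that behavior, assuming a toy registry rather than kubelet's actual implementation:

// Hypothetical sketch (not kubelet source): a feature-gate store that warns
// on unknown names and on explicitly-set GA/deprecated gates, mirroring the
// feature_gate.go:330/:351/:353 messages in this log.
package main

import "fmt"

type stability int

const (
	alpha stability = iota
	beta
	ga
	deprecated
)

// known is an assumed registry; the real kubelet registry is much larger.
var known = map[string]stability{
	"CloudDualStackNodeIPs":                  ga,
	"DisableKubeletCloudCredentialProviders": ga,
	"KMSv1":                                  deprecated,
	"NodeSwap":                               beta,
}

func set(gates map[string]bool, name string, value bool) {
	st, ok := known[name]
	if !ok {
		// OpenShift-only gates land here: kubelet does not know them,
		// so they are skipped with a warning instead of failing startup.
		fmt.Printf("W unrecognized feature gate: %s\n", name)
		return
	}
	switch st {
	case ga:
		fmt.Printf("W Setting GA feature gate %s=%t. It will be removed in a future release.\n", name, value)
	case deprecated:
		fmt.Printf("W Setting deprecated feature gate %s=%t. It will be removed in a future release.\n", name, value)
	}
	gates[name] = value
}

func main() {
	gates := map[string]bool{}
	set(gates, "KMSv1", true)      // deprecated: warns, then applies
	set(gates, "GatewayAPI", true) // unrecognized: warns, skipped
	fmt.Printf("I feature gates: %v\n", gates)
}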
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840590 4945 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840597 4945 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.840604 4945 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.840914 4945 flags.go:64] FLAG: --address="0.0.0.0"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.840930 4945 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.840941 4945 flags.go:64] FLAG: --anonymous-auth="true"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.840948 4945 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.840956 4945 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.840962 4945 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.840970 4945 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.840977 4945 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.840983 4945 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.840989 4945 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841013 4945 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841019 4945 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841025 4945 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841031 4945 flags.go:64] FLAG: --cgroup-root=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841037 4945 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841043 4945 flags.go:64] FLAG: --client-ca-file=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841049 4945 flags.go:64] FLAG: --cloud-config=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841055 4945 flags.go:64] FLAG: --cloud-provider=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841062 4945 flags.go:64] FLAG: --cluster-dns="[]"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841070 4945 flags.go:64] FLAG: --cluster-domain=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841076 4945 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841082 4945 flags.go:64] FLAG: --config-dir=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841088 4945 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841094 4945 flags.go:64] FLAG: --container-log-max-files="5"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841102 4945 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841108 4945 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841114 4945 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841120 4945 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841126 4945 flags.go:64] FLAG: --contention-profiling="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841132 4945 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841138 4945 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841145 4945 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841151 4945 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841159 4945 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841165 4945 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841171 4945 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841178 4945 flags.go:64] FLAG: --enable-load-reader="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841184 4945 flags.go:64] FLAG: --enable-server="true"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841190 4945 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841197 4945 flags.go:64] FLAG: --event-burst="100"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841204 4945 flags.go:64] FLAG: --event-qps="50"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841210 4945 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841216 4945 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841222 4945 flags.go:64] FLAG: --eviction-hard=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841229 4945 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841235 4945 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841241 4945 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841247 4945 flags.go:64] FLAG: --eviction-soft=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841253 4945 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841259 4945 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841265 4945 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841271 4945 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841277 4945 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841283 4945 flags.go:64] FLAG: --fail-swap-on="true"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841289 4945 flags.go:64] FLAG: --feature-gates=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841327 4945 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841334 4945 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841341 4945 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841347 4945 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841353 4945 flags.go:64] FLAG: --healthz-port="10248"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841359 4945 flags.go:64] FLAG: --help="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841365 4945 flags.go:64] FLAG: --hostname-override=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841371 4945 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841377 4945 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841383 4945 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841389 4945 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841395 4945 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841401 4945 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841407 4945 flags.go:64] FLAG: --image-service-endpoint=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841412 4945 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841418 4945 flags.go:64] FLAG: --kube-api-burst="100"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841424 4945 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841431 4945 flags.go:64] FLAG: --kube-api-qps="50"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841437 4945 flags.go:64] FLAG: --kube-reserved=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841443 4945 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841449 4945 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841455 4945 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841461 4945 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841467 4945 flags.go:64] FLAG: --lock-file=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841473 4945 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841479 4945 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841485 4945 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841494 4945 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841501 4945 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841507 4945 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841513 4945 flags.go:64] FLAG: --logging-format="text"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841519 4945 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841525 4945 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841531 4945 flags.go:64] FLAG: --manifest-url=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841537 4945 flags.go:64] FLAG: --manifest-url-header=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841545 4945 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841550 4945 flags.go:64] FLAG: --max-open-files="1000000"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841558 4945 flags.go:64] FLAG: --max-pods="110"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841565 4945 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841571 4945 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841577 4945 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841582 4945 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841588 4945 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841595 4945 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841601 4945 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841613 4945 flags.go:64] FLAG: --node-status-max-images="50"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841619 4945 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841625 4945 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841631 4945 flags.go:64] FLAG: --pod-cidr=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841637 4945 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841646 4945 flags.go:64] FLAG: --pod-manifest-path=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841651 4945 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841658 4945 flags.go:64] FLAG: --pods-per-core="0"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841665 4945 flags.go:64] FLAG: --port="10250"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841671 4945 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841677 4945 flags.go:64] FLAG: --provider-id=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841682 4945 flags.go:64] FLAG: --qos-reserved=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841688 4945 flags.go:64] FLAG: --read-only-port="10255"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841694 4945 flags.go:64] FLAG: --register-node="true"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841701 4945 flags.go:64] FLAG: --register-schedulable="true"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841707 4945 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841722 4945 flags.go:64] FLAG: --registry-burst="10"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841729 4945 flags.go:64] FLAG: --registry-qps="5"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841735 4945 flags.go:64] FLAG: --reserved-cpus=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841741 4945 flags.go:64] FLAG: --reserved-memory=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841748 4945 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841754 4945 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841761 4945 flags.go:64] FLAG: --rotate-certificates="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841767 4945 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841773 4945 flags.go:64] FLAG: --runonce="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841778 4945 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841784 4945 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841791 4945 flags.go:64] FLAG: --seccomp-default="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841797 4945 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841803 4945 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841809 4945 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841815 4945 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841821 4945 flags.go:64] FLAG: --storage-driver-password="root"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841827 4945 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841833 4945 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841839 4945 flags.go:64] FLAG: --storage-driver-user="root"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841846 4945 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841852 4945 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841858 4945 flags.go:64] FLAG: --system-cgroups=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841893 4945 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841905 4945 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841911 4945 flags.go:64] FLAG: --tls-cert-file=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841916 4945 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841923 4945 flags.go:64] FLAG: --tls-min-version=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841930 4945 flags.go:64] FLAG: --tls-private-key-file=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841936 4945 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841943 4945 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841949 4945 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841955 4945 flags.go:64] FLAG: --v="2"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841963 4945 flags.go:64] FLAG: --version="false"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841970 4945 flags.go:64] FLAG: --vmodule=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841977 4945 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.841983 4945 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842176 4945 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842185 4945 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842192 4945 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842198 4945 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842203 4945 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842210 4945 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842222 4945 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842227 4945 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842233 4945 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842238 4945 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842245 4945 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
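The flags.go:64 block above dumps every registered flag, defaults included, before the on-disk config is applied. A short sketch of the same pattern with Go's standard flag package; the flag names and defaults here are illustrative, not kubelet's full set:

// Hypothetical sketch: reproducing a "FLAG: --name=value" startup dump.
package main

import (
	"flag"
	"fmt"
)

func main() {
	// A few sample flags standing in for the real kubelet flag set.
	flag.String("node-ip", "", "node IP address")
	flag.Int("max-pods", 110, "maximum pods per node")
	flag.Bool("fail-swap-on", true, "fail to start if swap is enabled")
	flag.Parse()

	// VisitAll walks every registered flag whether or not it was set on the
	// command line, which is why the log shows defaults alongside the values
	// that were actually passed.
	flag.VisitAll(func(f *flag.Flag) {
		fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
	})
}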
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842252 4945 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842259 4945 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842266 4945 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842273 4945 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842279 4945 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842286 4945 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842292 4945 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842299 4945 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842305 4945 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842312 4945 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842319 4945 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842325 4945 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842332 4945 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842338 4945 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842345 4945 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842350 4945 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842357 4945 feature_gate.go:330] unrecognized feature gate: Example
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842362 4945 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842372 4945 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842378 4945 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842383 4945 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842388 4945 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842393 4945 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842398 4945 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842403 4945 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842410 4945 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842417 4945 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842427 4945 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842433 4945 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842438 4945 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842444 4945 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842449 4945 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842455 4945 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842460 4945 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842465 4945 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842470 4945 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842475 4945 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842480 4945 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842486 4945 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842491 4945 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842496 4945 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842501 4945 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842506 4945 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842511 4945 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842517 4945 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842522 4945 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842527 4945 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842532 4945 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842537 4945 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842543 4945 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842548 4945 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842553 4945 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842559 4945 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842564 4945 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842569 4945 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842574 4945 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842579 4945 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842584 4945 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842589 4945 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.842597 4945 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.842606 4945 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.853486 4945 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.853506 4945 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853583 4945 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853589 4945 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853594 4945 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853598 4945 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853602 4945 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853607 4945 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853611 4945 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853615 4945 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853618 4945 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853622 4945 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853625 4945 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853630 4945 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853636 4945 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853640 4945 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853645 4945 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853650 4945 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853654 4945 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853658 4945 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853662 4945 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853667 4945 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853671 4945 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853675 4945 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853679 4945 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853683 4945 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853687 4945 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853690 4945 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853694 4945 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853697 4945 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853701 4945 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853704 4945 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853709 4945 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853713 4945 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853717 4945 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853720 4945 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853724 4945 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853727 4945 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853731 4945 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853735 4945 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853740 4945 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853743 4945 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853747 4945 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853751 4945 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853754 4945 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853758 4945 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853762 4945 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853766 4945 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853769 4945 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853772 4945 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853776 4945 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853780 4945 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853783 4945 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853786 4945 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853790 4945 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853793 4945 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853798 4945 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853801 4945 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853805 4945 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853808 4945 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853811 4945 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853816 4945 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853819 4945 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853822 4945 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853826 4945 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853829 4945 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853833 4945 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853837 4945 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853840 4945 feature_gate.go:330] unrecognized feature gate: Example
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853843 4945 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853847 4945 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853850 4945 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.853854 4945 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.853860 4945 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854026 4945 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854035 4945 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854040 4945 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854044 4945 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854048 4945 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854053 4945 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
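The feature_gate.go:386 lines above print the resolved gate map after explicit settings (KMSv1=true, ValidatingAdmissionPolicy=true, and so on) are overlaid on the built-in defaults; the dump repeats because gates are applied once per configuration source. A hedged sketch of parsing a --feature-gates style "Name=bool,Name=bool" string into that map form (the parsing details are illustrative, not kubelet's exact code):

// Hypothetical sketch: turning "Name=bool,..." into map[string]bool.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func parseGates(spec string) (map[string]bool, error) {
	gates := map[string]bool{}
	if spec == "" {
		return gates, nil
	}
	for _, pair := range strings.Split(spec, ",") {
		name, raw, ok := strings.Cut(pair, "=")
		if !ok {
			return nil, fmt.Errorf("missing '=' in %q", pair)
		}
		v, err := strconv.ParseBool(raw)
		if err != nil {
			return nil, fmt.Errorf("bad value for %s: %w", name, err)
		}
		gates[strings.TrimSpace(name)] = v
	}
	return gates, nil
}

func main() {
	g, err := parseGates("CloudDualStackNodeIPs=true,KMSv1=true,NodeSwap=false")
	if err != nil {
		panic(err)
	}
	// Printed with %v, a Go map renders as map[KMSv1:true ...], the same
	// shape as the feature_gate.go:386 summary above.
	fmt.Printf("feature gates: %v\n", g)
}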
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854058 4945 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854062 4945 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854066 4945 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854069 4945 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854073 4945 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854076 4945 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854080 4945 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854083 4945 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854087 4945 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854090 4945 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854094 4945 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854098 4945 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854101 4945 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854105 4945 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854109 4945 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854114 4945 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854118 4945 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854122 4945 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854126 4945 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854129 4945 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854133 4945 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854136 4945 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854141 4945 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854145 4945 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854149 4945 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854153 4945 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854156 4945 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854160 4945 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854163 4945 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854167 4945 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854170 4945 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854175 4945 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854178 4945 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854182 4945 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854185 4945 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854189 4945 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854192 4945 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854195 4945 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854199 4945 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854202 4945 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854205 4945 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854209 4945 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854212 4945 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854216 4945 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854219 4945 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854223 4945 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854226 4945 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854229 4945 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854234 4945 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854237 4945 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854241 4945 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854244 4945 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854249 4945 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854253 4945 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854257 4945 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854261 4945 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854266 4945 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854270 4945 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854274 4945 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854277 4945 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854280 4945 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854285 4945 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854288 4945 feature_gate.go:330] unrecognized feature gate: Example
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854292 4945 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.854296 4945 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.854301 4945 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.854425 4945 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.856949 4945 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.857032 4945 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
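The certificate_store.go line above shows the kubelet loading its combined client cert/key file, and the entries that follow schedule a rotation deadline (2025-12-10) well ahead of the certificate's expiration (2026-02-24). A sketch of computing such a deadline from the on-disk PEM; the 70-90% jitter window and the error handling are assumptions for illustration, not the exact kubelet policy:

// Hypothetical sketch: read the client certificate and pick a jittered
// rotation deadline inside its validity window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"math/rand"
	"os"
	"time"
)

// rotationDeadline picks a point 70-90% (assumed range) through the cert's
// lifetime so a fleet of kubelets does not hit the CSR API all at once.
func rotationDeadline(cert *x509.Certificate) time.Time {
	lifetime := cert.NotAfter.Sub(cert.NotBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return cert.NotBefore.Add(time.Duration(float64(lifetime) * frac))
}

func main() {
	// The combined cert+key file named in the certificate_store.go entry.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	// The file holds both the certificate and the private key; scan the PEM
	// blocks for the CERTIFICATE block and ignore the rest.
	for rest := data; ; {
		block, next := pem.Decode(rest)
		if block == nil {
			panic("no CERTIFICATE block found")
		}
		if block.Type == "CERTIFICATE" {
			cert, err := x509.ParseCertificate(block.Bytes)
			if err != nil {
				panic(err)
			}
			fmt.Println("expiration:       ", cert.NotAfter)
			fmt.Println("rotation deadline:", rotationDeadline(cert))
			return
		}
		rest = next
	}
}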
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.857555 4945 server.go:997] "Starting client certificate rotation"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.857575 4945 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.858159 4945 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-10 17:26:38.708839206 +0000 UTC
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.858310 4945 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.862311 4945 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.864257 4945 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 08 23:15:29 crc kubenswrapper[4945]: E0108 23:15:29.864918 4945 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.874458 4945 log.go:25] "Validated CRI v1 runtime API"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.894772 4945 log.go:25] "Validated CRI v1 image API"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.896470 4945 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.898299 4945 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-08-23-10-56-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.898338 4945 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.930714 4945 manager.go:217] Machine: {Timestamp:2026-01-08 23:15:29.928608558 +0000 UTC m=+0.239767584 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:96e5d795-e15f-4ef9-8efa-c5f4d0e8076a BootID:3499f74f-1067-4bdd-9043-f09d8e65e05d Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:95:c1:82 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:95:c1:82 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:6b:b9:3a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:1e:d8:7d Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f9:f4:55 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ad:99:d5 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:f0:b2:88 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:52:a0:59:5e:ef:04 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f6:e0:56:9d:4f:0a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.931131 4945 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.931546 4945 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.932568 4945 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.932891 4945 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.932956 4945 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.933414 4945 topology_manager.go:138] "Creating topology manager with none policy"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.933434 4945 container_manager_linux.go:303] "Creating device plugin manager"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.933696 4945 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.933775 4945 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.934137 4945 state_mem.go:36] "Initialized new in-memory state store"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.934281 4945 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.935320 4945 kubelet.go:418] "Attempting to sync node with API server"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.935361 4945 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.935424 4945 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.935450 4945 kubelet.go:324] "Adding apiserver pod source"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.935472 4945 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.938022 4945 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused
Jan 08 23:15:29 crc kubenswrapper[4945]: E0108 23:15:29.938148 4945 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.938564 4945 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.938524 4945 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused
Jan 08 23:15:29 crc kubenswrapper[4945]: E0108 23:15:29.938728 4945 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.939142 4945 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.939963 4945 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.940560 4945 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.940582 4945 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.940589 4945 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.940599 4945 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.940609 4945 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.940620 4945 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.940627 4945 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.940637 4945 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.940647 4945 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.940654 4945 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.940678 4945 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.940689 4945 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.940857 4945 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.941466 4945 server.go:1280] "Started kubelet"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.941724 4945 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused
Jan 08 23:15:29 crc systemd[1]: Started Kubernetes Kubelet.
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.943707 4945 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.943101 4945 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.945352 4945 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.945523 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.945762 4945 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.947082 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:42:57.993156089 +0000 UTC
Jan 08 23:15:29 crc kubenswrapper[4945]: E0108 23:15:29.947227 4945 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.947250 4945 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.947272 4945 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.947304 4945 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.949726 4945 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.949777 4945 factory.go:55] Registering systemd factory
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.949795 4945 factory.go:221] Registration of the systemd container factory successfully
Jan 08 23:15:29 crc kubenswrapper[4945]: E0108 23:15:29.948847 4945 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.74:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1888e488cc8d7599 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-08 23:15:29.941427609 +0000 UTC m=+0.252586555,LastTimestamp:2026-01-08 23:15:29.941427609 +0000 UTC m=+0.252586555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 08 23:15:29 crc kubenswrapper[4945]: E0108 23:15:29.949870 4945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="200ms"
Jan 08 23:15:29 crc kubenswrapper[4945]: W0108 23:15:29.950033 4945 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused
Jan 08 23:15:29 crc kubenswrapper[4945]: E0108 23:15:29.950108 4945 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.950218 4945 factory.go:153] Registering CRI-O factory
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.950281 4945 factory.go:221] Registration of the crio container factory successfully
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.950521 4945 factory.go:103] Registering Raw factory
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.950546 4945 manager.go:1196] Started watching for new ooms in manager
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.951538 4945 server.go:460] "Adding debug handlers to kubelet server"
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.952739 4945 manager.go:319] Starting recovery of all containers
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971332 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971419 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971444 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971472 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971493 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971513 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971538 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971575 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971614 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971633 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971660 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971682 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971701 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971725 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971744 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971762 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971791 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971857 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971876 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971904 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971924 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971942 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971963 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.971984 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972039 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972059 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972084 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972103 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972123 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972194 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972240 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972259 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972284 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972303 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972320 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972357 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972396 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972423 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972451 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972473 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972492 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972512 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972533 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972552 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972578 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972597 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972616 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972635 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972654 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972673 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972693 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972721 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972832 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972864 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972889 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972913 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972933 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972953 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.972971 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973022 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973044 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973063 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973082 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973101 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973130 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973149 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973166 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973190 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973236 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973261 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973294 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973345 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973375 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973406 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973455 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973475 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973502 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973521 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973539 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.973557 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978242 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978292 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978317 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978331 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978365 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978384 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978397 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978412 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978423 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978449 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978559 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978572 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978606 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978619 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978629 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978643 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978654 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978686 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978698 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978709 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978734 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978917 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978936 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.978946 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.979002 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.979024 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.979044 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.979193 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.980640 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.980704 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.980735 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.980760 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.980773 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.980792 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.980803 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.980845 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.980860 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.980892 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.980903 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.980913 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.980927 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.980954 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.980967 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.980982 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981006 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981021 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981033 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981044 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981061 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981072 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981088 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981100 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981112 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981125 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981135 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981147 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981166 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981178 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981193 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981205 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981219 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981231 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981242 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981255 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981265 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981275 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981288 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981298 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981313 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981325 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981336 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981349 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981359 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981372 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981383 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981451 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981466 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981476 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981489 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981498 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981509 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981528 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981538 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981553 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981565 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981577 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981590 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981601 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981612 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981628 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981645 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981662 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981671 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981682 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981703 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.981714 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982334 4945 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982364 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982376 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982389 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982415 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982426 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982448 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982458 4945 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982468 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982485 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982495 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982510 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982525 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982536 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982549 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982558 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982568 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982582 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982592 4945 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982613 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982627 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982643 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982660 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982676 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982691 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982701 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982712 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982728 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982739 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982757 4945 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982765 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982777 4945 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982786 4945 reconstruct.go:97] "Volume reconstruction finished" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.982799 4945 reconciler.go:26] "Reconciler: start to sync state" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.984251 4945 manager.go:324] Recovery completed Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.994116 4945 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.996412 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.998201 4945 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.998196 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.998423 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.998434 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.998404 4945 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.998924 4945 kubelet.go:2335] "Starting kubelet main sync loop" Jan 08 23:15:29 crc kubenswrapper[4945]: E0108 23:15:29.999131 4945 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.999606 4945 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.999625 4945 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 08 23:15:29 crc kubenswrapper[4945]: I0108 23:15:29.999644 4945 state_mem.go:36] "Initialized new in-memory state store" Jan 08 23:15:30 crc kubenswrapper[4945]: W0108 23:15:30.000521 4945 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Jan 08 23:15:30 crc kubenswrapper[4945]: E0108 23:15:30.000617 4945 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.009344 4945 policy_none.go:49] "None policy: Start" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.010008 4945 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.010036 4945 state_mem.go:35] "Initializing new in-memory state store" Jan 08 23:15:30 crc kubenswrapper[4945]: E0108 23:15:30.047295 4945 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.061783 4945 manager.go:334] "Starting Device Plugin manager" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.061820 4945 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.061830 4945 server.go:79] "Starting device plugin registration server" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.062309 4945 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.062327 4945 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.062492 4945 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.062617 4945 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.062627 4945 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 08 23:15:30 crc kubenswrapper[4945]: E0108 23:15:30.074429 4945 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.099601 4945 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.099688 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.101017 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.101070 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.101083 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.101223 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.101509 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.101558 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.101891 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.101910 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.101918 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.102116 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.102272 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.102385 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.102418 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.102461 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.102473 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.102917 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.102964 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.102976 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.103110 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.103253 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.103288 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.103402 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.103425 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.103434 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.104266 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.104296 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.104305 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.104305 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.104334 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.104350 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.104583 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.104634 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.104652 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.105676 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.105696 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.105704 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.106455 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.106477 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.106485 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.106653 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.106677 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.107327 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.107350 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.107361 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:30 crc kubenswrapper[4945]: E0108 23:15:30.151113 4945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="400ms" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.162517 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.164128 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.164161 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.164174 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.164199 4945 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 08 23:15:30 crc kubenswrapper[4945]: E0108 23:15:30.164655 4945 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.74:6443: connect: connection refused" node="crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.185214 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.185245 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.185609 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.185641 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.185692 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.185790 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.185809 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.185828 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.185934 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.186053 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.186115 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.186696 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.186803 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.186859 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.186890 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.287681 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.287729 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.287747 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.287763 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.287779 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.287802 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.287816 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: 
I0108 23:15:30.287831 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.287852 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.287874 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.287889 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.287903 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.287902 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.287921 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.287939 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.287954 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.288010 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.288052 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.288074 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.288130 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.288114 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.288104 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.288184 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.288193 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.288204 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.288217 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.288263 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.288312 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.288203 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.288269 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.365355 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.366946 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.367037 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.367053 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.367102 4945 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 08 23:15:30 crc kubenswrapper[4945]: E0108 23:15:30.367777 4945 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.74:6443: connect: connection refused" node="crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.431912 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.458346 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: W0108 23:15:30.458358 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-0af6a5b06812c4c762bea9dcdd1a8a9f93621eb352548573ed7670f472db29a8 WatchSource:0}: Error finding container 0af6a5b06812c4c762bea9dcdd1a8a9f93621eb352548573ed7670f472db29a8: Status 404 returned error can't find the container with id 0af6a5b06812c4c762bea9dcdd1a8a9f93621eb352548573ed7670f472db29a8 Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.464676 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: W0108 23:15:30.484232 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ee4eefaa3a0c1278972888a974315548f0eb78caa2397073c3d76b7dbe3eac52 WatchSource:0}: Error finding container ee4eefaa3a0c1278972888a974315548f0eb78caa2397073c3d76b7dbe3eac52: Status 404 returned error can't find the container with id ee4eefaa3a0c1278972888a974315548f0eb78caa2397073c3d76b7dbe3eac52 Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.484728 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.488951 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 08 23:15:30 crc kubenswrapper[4945]: W0108 23:15:30.507732 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-d9769e8f803211be199c2e6998289dd1aa10d9240401e96eaa4581658d428373 WatchSource:0}: Error finding container d9769e8f803211be199c2e6998289dd1aa10d9240401e96eaa4581658d428373: Status 404 returned error can't find the container with id d9769e8f803211be199c2e6998289dd1aa10d9240401e96eaa4581658d428373 Jan 08 23:15:30 crc kubenswrapper[4945]: W0108 23:15:30.511657 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-7697a56c0faa7522d47ca2ce047b1e4930c0ac740310cbf7a67242935450a17f WatchSource:0}: Error finding container 7697a56c0faa7522d47ca2ce047b1e4930c0ac740310cbf7a67242935450a17f: Status 404 returned error can't find the container with id 7697a56c0faa7522d47ca2ce047b1e4930c0ac740310cbf7a67242935450a17f Jan 08 23:15:30 crc kubenswrapper[4945]: E0108 23:15:30.552641 4945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="800ms" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.768214 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.769370 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.769397 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.769406 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.769429 4945 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 08 23:15:30 crc kubenswrapper[4945]: E0108 23:15:30.769860 4945 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.74:6443: connect: connection refused" node="crc" Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 
23:15:30.944183 4945 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Jan 08 23:15:30 crc kubenswrapper[4945]: I0108 23:15:30.947857 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:13:37.027054282 +0000 UTC Jan 08 23:15:30 crc kubenswrapper[4945]: W0108 23:15:30.965518 4945 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Jan 08 23:15:30 crc kubenswrapper[4945]: E0108 23:15:30.965579 4945 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError" Jan 08 23:15:30 crc kubenswrapper[4945]: W0108 23:15:30.972314 4945 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Jan 08 23:15:30 crc kubenswrapper[4945]: E0108 23:15:30.972354 4945 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError" Jan 08 23:15:30 crc kubenswrapper[4945]: W0108 23:15:30.996304 4945 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Jan 08 23:15:30 crc kubenswrapper[4945]: E0108 23:15:30.996354 4945 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.003651 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21"} Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.003742 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d9769e8f803211be199c2e6998289dd1aa10d9240401e96eaa4581658d428373"} Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.004967 4945 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28" exitCode=0 Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.005023 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28"} Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.005086 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ee4eefaa3a0c1278972888a974315548f0eb78caa2397073c3d76b7dbe3eac52"} Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.005202 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.006017 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.006046 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.006059 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.006486 4945 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2" exitCode=0 Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.006574 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2"} Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.006599 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3cede4a570b8349188110d1ad8bf9c68245ea5739a7af22ad68eddc7464e7e7e"} Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.006712 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.007698 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.007724 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.007749 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.008263 4945 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419" exitCode=0 Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.008346 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419"} Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 
23:15:31.008393 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0af6a5b06812c4c762bea9dcdd1a8a9f93621eb352548573ed7670f472db29a8"} Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.008561 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.008565 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.010130 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.010151 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.010159 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.010491 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.010511 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.010521 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.010677 4945 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175" exitCode=0 Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.011119 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175"} Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.011161 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7697a56c0faa7522d47ca2ce047b1e4930c0ac740310cbf7a67242935450a17f"} Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.011225 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.012005 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.012031 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.012041 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:31 crc kubenswrapper[4945]: W0108 23:15:31.080043 4945 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.74:6443: connect: connection refused Jan 08 
23:15:31 crc kubenswrapper[4945]: E0108 23:15:31.080129 4945 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.74:6443: connect: connection refused" logger="UnhandledError" Jan 08 23:15:31 crc kubenswrapper[4945]: E0108 23:15:31.353281 4945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="1.6s" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.570957 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.572766 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.572894 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.572906 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.573029 4945 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 08 23:15:31 crc kubenswrapper[4945]: I0108 23:15:31.948406 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 02:12:50.959327195 +0000 UTC Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.016173 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b8668fcd7154876b41602c1afc458c8680fbe1076da40d0ba670399fb1c815f9"} Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.016218 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"86a8fbe7c9002a40a9710659bf5e26884c493bb4f04a1605011f74cf208e6d42"} Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.016234 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b4d36680efef4e861275546b973f2f5951f12bd9a1345e7e42a074f63a239e39"} Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.016315 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.017114 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.017181 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.017207 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.019938 4945 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec"} Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.020016 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7"} Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.020031 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6"} Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.020043 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.020906 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.020943 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.020952 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.022312 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85"} Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.022337 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f"} Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.022346 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248"} Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.022355 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f"} Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.024223 4945 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727" exitCode=0 Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.024294 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727"} Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.024395 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 
23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.027048 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.027077 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.027088 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.029805 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a2c88c1ec183edf26ebbcc75ef548776b5c006c50b07f606d3bcbd453a42ca08"} Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.029884 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.030725 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.030751 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.030765 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.038436 4945 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.696090 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 08 23:15:32 crc kubenswrapper[4945]: I0108 23:15:32.949078 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:13:13.297142819 +0000 UTC Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.038946 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718"} Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.039158 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.040403 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.040436 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.040447 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.044325 4945 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072" exitCode=0 Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.044413 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072"} Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.044464 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.044570 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.044584 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.045726 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.045770 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.045788 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.046029 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.046047 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.046058 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.046059 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.046094 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.046110 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:33 crc kubenswrapper[4945]: I0108 23:15:33.950622 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 22:47:41.813090108 +0000 UTC Jan 08 23:15:34 crc kubenswrapper[4945]: I0108 23:15:34.051133 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd"} Jan 08 23:15:34 crc kubenswrapper[4945]: I0108 23:15:34.051170 4945 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 08 23:15:34 crc kubenswrapper[4945]: I0108 23:15:34.051194 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc"} Jan 08 23:15:34 crc kubenswrapper[4945]: I0108 23:15:34.051212 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:34 crc kubenswrapper[4945]: I0108 23:15:34.051213 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3"} Jan 08 23:15:34 crc kubenswrapper[4945]: I0108 23:15:34.051598 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b"} Jan 08 23:15:34 crc kubenswrapper[4945]: I0108 23:15:34.052371 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:34 crc kubenswrapper[4945]: I0108 23:15:34.052408 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:34 crc kubenswrapper[4945]: I0108 23:15:34.052420 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:34 crc kubenswrapper[4945]: I0108 23:15:34.951742 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:31:59.417982549 +0000 UTC Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.048448 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.058121 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848"} Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.058166 4945 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.058190 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.058228 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.059133 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.059165 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.059176 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.059685 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.059808 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.059878 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.271824 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.348359 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.496538 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.496959 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.501809 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.501889 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.501917 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.652483 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.661547 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.934842 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 08 23:15:35 crc kubenswrapper[4945]: I0108 23:15:35.952087 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:21:16.601022289 +0000 UTC Jan 08 23:15:36 crc kubenswrapper[4945]: I0108 23:15:36.061291 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:36 crc kubenswrapper[4945]: I0108 23:15:36.061407 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:36 crc kubenswrapper[4945]: I0108 23:15:36.061612 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:36 crc kubenswrapper[4945]: I0108 23:15:36.062987 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:36 crc kubenswrapper[4945]: I0108 23:15:36.063058 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:36 crc kubenswrapper[4945]: I0108 23:15:36.063076 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:36 crc kubenswrapper[4945]: I0108 23:15:36.063544 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:36 crc kubenswrapper[4945]: I0108 23:15:36.063606 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:36 crc kubenswrapper[4945]: I0108 23:15:36.063625 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:36 crc kubenswrapper[4945]: I0108 23:15:36.064288 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:36 crc kubenswrapper[4945]: I0108 
23:15:36.064345 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:36 crc kubenswrapper[4945]: I0108 23:15:36.064364 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:36 crc kubenswrapper[4945]: I0108 23:15:36.952460 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:27:39.277023043 +0000 UTC Jan 08 23:15:37 crc kubenswrapper[4945]: I0108 23:15:37.064733 4945 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 08 23:15:37 crc kubenswrapper[4945]: I0108 23:15:37.064785 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:37 crc kubenswrapper[4945]: I0108 23:15:37.064816 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:37 crc kubenswrapper[4945]: I0108 23:15:37.066884 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:37 crc kubenswrapper[4945]: I0108 23:15:37.066933 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:37 crc kubenswrapper[4945]: I0108 23:15:37.066929 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:37 crc kubenswrapper[4945]: I0108 23:15:37.066952 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:37 crc kubenswrapper[4945]: I0108 23:15:37.067032 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:37 crc kubenswrapper[4945]: I0108 23:15:37.067056 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:37 crc kubenswrapper[4945]: I0108 23:15:37.363773 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 08 23:15:37 crc kubenswrapper[4945]: I0108 23:15:37.364119 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:37 crc kubenswrapper[4945]: I0108 23:15:37.366382 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:37 crc kubenswrapper[4945]: I0108 23:15:37.366482 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:37 crc kubenswrapper[4945]: I0108 23:15:37.366520 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:37 crc kubenswrapper[4945]: I0108 23:15:37.953206 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:37:32.790190961 +0000 UTC Jan 08 23:15:38 crc kubenswrapper[4945]: I0108 23:15:38.147883 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 08 23:15:38 crc kubenswrapper[4945]: I0108 23:15:38.148152 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:38 crc kubenswrapper[4945]: I0108 
23:15:38.149973 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:38 crc kubenswrapper[4945]: I0108 23:15:38.150087 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:38 crc kubenswrapper[4945]: I0108 23:15:38.150112 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:38 crc kubenswrapper[4945]: I0108 23:15:38.935807 4945 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 08 23:15:38 crc kubenswrapper[4945]: I0108 23:15:38.935910 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 08 23:15:38 crc kubenswrapper[4945]: I0108 23:15:38.953548 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:00:41.589408177 +0000 UTC Jan 08 23:15:39 crc kubenswrapper[4945]: I0108 23:15:39.954577 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 16:53:30.364957697 +0000 UTC Jan 08 23:15:40 crc kubenswrapper[4945]: E0108 23:15:40.074768 4945 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 08 23:15:40 crc kubenswrapper[4945]: I0108 23:15:40.954831 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:57:37.494453648 +0000 UTC Jan 08 23:15:41 crc kubenswrapper[4945]: E0108 23:15:41.574386 4945 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 08 23:15:41 crc kubenswrapper[4945]: I0108 23:15:41.943719 4945 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 08 23:15:41 crc kubenswrapper[4945]: I0108 23:15:41.955013 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:53:03.889671756 +0000 UTC Jan 08 23:15:42 crc kubenswrapper[4945]: E0108 23:15:42.040416 4945 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 08 23:15:42 crc kubenswrapper[4945]: I0108 23:15:42.071423 4945 patch_prober.go:28] 
interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 08 23:15:42 crc kubenswrapper[4945]: I0108 23:15:42.071494 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 08 23:15:42 crc kubenswrapper[4945]: I0108 23:15:42.502599 4945 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 08 23:15:42 crc kubenswrapper[4945]: I0108 23:15:42.502656 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 08 23:15:42 crc kubenswrapper[4945]: I0108 23:15:42.509369 4945 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 08 23:15:42 crc kubenswrapper[4945]: I0108 23:15:42.509434 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 08 23:15:42 crc kubenswrapper[4945]: I0108 23:15:42.622463 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 08 23:15:42 crc kubenswrapper[4945]: I0108 23:15:42.622664 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:42 crc kubenswrapper[4945]: I0108 23:15:42.623760 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:42 crc kubenswrapper[4945]: I0108 23:15:42.623786 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:42 crc kubenswrapper[4945]: I0108 23:15:42.623795 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:42 crc kubenswrapper[4945]: I0108 23:15:42.955311 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:33:54.163296763 +0000 UTC Jan 08 23:15:43 crc kubenswrapper[4945]: I0108 23:15:43.175095 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:43 crc kubenswrapper[4945]: I0108 23:15:43.176978 4945 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:43 crc kubenswrapper[4945]: I0108 23:15:43.177243 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:43 crc kubenswrapper[4945]: I0108 23:15:43.177414 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:43 crc kubenswrapper[4945]: I0108 23:15:43.177624 4945 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 08 23:15:43 crc kubenswrapper[4945]: I0108 23:15:43.956228 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:14:21.551173048 +0000 UTC Jan 08 23:15:44 crc kubenswrapper[4945]: I0108 23:15:44.957295 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 13:25:48.760213304 +0000 UTC Jan 08 23:15:45 crc kubenswrapper[4945]: I0108 23:15:45.055080 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:15:45 crc kubenswrapper[4945]: I0108 23:15:45.055320 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:45 crc kubenswrapper[4945]: I0108 23:15:45.055860 4945 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 08 23:15:45 crc kubenswrapper[4945]: I0108 23:15:45.055939 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 08 23:15:45 crc kubenswrapper[4945]: I0108 23:15:45.056841 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:45 crc kubenswrapper[4945]: I0108 23:15:45.056873 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:45 crc kubenswrapper[4945]: I0108 23:15:45.056884 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:45 crc kubenswrapper[4945]: I0108 23:15:45.060451 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:15:45 crc kubenswrapper[4945]: I0108 23:15:45.086433 4945 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 08 23:15:45 crc kubenswrapper[4945]: I0108 23:15:45.086943 4945 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 08 23:15:45 crc kubenswrapper[4945]: I0108 23:15:45.087030 4945 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 08 23:15:45 crc kubenswrapper[4945]: I0108 23:15:45.088051 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:45 crc kubenswrapper[4945]: I0108 23:15:45.088117 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:45 crc kubenswrapper[4945]: I0108 23:15:45.088136 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:45 crc kubenswrapper[4945]: I0108 23:15:45.273272 4945 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 08 23:15:45 crc kubenswrapper[4945]: I0108 23:15:45.273369 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 08 23:15:45 crc kubenswrapper[4945]: I0108 23:15:45.957781 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:39:24.015171682 +0000 UTC Jan 08 23:15:46 crc kubenswrapper[4945]: I0108 23:15:46.373679 4945 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 08 23:15:46 crc kubenswrapper[4945]: I0108 23:15:46.386616 4945 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 08 23:15:46 crc kubenswrapper[4945]: I0108 23:15:46.958691 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 07:27:04.921847488 +0000 UTC Jan 08 23:15:47 crc kubenswrapper[4945]: E0108 23:15:47.502612 4945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.504813 4945 trace.go:236] Trace[551956088]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Jan-2026 23:15:33.829) (total time: 13675ms): Jan 08 23:15:47 crc kubenswrapper[4945]: Trace[551956088]: ---"Objects listed" error: 13675ms (23:15:47.504) Jan 08 23:15:47 crc kubenswrapper[4945]: Trace[551956088]: [13.675467167s] [13.675467167s] END Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.504841 4945 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.504842 4945 trace.go:236] Trace[627478629]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Jan-2026 23:15:33.425) (total time: 14079ms): Jan 08 23:15:47 crc kubenswrapper[4945]: 
Trace[627478629]: ---"Objects listed" error: 14079ms (23:15:47.504) Jan 08 23:15:47 crc kubenswrapper[4945]: Trace[627478629]: [14.079290999s] [14.079290999s] END Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.504866 4945 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.505159 4945 trace.go:236] Trace[1677145522]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Jan-2026 23:15:32.652) (total time: 14852ms): Jan 08 23:15:47 crc kubenswrapper[4945]: Trace[1677145522]: ---"Objects listed" error: 14852ms (23:15:47.505) Jan 08 23:15:47 crc kubenswrapper[4945]: Trace[1677145522]: [14.852890954s] [14.852890954s] END Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.505184 4945 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.505707 4945 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.506322 4945 trace.go:236] Trace[1746865292]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Jan-2026 23:15:33.907) (total time: 13598ms): Jan 08 23:15:47 crc kubenswrapper[4945]: Trace[1746865292]: ---"Objects listed" error: 13598ms (23:15:47.506) Jan 08 23:15:47 crc kubenswrapper[4945]: Trace[1746865292]: [13.598725254s] [13.598725254s] END Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.506505 4945 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.945542 4945 apiserver.go:52] "Watching apiserver" Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.951408 4945 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.951646 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.951978 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.952118 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:15:47 crc kubenswrapper[4945]: E0108 23:15:47.952207 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.952380 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.952438 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 08 23:15:47 crc kubenswrapper[4945]: E0108 23:15:47.952441 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.952453 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:15:47 crc kubenswrapper[4945]: E0108 23:15:47.952878 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.952895 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.957136 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.957174 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.957137 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.957185 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.957837 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.957907 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.958052 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.958639 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.959042 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:53:09.274479711 +0000 UTC Jan 08 23:15:47 crc kubenswrapper[4945]: I0108 23:15:47.959853 4945 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.003290 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.017914 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.026805 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.030227 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.033548 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.038901 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.038901 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.047453 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"]
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.048055 4945 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.049284 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.059297 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.069749 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.081688 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.093498 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.095198 4945 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718" exitCode=255 Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.096038 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718"} Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.100940 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.108769 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.108823 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.108847 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.108871 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.108893 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.108917 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.108940 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.108964 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.108985 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109033 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109058 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109083 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109104 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109129 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109124 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109152 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109177 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109202 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109232 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109262 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109283 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109310 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109313 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109334 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109347 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109362 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109391 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109441 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109462 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109461 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109510 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109532 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109534 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109553 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109554 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109577 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109580 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109575 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109599 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109661 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109685 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109704 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109721 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109753 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109769 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109786 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109804 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109820 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 08 
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109836 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109851 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109866 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109880 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109898 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109915 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109933 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109948 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109962 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109977 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110016 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110042 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110064 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110087 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110112 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110132 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110160 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110183 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110205 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110249 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110269 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110295 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110316 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110341 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110363 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110500 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110524 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110547 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110571 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110592 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110612 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110636 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110659 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110714 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110735 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110757 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110779 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110804 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110826 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110849 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110870 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110895 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110916 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110939 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110963 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111032 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111057 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111078 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111097 4945 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111119 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111137 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111158 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109662 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.109848 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111240 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111268 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111293 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111316 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111338 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111360 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111386 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111409 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111513 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111539 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111561 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111583 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111603 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111623 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111644 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111667 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111691 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111713 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111737 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111761 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" 
(UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111785 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111811 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111836 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111872 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111893 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111915 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111937 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111961 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111988 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112037 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112059 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112085 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112105 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112127 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112151 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112173 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112197 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112215 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112231 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112251 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112274 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112298 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112322 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112345 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112370 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112392 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112413 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112434 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112460 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112484 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112505 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112529 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112549 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112572 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112595 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112621 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112648 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112672 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112862 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112890 4945 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112909 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112927 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112944 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112962 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112977 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113048 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113072 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113095 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113112 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113137 4945 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113154 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113171 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113188 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113204 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113220 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113235 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113250 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113266 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113282 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113300 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113315 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113332 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113349 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113367 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113389 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113415 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113442 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113468 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113613 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 
23:15:48.113637 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113661 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113686 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113709 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113735 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113759 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113783 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113871 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113901 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113924 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 08 23:15:48 crc 
kubenswrapper[4945]: I0108 23:15:48.113946 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113970 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114010 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114034 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114081 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114110 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114132 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114157 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114177 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 08 23:15:48 crc 
kubenswrapper[4945]: I0108 23:15:48.114193 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114213 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114233 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114253 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114279 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114312 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114337 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114364 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114387 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") 
pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114459 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114476 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114494 4945 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114509 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114523 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114535 4945 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114550 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114564 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114578 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114592 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114606 4945 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110308 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod 
"49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110363 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110370 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110410 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110437 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110533 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110570 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.115636 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110596 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110489 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110620 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110641 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110703 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110747 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110793 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110860 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.115406 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110929 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110968 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.110981 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111035 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111030 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111195 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111205 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111229 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111336 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111374 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111445 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111538 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111752 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111772 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.111956 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112164 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112430 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.112477 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113396 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113443 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113627 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113848 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.113855 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114081 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114099 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114120 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114191 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114252 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114476 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114482 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114537 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114574 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.114671 4945 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114662 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114813 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114887 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114907 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.114960 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.115230 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.116147 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.115565 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.115789 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.116567 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.116592 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.116858 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.116854 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.116928 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.116943 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.117122 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-08 23:15:48.617100587 +0000 UTC m=+18.928259533 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.117207 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.117309 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.117452 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.117506 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.117594 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.117636 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.117871 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.117948 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.117953 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.118336 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.118374 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.118754 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.119740 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.119780 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.120091 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.120287 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.120444 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.120456 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.120635 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.120709 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.120896 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.120923 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.121151 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.121234 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.121263 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.121281 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.121450 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.121442 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.121619 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.121873 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.121856 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.122075 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.122308 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.123022 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.123068 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.123090 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.123115 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.123195 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.123206 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.123284 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.123608 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.123690 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.123740 4945 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.123761 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.124196 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.124433 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.124458 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.124589 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.124623 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.124772 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.124908 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.124948 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.124978 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.125035 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.128332 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.128296 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:15:48.628190895 +0000 UTC m=+18.939349951 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.126408 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.125368 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.125386 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.125845 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.125921 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.125944 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.126276 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.126572 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.126694 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.126883 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.126950 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.126962 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.127017 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.127903 4945 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.129608 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-08 23:15:48.629583349 +0000 UTC m=+18.940742525 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.130082 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.133074 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.133242 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.133301 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.133642 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.133714 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.133918 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.134254 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.134312 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.134398 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.134551 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.134929 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.135125 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.136112 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.136363 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.136698 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.137105 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.137112 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.137281 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.137946 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.139194 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.140924 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.140947 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.140962 4945 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.141046 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-08 23:15:48.641028385 +0000 UTC m=+18.952187331 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.141647 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.141872 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.142395 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.142437 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.142679 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.142701 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.142710 4945 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.142761 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-08 23:15:48.642752427 +0000 UTC m=+18.953911373 (durationBeforeRetry 500ms). 
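
Both projected-volume failures above fail the same way: the kube-api-access-* volume is assembled from the service-account token plus the namespace's kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, and neither object is registered with the kubelet yet. A minimal client-go sketch that checks whether those two ConfigMaps are actually present in the namespace; the check itself is an assumed diagnostic (not something the kubelet runs), and it assumes an ordinary kubeconfig at the default path.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the default kubeconfig (~/.kube/config); assumed to point at this cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "openshift-network-diagnostics" // namespace from the errors above
        for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
            _, err := client.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                fmt.Printf("%s/%s: missing or unreadable: %v\n", ns, name, err)
                continue
            }
            fmt.Printf("%s/%s: present\n", ns, name)
        }
    }

If both ConfigMaps exist server-side, "not registered" usually points at the kubelet's own object cache still warming up after the restart this log records (the monotonic offsets show we are only ~18s in), and the 500ms-backoff retries above resolve it without intervention.
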
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.142806 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.143042 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.143727 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.144248 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.144315 4945 scope.go:117] "RemoveContainer" containerID="9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.144696 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.145020 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.145096 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.145173 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.145452 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.145656 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.145107 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.146259 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.145234 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.145957 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.144672 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.146314 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.146677 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.146713 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.146816 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.146850 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.146895 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.147049 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.147065 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.147269 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.147304 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.147312 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.147683 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.148405 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.148840 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.149315 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.149589 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.150124 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.151378 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.154066 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.159441 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.162376 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.168592 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.171832 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.172072 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.177519 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.182736 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.183396 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.186512 4945 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.186614 4945 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.188524 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.188583 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.188600 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.188624 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.188639 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:48Z","lastTransitionTime":"2026-01-08T23:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.193628 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.203693 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.203922 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.206870 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.206915 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.206929 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.206945 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.206957 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:48Z","lastTransitionTime":"2026-01-08T23:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.212277 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215373 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215406 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215451 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215462 4945 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215471 4945 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215511 4945 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215519 4945 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215528 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215536 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215564 4945 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215576 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215585 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215595 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215620 4945 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215630 4945 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215638 4945 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215647 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215657 4945 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215666 4945 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215675 4945 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215683 4945 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215691 4945 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215700 4945 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215708 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215717 4945 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215729 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215739 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215747 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215754 4945 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215762 4945 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215770 4945 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" 
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215777 4945 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215785 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215793 4945 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215803 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215813 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215823 4945 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215849 4945 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215858 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215883 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215893 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215901 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215901 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215909 4945 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215946 4945 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215958 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215970 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.215981 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216009 4945 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216021 4945 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216031 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216042 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216051 4945 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216056 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216062 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216083 4945 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216092 4945 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216100 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216108 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216116 4945 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216134 4945 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216142 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216150 4945 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216158 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216166 4945 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216175 4945 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216185 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216193 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.215736 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216202 4945 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216310 4945 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216321 4945 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216333 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216495 4945 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216506 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216516 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216528 4945 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216538 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216548 4945 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216558 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216568 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216578 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216591 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216601 4945 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216611 4945 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216621 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216632 4945 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216642 4945 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216652 4945 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216662 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216673 4945 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node 
\"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216684 4945 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216694 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216705 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216716 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216727 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.216981 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217007 4945 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217047 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217058 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217067 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217079 4945 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217092 4945 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217102 4945 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217123 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217134 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217145 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217161 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217172 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217183 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217194 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217204 4945 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217217 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217228 4945 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217238 4945 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217249 4945 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217260 4945 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217269 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217281 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217292 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217304 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217315 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217506 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217519 4945 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217530 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217541 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217551 4945 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217561 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217573 4945 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217582 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217593 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217604 4945 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217616 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217626 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217636 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217647 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217658 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217669 4945 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217679 4945 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217690 4945 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217700 4945 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217715 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217725 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217735 4945 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217746 4945 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217829 4945 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217842 4945 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217854 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217863 4945 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217873 4945 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217884 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217895 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217909 4945 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217921 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217932 4945 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217945 4945 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on 
node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217957 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217972 4945 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.217983 4945 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.218862 4945 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.218875 4945 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.218886 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.218897 4945 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.218908 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.218918 4945 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.218928 4945 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.218938 4945 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.218948 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.218959 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.218970 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219007 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219032 4945 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219052 4945 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219067 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219078 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219089 4945 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219100 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219112 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219123 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219134 4945 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219146 4945 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219156 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" 
Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219166 4945 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219177 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219189 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219200 4945 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219212 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219222 4945 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219050 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219245 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219256 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219272 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.219282 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:48Z","lastTransitionTime":"2026-01-08T23:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.220821 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.228398 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.230843 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.233465 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.233512 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.233525 4945 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.233545 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.233559 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:48Z","lastTransitionTime":"2026-01-08T23:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.237824 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08
T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.242874 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"9
6e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.245683 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.245713 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.245723 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.245737 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.245749 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:48Z","lastTransitionTime":"2026-01-08T23:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.247457 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.254962 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.255077 4945 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.256561 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.256594 4945 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.256603 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.256616 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.256628 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:48Z","lastTransitionTime":"2026-01-08T23:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.272374 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.280880 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.286671 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.358755 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.358802 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.358812 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.358828 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.358839 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:48Z","lastTransitionTime":"2026-01-08T23:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.463118 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.463160 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.463170 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.463188 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.463203 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:48Z","lastTransitionTime":"2026-01-08T23:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.565081 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.565114 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.565122 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.565135 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.565144 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:48Z","lastTransitionTime":"2026-01-08T23:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.623746 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.623847 4945 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.623894 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-08 23:15:49.623881611 +0000 UTC m=+19.935040557 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.668215 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.668264 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.668276 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.668293 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.668305 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:48Z","lastTransitionTime":"2026-01-08T23:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.724675 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.724780 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.724839 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.724878 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.724927 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.724935 4945 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:15:49.724905112 +0000 UTC m=+20.036064098 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.724957 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.724974 4945 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.725016 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-08 23:15:49.725006815 +0000 UTC m=+20.036165761 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.725045 4945 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.725058 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.725086 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.725103 4945 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.725128 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-08 23:15:49.725105957 +0000 UTC m=+20.036264913 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 08 23:15:48 crc kubenswrapper[4945]: E0108 23:15:48.725156 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-08 23:15:49.725140348 +0000 UTC m=+20.036299374 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.770626 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.770658 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.770668 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.770683 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.770695 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:48Z","lastTransitionTime":"2026-01-08T23:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.872830 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.872866 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.872875 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.872891 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.872899 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:48Z","lastTransitionTime":"2026-01-08T23:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.959602 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 04:00:45.672145929 +0000 UTC Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.974727 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.974762 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.974771 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.974784 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:48 crc kubenswrapper[4945]: I0108 23:15:48.974795 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:48Z","lastTransitionTime":"2026-01-08T23:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.057206 4945 csr.go:261] certificate signing request csr-f8whw is approved, waiting to be issued Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.066881 4945 csr.go:257] certificate signing request csr-f8whw is issued Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.076850 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.076889 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.076897 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.076912 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.076921 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:49Z","lastTransitionTime":"2026-01-08T23:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.100460 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.102372 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6"} Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.104073 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731"} Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.104116 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb"} Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.104126 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7f7face5b5eb094feb4aa24682e43674675d55f9451159a2dced963bbffc8c15"} Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.104948 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"08ebed3284bccfbc0a609eec66dca2c5edb92656606467efb3fda66339374452"} Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.105970 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-5cvrd"] 
Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.106284 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8"} Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.106316 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-xlbqw"] Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.106399 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-5cvrd" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.106447 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c4705686eb4cd209ffb9fd22642221f8f29cb05d24b1239f6ab4ee0c965e412a"} Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.106500 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xlbqw" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.109704 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.109995 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.110623 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.110751 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.110828 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.111757 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.112003 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.138125 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.157853 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.175551 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.185544 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.185989 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.186026 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.186043 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.186053 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:49Z","lastTransitionTime":"2026-01-08T23:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.208176 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.228947 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnlj8\" (UniqueName: \"kubernetes.io/projected/5a7c4784-65bf-4adf-b855-a397fc1e794b-kube-api-access-pnlj8\") pod \"node-resolver-xlbqw\" (UID: \"5a7c4784-65bf-4adf-b855-a397fc1e794b\") " pod="openshift-dns/node-resolver-xlbqw" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.228986 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9606db62-4e63-4b79-b069-289e09548144-host\") pod \"node-ca-5cvrd\" (UID: \"9606db62-4e63-4b79-b069-289e09548144\") " pod="openshift-image-registry/node-ca-5cvrd" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.229058 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5a7c4784-65bf-4adf-b855-a397fc1e794b-hosts-file\") pod \"node-resolver-xlbqw\" (UID: \"5a7c4784-65bf-4adf-b855-a397fc1e794b\") " pod="openshift-dns/node-resolver-xlbqw" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.229166 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqcfh\" (UniqueName: \"kubernetes.io/projected/9606db62-4e63-4b79-b069-289e09548144-kube-api-access-lqcfh\") pod \"node-ca-5cvrd\" (UID: \"9606db62-4e63-4b79-b069-289e09548144\") " pod="openshift-image-registry/node-ca-5cvrd" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.229391 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9606db62-4e63-4b79-b069-289e09548144-serviceca\") pod \"node-ca-5cvrd\" (UID: \"9606db62-4e63-4b79-b069-289e09548144\") " pod="openshift-image-registry/node-ca-5cvrd" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.245679 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.263691 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.274780 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.288501 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.288550 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.288559 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.288573 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.288582 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:49Z","lastTransitionTime":"2026-01-08T23:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.290571 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.305265 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.319120 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.330243 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnlj8\" (UniqueName: \"kubernetes.io/projected/5a7c4784-65bf-4adf-b855-a397fc1e794b-kube-api-access-pnlj8\") pod \"node-resolver-xlbqw\" (UID: \"5a7c4784-65bf-4adf-b855-a397fc1e794b\") " pod="openshift-dns/node-resolver-xlbqw" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.330269 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9606db62-4e63-4b79-b069-289e09548144-host\") pod \"node-ca-5cvrd\" (UID: \"9606db62-4e63-4b79-b069-289e09548144\") " pod="openshift-image-registry/node-ca-5cvrd" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.330299 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5a7c4784-65bf-4adf-b855-a397fc1e794b-hosts-file\") pod \"node-resolver-xlbqw\" (UID: \"5a7c4784-65bf-4adf-b855-a397fc1e794b\") " pod="openshift-dns/node-resolver-xlbqw" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.330317 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqcfh\" (UniqueName: \"kubernetes.io/projected/9606db62-4e63-4b79-b069-289e09548144-kube-api-access-lqcfh\") pod \"node-ca-5cvrd\" (UID: \"9606db62-4e63-4b79-b069-289e09548144\") " pod="openshift-image-registry/node-ca-5cvrd" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.330344 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9606db62-4e63-4b79-b069-289e09548144-serviceca\") pod \"node-ca-5cvrd\" (UID: \"9606db62-4e63-4b79-b069-289e09548144\") " pod="openshift-image-registry/node-ca-5cvrd" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.330458 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5a7c4784-65bf-4adf-b855-a397fc1e794b-hosts-file\") pod \"node-resolver-xlbqw\" (UID: \"5a7c4784-65bf-4adf-b855-a397fc1e794b\") " pod="openshift-dns/node-resolver-xlbqw" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.330486 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9606db62-4e63-4b79-b069-289e09548144-host\") pod \"node-ca-5cvrd\" (UID: \"9606db62-4e63-4b79-b069-289e09548144\") " pod="openshift-image-registry/node-ca-5cvrd" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.330485 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.331428 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9606db62-4e63-4b79-b069-289e09548144-serviceca\") pod \"node-ca-5cvrd\" (UID: \"9606db62-4e63-4b79-b069-289e09548144\") " pod="openshift-image-registry/node-ca-5cvrd" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.341561 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.347536 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqcfh\" (UniqueName: \"kubernetes.io/projected/9606db62-4e63-4b79-b069-289e09548144-kube-api-access-lqcfh\") pod \"node-ca-5cvrd\" (UID: \"9606db62-4e63-4b79-b069-289e09548144\") " pod="openshift-image-registry/node-ca-5cvrd" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.353253 4945 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.353984 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnlj8\" (UniqueName: \"kubernetes.io/projected/5a7c4784-65bf-4adf-b855-a397fc1e794b-kube-api-access-pnlj8\") pod \"node-resolver-xlbqw\" (UID: \"5a7c4784-65bf-4adf-b855-a397fc1e794b\") " pod="openshift-dns/node-resolver-xlbqw" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.362006 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.377068 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.390346 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.390387 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.390396 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.390413 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.390422 4945 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:49Z","lastTransitionTime":"2026-01-08T23:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.392708 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.409578 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.418607 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z"
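
Note on the recurring webhook failures: every "Failed to update status for pod" (status_manager.go:875) entry in this window is the same fault repeated once per pod. The kubelet's status patch has to pass the pod.network-node-identity.openshift.io admission webhook served on https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-08T23:15:49Z, so the TLS handshake fails certificate verification before any patch is applied. Below is a minimal, self-contained sketch of the validity-window check that yields the quoted x509 message; the file name, helper, and cert path are hypothetical, and this is not kubelet or webhook code.

// expirycheck.go: a minimal sketch of the certificate validity-window
// check behind the "x509: certificate has expired or is not yet valid"
// failures logged above. Hypothetical illustration only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkValidity parses a PEM certificate and rejects it when the local
// clock falls outside its [NotBefore, NotAfter] window, mirroring what
// Go's TLS verifier does during the webhook handshake.
func checkValidity(pemBytes []byte, now time.Time) error {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if now.After(cert.NotAfter) {
		// The expired branch: the log message quotes exactly these two bounds.
		return fmt.Errorf("x509: certificate has expired or is not yet valid: current time %s is after %s",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	}
	if now.Before(cert.NotBefore) {
		return fmt.Errorf("x509: certificate is not yet valid before %s",
			cert.NotBefore.UTC().Format(time.RFC3339))
	}
	return nil
}

func main() {
	pemBytes, err := os.ReadFile("webhook-serving.crt") // hypothetical path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := checkValidity(pemBytes, time.Now()); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

Because the check compares NotBefore/NotAfter against the local clock, a node whose clock jumps far past its certificates' lifetime (for example, a CRC VM resumed long after it was built) surfaces the problem as webhook and TLS errors rather than anything network-specific; re-issuing or rotating the webhook serving certificate, not connectivity work, is typically what clears it.
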
Has your network provider started?"} Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.632035 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:15:49 crc kubenswrapper[4945]: E0108 23:15:49.632149 4945 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 08 23:15:49 crc kubenswrapper[4945]: E0108 23:15:49.632215 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-08 23:15:51.632197863 +0000 UTC m=+21.943356809 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.655312 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-dsh4d"] Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.655651 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.655730 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-vbm95"] Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.656293 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.658537 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-pd2nq"] Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.658609 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.659306 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.662352 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.663104 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.663155 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.664148 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.664215 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.664220 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.664249 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.664168 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.665604 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.665767 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.665804 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.683376 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.698425 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.709463 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.709503 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.709515 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.709533 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.709545 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:49Z","lastTransitionTime":"2026-01-08T23:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.710416 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.725360 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z"
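The bodies these failed updates carry are strategic merge patches: the "$setElementOrder/conditions" directive pins the ordering of the conditions array, and each entry under "conditions" is merged into the existing server-side list by its "type" key rather than replacing the array wholesale. A reduced Go sketch of that patch shape (the uid is borrowed from the multus-dsh4d patch above; this only builds the JSON body and contacts no apiserver):

package main

import (
	"encoding/json"
	"fmt"
)

// Minimal, assumed shape of the status manager's strategic merge patch,
// cut down from the log: the $setElementOrder key carries the full
// ordered list of condition types, while "conditions" carries only the
// entries that changed, merged by their "type" key.
func main() {
	patch := map[string]any{
		"metadata": map[string]any{"uid": "0fa9b342-4b22-49db-9022-2dd852e7d835"},
		"status": map[string]any{
			"$setElementOrder/conditions": []map[string]string{
				{"type": "PodReadyToStartContainers"},
				{"type": "Initialized"},
				{"type": "Ready"},
				{"type": "ContainersReady"},
				{"type": "PodScheduled"},
			},
			"conditions": []map[string]any{
				{"type": "Ready", "status": "False", "reason": "ContainersNotReady"},
			},
		},
	}
	b, _ := json.Marshal(patch)
	fmt.Println(string(b))
}

Because the webhook rejects the request before the apiserver ever evaluates the patch, the merge semantics never come into play here; the same body is simply resent on the next status sync.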
Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734008 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734124 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-system-cni-dir\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d"
Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734152 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/694a1575-6630-406f-93e7-ef55359bc79c-proxy-tls\") pod \"machine-config-daemon-vbm95\" (UID: \"694a1575-6630-406f-93e7-ef55359bc79c\") " pod="openshift-machine-config-operator/machine-config-daemon-vbm95"
Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734172 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-host-run-k8s-cni-cncf-io\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d"
Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734197 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-host-run-multus-certs\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d"
Jan 08 23:15:49 crc kubenswrapper[4945]: E0108 23:15:49.734221 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:15:51.734194857 +0000 UTC m=+22.045353803 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
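This teardown failure is a registration problem rather than a storage one: after the kubelet restart, the kubevirt.io.hostpath-provisioner plugin has not yet re-registered over the kubelet plugin socket, so the lookup that maps a driver name to its endpoint comes up empty and TearDownAt fails fast. A minimal Go sketch of such a lookup; the csiRegistry type is invented for illustration, kubelet's real bookkeeping lives in its CSI plugin manager:

package main

import (
	"fmt"
	"sync"
)

// csiRegistry is an invented stand-in for the kubelet's view of
// registered CSI plugins: driver name -> plugin socket endpoint.
type csiRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string
}

// client fails the same way TearDownAt does when the plugin has not
// (re-)registered yet after a restart.
func (r *csiRegistry) client(driver string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	if ep, ok := r.drivers[driver]; ok {
		return ep, nil
	}
	return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
}

func main() {
	reg := &csiRegistry{drivers: map[string]string{}} // nothing registered yet
	if _, err := reg.client("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client:", err)
	}
}

Once the plugin re-registers, the same unmount operation is retried after its backoff window and succeeds, so this error is usually transient during node startup.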
nodeName:}" failed. No retries permitted until 2026-01-08 23:15:51.734425252 +0000 UTC m=+22.045584308 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734369 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-host-run-netns\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734481 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-multus-cni-dir\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734502 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0fa9b342-4b22-49db-9022-2dd852e7d835-multus-daemon-config\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734523 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/694a1575-6630-406f-93e7-ef55359bc79c-mcd-auth-proxy-config\") pod \"machine-config-daemon-vbm95\" (UID: \"694a1575-6630-406f-93e7-ef55359bc79c\") " pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734547 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734567 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-host-var-lib-kubelet\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734589 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spvcr\" (UniqueName: \"kubernetes.io/projected/694a1575-6630-406f-93e7-ef55359bc79c-kube-api-access-spvcr\") pod \"machine-config-daemon-vbm95\" (UID: \"694a1575-6630-406f-93e7-ef55359bc79c\") " pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734619 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-host-var-lib-cni-multus\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734638 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-hostroot\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734665 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734687 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0fa9b342-4b22-49db-9022-2dd852e7d835-cni-binary-copy\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734705 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88vq4\" (UniqueName: \"kubernetes.io/projected/0fa9b342-4b22-49db-9022-2dd852e7d835-kube-api-access-88vq4\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734723 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-multus-conf-dir\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734747 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-etc-kubernetes\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.734774 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/694a1575-6630-406f-93e7-ef55359bc79c-rootfs\") pod \"machine-config-daemon-vbm95\" (UID: \"694a1575-6630-406f-93e7-ef55359bc79c\") " pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:15:49 crc kubenswrapper[4945]: E0108 23:15:49.734862 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 08 23:15:49 crc kubenswrapper[4945]: E0108 23:15:49.734876 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 08 23:15:49 crc 
kubenswrapper[4945]: E0108 23:15:49.734886 4945 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:49 crc kubenswrapper[4945]: E0108 23:15:49.734864 4945 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 08 23:15:49 crc kubenswrapper[4945]: E0108 23:15:49.734961 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-08 23:15:51.734951795 +0000 UTC m=+22.046110741 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:49 crc kubenswrapper[4945]: E0108 23:15:49.734989 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-08 23:15:51.734983476 +0000 UTC m=+22.046142422 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.737417 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.748696 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.760815 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.772268 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.785405 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.800880 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.811488 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.811513 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.811522 4945 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.811534 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.811543 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:49Z","lastTransitionTime":"2026-01-08T23:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.814351 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 
23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.834727 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835224 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-os-release\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835268 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-host-run-netns\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835294 4945 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835310 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-multus-cni-dir\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835325 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0fa9b342-4b22-49db-9022-2dd852e7d835-multus-daemon-config\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835334 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-os-release\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835340 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835360 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/694a1575-6630-406f-93e7-ef55359bc79c-mcd-auth-proxy-config\") pod \"machine-config-daemon-vbm95\" (UID: \"694a1575-6630-406f-93e7-ef55359bc79c\") " pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835383 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-host-var-lib-kubelet\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835393 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-multus-cni-dir\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835397 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spvcr\" (UniqueName: \"kubernetes.io/projected/694a1575-6630-406f-93e7-ef55359bc79c-kube-api-access-spvcr\") pod \"machine-config-daemon-vbm95\" (UID: \"694a1575-6630-406f-93e7-ef55359bc79c\") " pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835429 4945 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-os-release\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835450 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-hostroot\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835478 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-host-var-lib-cni-multus\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835503 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0fa9b342-4b22-49db-9022-2dd852e7d835-cni-binary-copy\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835518 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88vq4\" (UniqueName: \"kubernetes.io/projected/0fa9b342-4b22-49db-9022-2dd852e7d835-kube-api-access-88vq4\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835532 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-etc-kubernetes\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835548 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-system-cni-dir\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835562 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-cnibin\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835580 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-multus-conf-dir\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835600 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/694a1575-6630-406f-93e7-ef55359bc79c-rootfs\") pod \"machine-config-daemon-vbm95\" (UID: \"694a1575-6630-406f-93e7-ef55359bc79c\") " pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835620 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-system-cni-dir\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835635 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/694a1575-6630-406f-93e7-ef55359bc79c-proxy-tls\") pod \"machine-config-daemon-vbm95\" (UID: \"694a1575-6630-406f-93e7-ef55359bc79c\") " pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835652 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc5zg\" (UniqueName: \"kubernetes.io/projected/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-kube-api-access-bc5zg\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835671 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-host-run-k8s-cni-cncf-io\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835688 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-host-run-multus-certs\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835705 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-cni-binary-copy\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835728 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-cnibin\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835743 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-multus-socket-dir-parent\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835759 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-host-var-lib-cni-bin\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835812 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-host-var-lib-cni-bin\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835834 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-etc-kubernetes\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835862 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-multus-conf-dir\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835888 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/694a1575-6630-406f-93e7-ef55359bc79c-rootfs\") pod \"machine-config-daemon-vbm95\" (UID: \"694a1575-6630-406f-93e7-ef55359bc79c\") " pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835920 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-system-cni-dir\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.835980 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-host-run-netns\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.836056 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-hostroot\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.836089 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-host-var-lib-cni-multus\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.836155 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0fa9b342-4b22-49db-9022-2dd852e7d835-multus-daemon-config\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.836652 4945 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/694a1575-6630-406f-93e7-ef55359bc79c-mcd-auth-proxy-config\") pod \"machine-config-daemon-vbm95\" (UID: \"694a1575-6630-406f-93e7-ef55359bc79c\") " pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.836660 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0fa9b342-4b22-49db-9022-2dd852e7d835-cni-binary-copy\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.836686 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-host-var-lib-kubelet\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.836705 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-host-run-multus-certs\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.836736 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-host-run-k8s-cni-cncf-io\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.836743 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-multus-socket-dir-parent\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.836766 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0fa9b342-4b22-49db-9022-2dd852e7d835-cnibin\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.840481 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/694a1575-6630-406f-93e7-ef55359bc79c-proxy-tls\") pod \"machine-config-daemon-vbm95\" (UID: \"694a1575-6630-406f-93e7-ef55359bc79c\") " pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.852714 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.853029 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spvcr\" (UniqueName: \"kubernetes.io/projected/694a1575-6630-406f-93e7-ef55359bc79c-kube-api-access-spvcr\") pod \"machine-config-daemon-vbm95\" 
(UID: \"694a1575-6630-406f-93e7-ef55359bc79c\") " pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.858890 4945 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859141 4945 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859178 4945 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859182 4945 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859153 4945 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859195 4945 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859230 4945 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859257 4945 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859393 4945 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859422 4945 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859446 4945 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very 
short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859467 4945 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859486 4945 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859506 4945 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859640 4945 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.859621 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/pods/multus-additional-cni-plugins-pd2nq/status\": read tcp 38.102.83.74:40272->38.102.83.74:6443: use of closed network connection" Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859707 4945 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859146 4945 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": 
Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859749 4945 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859230 4945 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.859779 4945 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 08 23:15:49 crc kubenswrapper[4945]: E0108 23:15:49.859924 4945 projected.go:194] Error preparing data for projected volume kube-api-access-88vq4 for pod openshift-multus/multus-dsh4d: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/serviceaccounts/default/token": read tcp 38.102.83.74:40272->38.102.83.74:6443: use of closed network connection Jan 08 23:15:49 crc kubenswrapper[4945]: E0108 23:15:49.860020 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0fa9b342-4b22-49db-9022-2dd852e7d835-kube-api-access-88vq4 podName:0fa9b342-4b22-49db-9022-2dd852e7d835 nodeName:}" failed. No retries permitted until 2026-01-08 23:15:50.360003876 +0000 UTC m=+20.671162822 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-88vq4" (UniqueName: "kubernetes.io/projected/0fa9b342-4b22-49db-9022-2dd852e7d835-kube-api-access-88vq4") pod "multus-dsh4d" (UID: "0fa9b342-4b22-49db-9022-2dd852e7d835") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/serviceaccounts/default/token": read tcp 38.102.83.74:40272->38.102.83.74:6443: use of closed network connection Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.889951 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac11
7eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.914592 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.919669 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.919711 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.919721 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.919745 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.919756 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:49Z","lastTransitionTime":"2026-01-08T23:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.936417 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.936454 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.936474 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-os-release\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.936514 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-system-cni-dir\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.936530 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-cnibin\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.936548 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc5zg\" (UniqueName: \"kubernetes.io/projected/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-kube-api-access-bc5zg\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.936563 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-cni-binary-copy\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.936719 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-os-release\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.937256 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-cni-binary-copy\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.937317 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-cnibin\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.937389 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-system-cni-dir\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.937516 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.937615 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.937830 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.960020 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 20:49:17.430497031 +0000 UTC Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.960090 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.966801 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc5zg\" (UniqueName: \"kubernetes.io/projected/8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042-kube-api-access-bc5zg\") pod \"multus-additional-cni-plugins-pd2nq\" (UID: \"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\") " pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.978389 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.984306 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" Jan 08 23:15:49 crc kubenswrapper[4945]: I0108 23:15:49.990466 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:49Z is after 2025-08-24T17:21:41Z"
Jan 08 23:15:49 crc kubenswrapper[4945]: W0108 23:15:49.996533 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e4ebcfd_e9bd_49a7_a0a0_edefafbc2042.slice/crio-3b51fcd121d42e67d0b9318d203ad78b0b59a00b2cdb80a7eb51cd2afd9ef0c1 WatchSource:0}: Error finding container 3b51fcd121d42e67d0b9318d203ad78b0b59a00b2cdb80a7eb51cd2afd9ef0c1: Status 404 returned error can't find the container with id 3b51fcd121d42e67d0b9318d203ad78b0b59a00b2cdb80a7eb51cd2afd9ef0c1
Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.000080 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.000098 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.000174 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 08 23:15:50 crc kubenswrapper[4945]: E0108 23:15:50.000272 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 08 23:15:50 crc kubenswrapper[4945]: E0108 23:15:50.000361 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 08 23:15:50 crc kubenswrapper[4945]: E0108 23:15:50.000491 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.003480 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.004058 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.004731 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.005326 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.005905 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.006312 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.007182 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.007791 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.008714 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.009347 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.011066 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.011717 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.012564 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.013053 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.013940 4945 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.014504 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.015440 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.016080 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.016521 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.017715 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.018162 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.018921 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.019449 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.020675 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.021572 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.022323 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.022356 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.022379 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.022390 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.022406 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.022416 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:50Z","lastTransitionTime":"2026-01-08T23:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.023280 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.023890 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.024964 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.025437 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.025966 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.029460 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.029741 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.029973 4945 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.030088 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.031799 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 08 
23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.032825 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.033298 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.035094 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.036066 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.036595 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.037673 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.038313 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.039243 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.040241 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.040427 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.041316 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.041910 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.042869 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.043851 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.044789 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.045522 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" 
path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.046973 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.047764 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.048920 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.050659 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.051381 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.051845 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.052281 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gcbcl"] Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.058353 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.059763 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.061850 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.061932 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.062353 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.062598 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.062716 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.062913 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.063717 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.069103 4945 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-08 23:10:49 +0000 UTC, rotation deadline is 2026-09-23 08:22:50.095745574 +0000 UTC Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.069138 4945 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6177h7m0.026609045s for next certificate rotation Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.073215 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.093544 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.103566 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.109017 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" 
event={"ID":"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042","Type":"ContainerStarted","Data":"3b51fcd121d42e67d0b9318d203ad78b0b59a00b2cdb80a7eb51cd2afd9ef0c1"} Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.109934 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xlbqw" event={"ID":"5a7c4784-65bf-4adf-b855-a397fc1e794b","Type":"ContainerStarted","Data":"546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46"} Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.109962 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xlbqw" event={"ID":"5a7c4784-65bf-4adf-b855-a397fc1e794b","Type":"ContainerStarted","Data":"9ae6a10ffe530fa5b571e8a357a839ecd0db94ac907b084c5a7d8706f731b9dd"} Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.112777 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc"} Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.112811 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"fb3620221fa4b39aa98ebc277adcd6560bdbfb3acf925db277d385dffc705517"} Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.114146 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5cvrd" event={"ID":"9606db62-4e63-4b79-b069-289e09548144","Type":"ContainerStarted","Data":"dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52"} Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.114177 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5cvrd" event={"ID":"9606db62-4e63-4b79-b069-289e09548144","Type":"ContainerStarted","Data":"5dcc8e89580709b0cef65522d244c7c9a18c5b189e6da7404b6d0510fe4cbac2"} Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.114718 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.115931 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.125642 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
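Every "Failed to update status for pod" record above (and below) fails for the same root cause: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-08, so kubelet's status manager cannot patch any pod status through the admission chain. A minimal Go sketch of how one might confirm the expiry from the node itself, assuming the webhook port is reachable locally (host and port are taken from the log; the program is illustrative and not part of OpenShift or kubelet):

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// InsecureSkipVerify lets the handshake complete even though the
	// serving certificate is expired; we only want to inspect it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore)
	fmt.Printf("notAfter:  %s\n", cert.NotAfter)
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}

If the log's diagnosis is right, notAfter should print 2025-08-24T17:21:41Z and expired should print true; verification is disabled here strictly for inspection, never for fixing trust.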
Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.125679 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.125689 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.125702 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.125711 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:50Z","lastTransitionTime":"2026-01-08T23:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.129303 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.137835 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-cni-bin\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.137868 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-systemd-units\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 
23:15:50.137883 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-ovn\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.137898 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-node-log\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.137916 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-openvswitch\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.137931 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e12d0822-44c5-4bf0-a785-cf478c66210f-ovn-node-metrics-cert\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.137964 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.137984 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-env-overrides\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.138001 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-slash\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.138014 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-run-netns\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.138042 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-cni-netd\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.138105 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-ovnkube-config\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.138145 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-log-socket\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.138213 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-systemd\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.138257 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-kubelet\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.138273 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-run-ovn-kubernetes\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.138298 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-var-lib-openvswitch\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.138318 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-ovnkube-script-lib\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.138345 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-etc-openvswitch\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.138369 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx7px\" (UniqueName: 
\"kubernetes.io/projected/e12d0822-44c5-4bf0-a785-cf478c66210f-kube-api-access-vx7px\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.144353 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.156771 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.169904 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is 
after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.181895 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.195761 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.206834 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.217511 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.228243 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.228630 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.228664 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.228674 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.228690 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.228701 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:50Z","lastTransitionTime":"2026-01-08T23:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.238739 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-log-socket\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.238825 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-systemd\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.238848 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-kubelet\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.238842 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-log-socket\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.238894 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-run-ovn-kubernetes\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.238929 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-systemd\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 
crc kubenswrapper[4945]: I0108 23:15:50.238948 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-kubelet\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.238862 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-run-ovn-kubernetes\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239317 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-var-lib-openvswitch\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239360 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-etc-openvswitch\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239383 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-ovnkube-script-lib\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239418 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx7px\" (UniqueName: \"kubernetes.io/projected/e12d0822-44c5-4bf0-a785-cf478c66210f-kube-api-access-vx7px\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239451 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-systemd-units\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239471 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-ovn\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239494 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-node-log\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239515 4945 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-cni-bin\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239533 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-openvswitch\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239557 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e12d0822-44c5-4bf0-a785-cf478c66210f-ovn-node-metrics-cert\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239593 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239617 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-env-overrides\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239663 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-slash\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239665 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-cni-bin\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239686 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239686 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-run-netns\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239727 4945 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-node-log\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239737 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-systemd-units\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239752 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-ovn\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239495 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-etc-openvswitch\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239739 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-cni-netd\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239775 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-cni-netd\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239539 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-var-lib-openvswitch\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.239995 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-run-netns\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.240060 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-slash\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.240113 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-openvswitch\") pod \"ovnkube-node-gcbcl\" (UID: 
\"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.240175 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-ovnkube-config\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.240453 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-ovnkube-script-lib\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.241382 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-env-overrides\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.241460 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-ovnkube-config\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.248364 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e12d0822-44c5-4bf0-a785-cf478c66210f-ovn-node-metrics-cert\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.249875 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd 
nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cn
i-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.261374 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx7px\" (UniqueName: \"kubernetes.io/projected/e12d0822-44c5-4bf0-a785-cf478c66210f-kube-api-access-vx7px\") pod \"ovnkube-node-gcbcl\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" 
Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.267393 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.284543 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.296681 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.306269 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.331484 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.331753 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.331764 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.331780 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.331794 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:50Z","lastTransitionTime":"2026-01-08T23:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.335420 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.372598 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.379742 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: W0108 23:15:50.384295 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode12d0822_44c5_4bf0_a785_cf478c66210f.slice/crio-cddfdfb0a3634c8df82a1449f3c18bb29e25e559fc3ea4f736a830231b8deadd WatchSource:0}: Error finding container cddfdfb0a3634c8df82a1449f3c18bb29e25e559fc3ea4f736a830231b8deadd: Status 404 returned error can't find the container with id cddfdfb0a3634c8df82a1449f3c18bb29e25e559fc3ea4f736a830231b8deadd Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.417264 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.434400 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.434424 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.434432 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.434444 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.434452 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:50Z","lastTransitionTime":"2026-01-08T23:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.441897 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88vq4\" (UniqueName: \"kubernetes.io/projected/0fa9b342-4b22-49db-9022-2dd852e7d835-kube-api-access-88vq4\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.462342 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.485324 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88vq4\" (UniqueName: \"kubernetes.io/projected/0fa9b342-4b22-49db-9022-2dd852e7d835-kube-api-access-88vq4\") pod \"multus-dsh4d\" (UID: \"0fa9b342-4b22-49db-9022-2dd852e7d835\") " pod="openshift-multus/multus-dsh4d" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.523902 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.537320 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.537353 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.537362 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.537375 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.537384 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:50Z","lastTransitionTime":"2026-01-08T23:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.571387 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dsh4d" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.588863 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.610944 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.635595 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.639264 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.639300 4945 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.639309 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.639323 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.639334 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:50Z","lastTransitionTime":"2026-01-08T23:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.673584 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.690428 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.706164 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.726090 4945 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.741633 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.741663 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.741672 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.741688 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.741700 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:50Z","lastTransitionTime":"2026-01-08T23:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.745804 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.766064 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.820554 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.832790 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.844551 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.844585 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.844595 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.844610 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.844619 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:50Z","lastTransitionTime":"2026-01-08T23:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.946456 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.946498 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.946510 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.946527 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.946538 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:50Z","lastTransitionTime":"2026-01-08T23:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:50 crc kubenswrapper[4945]: I0108 23:15:50.960276 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 17:41:08.284767162 +0000 UTC Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.019351 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.048628 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.048659 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.048667 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.048681 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.048690 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:51Z","lastTransitionTime":"2026-01-08T23:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.089448 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.117959 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dsh4d" event={"ID":"0fa9b342-4b22-49db-9022-2dd852e7d835","Type":"ContainerStarted","Data":"39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11"} Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.118014 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dsh4d" event={"ID":"0fa9b342-4b22-49db-9022-2dd852e7d835","Type":"ContainerStarted","Data":"1ca33439e4b8be5db5a1796c359d4adce2a4575c34c09bad1b51daf107692700"} Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.119633 4945 generic.go:334] "Generic (PLEG): container finished" podID="8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042" containerID="a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74" exitCode=0 Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.119675 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" event={"ID":"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042","Type":"ContainerDied","Data":"a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74"} Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.121471 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf"} Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.123094 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69"} Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.123338 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.124928 4945 generic.go:334] "Generic (PLEG): container finished" podID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerID="4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258" exitCode=0 Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.125017 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerDied","Data":"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258"} Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.125039 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerStarted","Data":"cddfdfb0a3634c8df82a1449f3c18bb29e25e559fc3ea4f736a830231b8deadd"} Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.133458 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.151625 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.153705 4945 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\
\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.154960 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.154984 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.154994 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.155018 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.155026 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:51Z","lastTransitionTime":"2026-01-08T23:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.158713 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.169966 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.184820 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.195625 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.197645 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.214162 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.228590 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.256796 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.257798 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.257830 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.257840 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.257855 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.257866 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:51Z","lastTransitionTime":"2026-01-08T23:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.294191 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.326167 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.355800 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.360581 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.360624 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.360635 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.360651 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.360661 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:51Z","lastTransitionTime":"2026-01-08T23:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.365574 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.385646 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.405673 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.445931 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.462372 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.462407 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.462415 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.462428 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.462440 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:51Z","lastTransitionTime":"2026-01-08T23:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.472692 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.487101 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.535187 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z"
Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.564405 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.564448 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.564457 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.564474 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.564486 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:51Z","lastTransitionTime":"2026-01-08T23:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.573099 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.615389 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.653436 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.657955 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:15:51 crc kubenswrapper[4945]: E0108 23:15:51.658080 4945 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 08 23:15:51 crc kubenswrapper[4945]: E0108 23:15:51.658141 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-08 23:15:55.658124749 +0000 UTC m=+25.969283695 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.666181 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.666209 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.666216 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.666231 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.666240 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:51Z","lastTransitionTime":"2026-01-08T23:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.695401 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.738049 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z 
is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.758504 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.758605 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.758630 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:15:51 crc kubenswrapper[4945]: E0108 23:15:51.758694 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:15:55.758668828 +0000 UTC m=+26.069827764 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:15:51 crc kubenswrapper[4945]: E0108 23:15:51.758722 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 08 23:15:51 crc kubenswrapper[4945]: E0108 23:15:51.758738 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 08 23:15:51 crc kubenswrapper[4945]: E0108 23:15:51.758749 4945 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.758778 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:15:51 crc kubenswrapper[4945]: E0108 23:15:51.758792 4945 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-08 23:15:55.758779611 +0000 UTC m=+26.069938557 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:51 crc kubenswrapper[4945]: E0108 23:15:51.758789 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 08 23:15:51 crc kubenswrapper[4945]: E0108 23:15:51.758817 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 08 23:15:51 crc kubenswrapper[4945]: E0108 23:15:51.758828 4945 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:51 crc kubenswrapper[4945]: E0108 23:15:51.758879 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-08 23:15:55.758864203 +0000 UTC m=+26.070023239 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:51 crc kubenswrapper[4945]: E0108 23:15:51.758907 4945 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 08 23:15:51 crc kubenswrapper[4945]: E0108 23:15:51.758951 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-08 23:15:55.758943984 +0000 UTC m=+26.070102930 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.768230 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.768268 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.768287 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.768305 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.768316 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:51Z","lastTransitionTime":"2026-01-08T23:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.773982 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mou
ntPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.812509 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.851975 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.870611 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:51 
crc kubenswrapper[4945]: I0108 23:15:51.870643 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.870652 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.870667 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.870677 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:51Z","lastTransitionTime":"2026-01-08T23:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.894114 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.931599 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.960909 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 10:13:48.205576464 +0000 UTC Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.973241 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.973287 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.973299 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.973315 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.973327 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:51Z","lastTransitionTime":"2026-01-08T23:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.973373 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:51Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:51 crc kubenswrapper[4945]: I0108 23:15:51.999899 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:51.999960 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:51.999920 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:15:52 crc kubenswrapper[4945]: E0108 23:15:52.000047 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:15:52 crc kubenswrapper[4945]: E0108 23:15:52.000130 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:15:52 crc kubenswrapper[4945]: E0108 23:15:52.000239 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.017677 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.059476 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.076220 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.076265 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.076279 4945 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.076295 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.076308 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:52Z","lastTransitionTime":"2026-01-08T23:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.095856 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc 
kubenswrapper[4945]: I0108 23:15:52.131125 4945 generic.go:334] "Generic (PLEG): container finished" podID="8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042" containerID="f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff" exitCode=0 Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.131169 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" event={"ID":"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042","Type":"ContainerDied","Data":"f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff"} Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.134849 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerStarted","Data":"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636"} Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.134894 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerStarted","Data":"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436"} Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.134910 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerStarted","Data":"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85"} Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.134924 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerStarted","Data":"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd"} Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.134937 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerStarted","Data":"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2"} Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.134949 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerStarted","Data":"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4"} Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.140698 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.174956 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.179449 4945 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.179489 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.179498 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.179514 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.179525 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:52Z","lastTransitionTime":"2026-01-08T23:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.214067 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.259132 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z 
is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.282491 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.282529 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.282541 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.282554 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.282563 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:52Z","lastTransitionTime":"2026-01-08T23:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.293346 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.333114 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.373231 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.384869 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.384915 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.384929 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.384947 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.384959 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:52Z","lastTransitionTime":"2026-01-08T23:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.411294 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.459918 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.487374 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.487409 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.487419 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.487433 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.487442 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:52Z","lastTransitionTime":"2026-01-08T23:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.502305 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.533937 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.576007 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.589090 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.589292 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.589316 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.589331 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.589343 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:52Z","lastTransitionTime":"2026-01-08T23:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.615858 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.646314 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.658022 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 
1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.659420 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.675775 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.694937 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.694977 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.694987 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:52 crc 
kubenswrapper[4945]: I0108 23:15:52.695003 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.695055 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:52Z","lastTransitionTime":"2026-01-08T23:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.714995 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.756228 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.797283 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.797317 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.797325 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.797337 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.797346 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:52Z","lastTransitionTime":"2026-01-08T23:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.800843 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z 
is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.834226 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.874094 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.899072 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.899115 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.899127 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.899143 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.899155 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:52Z","lastTransitionTime":"2026-01-08T23:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.914613 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.953084 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.962347 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:29:47.61226331 +0000 UTC Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.963113 4945 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 67h13m54.649155223s for next certificate rotation Jan 08 23:15:52 crc kubenswrapper[4945]: I0108 23:15:52.994782 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:52Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.002207 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.002391 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.002514 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.002622 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.002711 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:53Z","lastTransitionTime":"2026-01-08T23:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.034888 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.074405 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.105621 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.105681 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.105698 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.105718 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.105734 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:53Z","lastTransitionTime":"2026-01-08T23:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.113863 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.141907 4945 generic.go:334] "Generic (PLEG): container finished" podID="8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042" containerID="d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb" exitCode=0 Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.142172 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" event={"ID":"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042","Type":"ContainerDied","Data":"d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb"} Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.155806 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.195682 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.207630 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.207685 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.207699 4945 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.207722 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.207736 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:53Z","lastTransitionTime":"2026-01-08T23:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.238956 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 
23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.273129 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.311220 4945 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.311263 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.311272 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.311293 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.311303 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:53Z","lastTransitionTime":"2026-01-08T23:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.317401 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.360051 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.395037 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.413752 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.413794 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.413802 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.413816 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.413827 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:53Z","lastTransitionTime":"2026-01-08T23:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.436618 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.475482 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.513394 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.516271 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.516336 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.516351 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.516374 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.516388 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:53Z","lastTransitionTime":"2026-01-08T23:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.556945 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.594196 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.619394 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.619457 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.619472 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.619494 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.619508 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:53Z","lastTransitionTime":"2026-01-08T23:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.635234 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.675733 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.721926 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.721975 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.721984 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.722049 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.722066 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:53Z","lastTransitionTime":"2026-01-08T23:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.728640 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.756277 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.794757 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.824443 4945 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.824757 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.824861 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.824951 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.825058 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:53Z","lastTransitionTime":"2026-01-08T23:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.836801 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.878954 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.922102 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z 
is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.926842 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.926874 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.926885 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.926900 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.926909 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:53Z","lastTransitionTime":"2026-01-08T23:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.953822 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:53Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.999550 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:15:53 crc kubenswrapper[4945]: I0108 23:15:53.999592 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:53.999544 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:15:54 crc kubenswrapper[4945]: E0108 23:15:53.999726 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:15:54 crc kubenswrapper[4945]: E0108 23:15:53.999841 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:15:54 crc kubenswrapper[4945]: E0108 23:15:53.999985 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.029046 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.029076 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.029085 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.029096 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.029105 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:54Z","lastTransitionTime":"2026-01-08T23:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.132183 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.132221 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.132229 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.132242 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.132252 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:54Z","lastTransitionTime":"2026-01-08T23:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.150121 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerStarted","Data":"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9"} Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.152778 4945 generic.go:334] "Generic (PLEG): container finished" podID="8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042" containerID="914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc" exitCode=0 Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.152815 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" event={"ID":"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042","Type":"ContainerDied","Data":"914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc"} Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.176905 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:54Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.189785 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:54Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.204847 4945 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa406
0061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:54Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.228030 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43
d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:54Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.234974 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.235041 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.235055 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.235073 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.235085 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:54Z","lastTransitionTime":"2026-01-08T23:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.250312 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14
774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:54Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.262948 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:54Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.275774 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:54Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.289642 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:54Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.316476 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:54Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.337653 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.337696 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.337705 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.337720 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.337732 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:54Z","lastTransitionTime":"2026-01-08T23:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.354660 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:54Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.396367 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:54Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.434854 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:54Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.440191 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.440239 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.440253 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.440273 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.440285 4945 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:54Z","lastTransitionTime":"2026-01-08T23:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.475840 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:54Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.517838 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:54Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.543263 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.543325 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.543338 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.543369 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.543387 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:54Z","lastTransitionTime":"2026-01-08T23:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.555204 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:54Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.646334 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 
23:15:54.646367 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.646375 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.646388 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.646397 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:54Z","lastTransitionTime":"2026-01-08T23:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.749031 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.749273 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.749350 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.749380 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.749394 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:54Z","lastTransitionTime":"2026-01-08T23:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.851924 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.851959 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.851968 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.851986 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.852002 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:54Z","lastTransitionTime":"2026-01-08T23:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.955174 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.955226 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.955239 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.955257 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:54 crc kubenswrapper[4945]: I0108 23:15:54.955269 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:54Z","lastTransitionTime":"2026-01-08T23:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.063260 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.063363 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.063378 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.063399 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.063416 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:55Z","lastTransitionTime":"2026-01-08T23:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.160605 4945 generic.go:334] "Generic (PLEG): container finished" podID="8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042" containerID="880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740" exitCode=0 Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.160669 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" event={"ID":"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042","Type":"ContainerDied","Data":"880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740"} Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.166728 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.166788 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.166811 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.166844 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.166866 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:55Z","lastTransitionTime":"2026-01-08T23:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.180417 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:55Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.195308 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:55Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.214102 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:55Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.237395 4945 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb9
59f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\
\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:55Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.266337 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43
d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:55Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.269305 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.269383 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.269406 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.269437 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.269459 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:55Z","lastTransitionTime":"2026-01-08T23:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.283522 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:55Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.297287 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:55Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.308899 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:55Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.325191 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:55Z 
is after 2025-08-24T17:21:41Z" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.335461 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:55Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.345969 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:55Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.355197 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:55Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.365878 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:55Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.371646 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.371676 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.371688 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.371706 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.371718 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:55Z","lastTransitionTime":"2026-01-08T23:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.378702 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:55Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.388893 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:55Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.474451 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.474503 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.474513 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.474531 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.474549 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:55Z","lastTransitionTime":"2026-01-08T23:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.576614 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.576644 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.576653 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.576666 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.576676 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:55Z","lastTransitionTime":"2026-01-08T23:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.679223 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.679295 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.679320 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.679352 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.679376 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:55Z","lastTransitionTime":"2026-01-08T23:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.697811 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:15:55 crc kubenswrapper[4945]: E0108 23:15:55.697960 4945 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 08 23:15:55 crc kubenswrapper[4945]: E0108 23:15:55.698171 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-08 23:16:03.698059402 +0000 UTC m=+34.009218388 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.782172 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.782231 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.782243 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.782261 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.782273 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:55Z","lastTransitionTime":"2026-01-08T23:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.799028 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.799177 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:15:55 crc kubenswrapper[4945]: E0108 23:15:55.799264 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:16:03.799235297 +0000 UTC m=+34.110394263 (durationBeforeRetry 8s). 
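
The "No retries permitted until ... (durationBeforeRetry 8s)" lines are the volume manager's per-operation exponential backoff: after each consecutive failure the wait roughly doubles from a small initial delay up to a cap. A toy Go sketch of that doubling schedule follows; the 500ms initial delay and ~2m cap are assumed upstream defaults rather than values read from this cluster, and under those assumptions the observed 8s would be the fifth consecutive failure:

    // backoff.go — illustrative doubling schedule behind "durationBeforeRetry";
    // constants are assumed defaults, not taken from this kubelet's build.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initialDelay = 500 * time.Millisecond        // assumed initial backoff
            maxDelay     = 2*time.Minute + 2*time.Second // assumed cap
        )
        delay := initialDelay
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("failure %d: durationBeforeRetry %s\n", attempt, delay)
            delay *= 2 // double after every consecutive failure
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
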
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:15:55 crc kubenswrapper[4945]: E0108 23:15:55.799324 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 08 23:15:55 crc kubenswrapper[4945]: E0108 23:15:55.799350 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.799361 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:15:55 crc kubenswrapper[4945]: E0108 23:15:55.799369 4945 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:55 crc kubenswrapper[4945]: E0108 23:15:55.799440 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 08 23:15:55 crc kubenswrapper[4945]: E0108 23:15:55.799466 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 08 23:15:55 crc kubenswrapper[4945]: E0108 23:15:55.799481 4945 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:55 crc kubenswrapper[4945]: E0108 23:15:55.799511 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-08 23:16:03.799492063 +0000 UTC m=+34.110651019 (durationBeforeRetry 8s). 
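
The TearDown failure above is different in kind from the certificate errors: "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" means the hostpath provisioner's node plugin has not (re)registered with the kubelet since this restart, so there is no CSI client to hand the unmount to. The shape of that failure is a keyed registry miss, sketched below as a toy; the type and method names here are invented for illustration and are not the kubelet's:

    // csilookup.go — toy illustration of the registry miss behind
    // "driver name ... not found in the list of registered CSI drivers".
    package main

    import "fmt"

    // csiDrivers stands in for the kubelet's registered-driver set;
    // the real bookkeeping is more involved.
    type csiDrivers map[string]struct{}

    func (d csiDrivers) newClient(name string) error {
        if _, ok := d[name]; !ok {
            return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return nil
    }

    func main() {
        registered := csiDrivers{} // provisioner has not re-registered yet
        if err := registered.newClient("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println("UnmountVolume.TearDown failed:", err)
        }
    }
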
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:55 crc kubenswrapper[4945]: E0108 23:15:55.799546 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-08 23:16:03.799524314 +0000 UTC m=+34.110683300 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.799425 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:15:55 crc kubenswrapper[4945]: E0108 23:15:55.799638 4945 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 08 23:15:55 crc kubenswrapper[4945]: E0108 23:15:55.799713 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-08 23:16:03.799698838 +0000 UTC m=+34.110857894 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.884357 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.884407 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.884424 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.884447 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.884463 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:55Z","lastTransitionTime":"2026-01-08T23:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.987546 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.987617 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.987643 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.987673 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:55 crc kubenswrapper[4945]: I0108 23:15:55.987694 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:55Z","lastTransitionTime":"2026-01-08T23:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.000329 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.000351 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.000573 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:15:56 crc kubenswrapper[4945]: E0108 23:15:56.000487 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:15:56 crc kubenswrapper[4945]: E0108 23:15:56.000741 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:15:56 crc kubenswrapper[4945]: E0108 23:15:56.000843 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.090343 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.090393 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.090406 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.090426 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.090441 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:56Z","lastTransitionTime":"2026-01-08T23:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
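
The repeated NotReady heartbeats and the "Error syncing pod, skipping" entries all trace back to one readiness predicate: the container runtime reports NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/ (ovnkube-node is still initializing, per the PLEG events that follow). The check amounts to scanning that directory for config files; a rough Go stand-in, with the accepted extensions assumed to match libcni's usual .conf/.conflist/.json set:

    // cnicheck.go — rough stand-in for the readiness scan behind
    // "no CNI configuration file in /etc/kubernetes/cni/net.d/".
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var configs []string
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // assumed accepted extensions
                configs = append(configs, e.Name())
            }
        }
        if len(configs) == 0 {
            fmt.Println("no CNI configuration file found; network plugin not ready")
            return
        }
        fmt.Println("CNI configs:", configs)
    }
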
Has your network provider started?"} Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.171676 4945 generic.go:334] "Generic (PLEG): container finished" podID="8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042" containerID="960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17" exitCode=0 Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.171761 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" event={"ID":"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042","Type":"ContainerDied","Data":"960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17"} Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.178739 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerStarted","Data":"eacb79c0403805c83529ee4438eff4f4df5455d203f9c37d0f6911155ffffe0c"} Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.179090 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.192742 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.192909 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.192944 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.192958 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.194103 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.194147 4945 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:56Z","lastTransitionTime":"2026-01-08T23:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.204193 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.205512 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.226616 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.251581 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43
d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.281856 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z 
is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.294702 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.296734 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.296780 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.296793 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.296813 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.296827 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:56Z","lastTransitionTime":"2026-01-08T23:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.307763 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.318698 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.333713 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.345079 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.358905 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.372817 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.386530 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.399536 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.399579 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.399592 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.399610 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.399610 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.399627 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:56Z","lastTransitionTime":"2026-01-08T23:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.412319 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.425492 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.442297 4945 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.459843 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43
d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.474957 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.486221 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.500796 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.501855 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.501885 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.501894 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.501909 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.501919 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:56Z","lastTransitionTime":"2026-01-08T23:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.522070 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eacb79c0403805c83529ee4438eff4f4df5455d203f9c37d0f6911155ffffe0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkub
e-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.532259 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.543149 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.554077 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.565445 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.575778 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.585487 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.599898 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.603663 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.603704 4945 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.603717 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.603735 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.603747 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:56Z","lastTransitionTime":"2026-01-08T23:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.613220 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:56Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.708669 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.708722 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.708734 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.708752 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.708771 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:56Z","lastTransitionTime":"2026-01-08T23:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.811614 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.811708 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.811726 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.811748 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.811766 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:56Z","lastTransitionTime":"2026-01-08T23:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.913927 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.913984 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.914021 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.914046 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:56 crc kubenswrapper[4945]: I0108 23:15:56.914060 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:56Z","lastTransitionTime":"2026-01-08T23:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.016122 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.016349 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.016418 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.016485 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.016552 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:57Z","lastTransitionTime":"2026-01-08T23:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.118755 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.119315 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.119341 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.119365 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.119385 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:57Z","lastTransitionTime":"2026-01-08T23:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.188050 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" event={"ID":"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042","Type":"ContainerStarted","Data":"d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6"} Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.188949 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.189021 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.208410 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.223917 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.223928 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.224411 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.224433 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.224453 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.224468 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:57Z","lastTransitionTime":"2026-01-08T23:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.232987 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.259594 4945 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc49
46fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z"
Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.278353 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.297415 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z"
Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.315641 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z"
Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.326527 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.326585 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.326603 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.326711 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.326729 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:57Z","lastTransitionTime":"2026-01-08T23:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.347353 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eacb79c0403805c83529ee4438eff4f4df5455d203f9c37d0f6911155ffffe0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkub
e-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.366057 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.386372 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.413268 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.429101 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.429170 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.429180 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.429194 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.429205 4945 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:57Z","lastTransitionTime":"2026-01-08T23:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.430503 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.444513 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.457670 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.472750 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.490527 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.505872 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.518890 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.532060 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.532088 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.532097 4945 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.532111 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.532120 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:57Z","lastTransitionTime":"2026-01-08T23:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.546203 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/
lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.561983 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.575758 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.590098 4945 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.611806 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.626618 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.635418 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.635477 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.635507 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.635531 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.635549 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:57Z","lastTransitionTime":"2026-01-08T23:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.655916 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eacb79c0403805c83529ee4438eff4f4df5455d203f9c37d0f6911155ffffe0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.675555 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.692730 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.708489 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.721803 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.737723 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.738945 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.739052 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.739079 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.739114 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.739139 4945 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:57Z","lastTransitionTime":"2026-01-08T23:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.758486 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\
\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:57Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.842809 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.842893 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.842920 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.842950 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.843207 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:57Z","lastTransitionTime":"2026-01-08T23:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.945455 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.945503 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.945514 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.945530 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:57 crc kubenswrapper[4945]: I0108 23:15:57.945543 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:57Z","lastTransitionTime":"2026-01-08T23:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.000198 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.000235 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.000217 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 08 23:15:58 crc kubenswrapper[4945]: E0108 23:15:58.000389 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 08 23:15:58 crc kubenswrapper[4945]: E0108 23:15:58.000575 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 08 23:15:58 crc kubenswrapper[4945]: E0108 23:15:58.000743 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.047725 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.047762 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.047772 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.047784 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.047793 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:58Z","lastTransitionTime":"2026-01-08T23:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.149862 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.149905 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.149916 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.149933 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.149943 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:58Z","lastTransitionTime":"2026-01-08T23:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.252462 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.252507 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.252517 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.252535 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.252548 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:58Z","lastTransitionTime":"2026-01-08T23:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.354444 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.354487 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.354497 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.354517 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.354527 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:58Z","lastTransitionTime":"2026-01-08T23:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.440405 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.440430 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.440438 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.440452 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.440461 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:58Z","lastTransitionTime":"2026-01-08T23:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:15:58 crc kubenswrapper[4945]: E0108 23:15:58.454240 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:58Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.458222 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.458255 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.458267 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.458284 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.458295 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:58Z","lastTransitionTime":"2026-01-08T23:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:58 crc kubenswrapper[4945]: E0108 23:15:58.472144 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:58Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.475733 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.475842 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
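(Annotation, not part of the captured log.) Every status-update retry above fails for the same reason, visible in the error tail: the API server's call to the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743 fails TLS verification because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, long before the logged wall-clock time of 2026-01-08T23:15:58Z, so every node-status PATCH comes back as an Internal error. Below is a minimal, illustrative Go sketch of the kind of probe that confirms such an expiry; the address is taken from the log, the program itself is hypothetical and assumes it runs on the node.

// Illustrative only: probe a TLS endpoint and report the leaf certificate's
// validity window, mirroring the x509 expiry error in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Webhook address taken from the failing Post URL in the log.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspect the certificate without trusting the chain
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	leaf := certs[0]
	now := time.Now().UTC()
	fmt.Printf("subject=%q notBefore=%s notAfter=%s\n", leaf.Subject.String(), leaf.NotBefore.UTC(), leaf.NotAfter.UTC())
	if now.After(leaf.NotAfter) {
		// The log's case: current time 2026-01-08T23:15:58Z is after 2025-08-24T17:21:41Z.
		fmt.Println("certificate has expired")
	}
}

InsecureSkipVerify is deliberate here: the connection exists only to read the presented certificate, never to exchange data, so chain verification would defeat the purpose of the probe.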
event="NodeHasNoDiskPressure" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.475947 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.476040 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.476135 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:58Z","lastTransitionTime":"2026-01-08T23:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:58 crc kubenswrapper[4945]: E0108 23:15:58.488123 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:58Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.491102 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.491201 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
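(Annotation, not part of the captured log.) Independently of the webhook failure, the kubelet holds the node NotReady because the runtime network never comes up: every Ready condition above repeats "no CNI configuration file in /etc/kubernetes/cni/net.d/". Below is a hypothetical Go sketch of that kind of directory check, assuming the conventional CNI config extensions (.conf, .conflist, .json); it is illustrative, not kubelet code.

// Illustrative only: report whether any CNI network config exists, mirroring
// the "no CNI configuration file" condition in the log above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named by the kubelet above
	var configs []string
	// Conventional CNI config extensions; an empty result leaves NetworkReady=false.
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(confDir, pattern))
		configs = append(configs, matches...)
	}
	if len(configs) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		os.Exit(1)
	}
	fmt.Println("CNI configs found:", configs)
}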
event="NodeHasNoDiskPressure" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.491266 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.491325 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.491399 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:58Z","lastTransitionTime":"2026-01-08T23:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:58 crc kubenswrapper[4945]: E0108 23:15:58.501927 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:58Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.504860 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.504985 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.505069 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.505127 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.505195 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:58Z","lastTransitionTime":"2026-01-08T23:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:58 crc kubenswrapper[4945]: E0108 23:15:58.517566 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:58Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:58 crc kubenswrapper[4945]: E0108 23:15:58.517884 4945 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.520230 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.520276 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.520288 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.520302 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.520314 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:58Z","lastTransitionTime":"2026-01-08T23:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.622916 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.622950 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.622959 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.622973 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.622982 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:58Z","lastTransitionTime":"2026-01-08T23:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.725852 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.725909 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.725926 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.725949 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.725966 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:58Z","lastTransitionTime":"2026-01-08T23:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.828627 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.828687 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.828704 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.828726 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.828743 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:58Z","lastTransitionTime":"2026-01-08T23:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.931148 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.931316 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.931337 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.931359 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:15:58 crc kubenswrapper[4945]: I0108 23:15:58.931378 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:58Z","lastTransitionTime":"2026-01-08T23:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.034142 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.034190 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.034202 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.034219 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.034231 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:59Z","lastTransitionTime":"2026-01-08T23:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.136663 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.136717 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.136729 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.136747 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.136761 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:59Z","lastTransitionTime":"2026-01-08T23:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.195586 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovnkube-controller/0.log"
Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.198630 4945 generic.go:334] "Generic (PLEG): container finished" podID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerID="eacb79c0403805c83529ee4438eff4f4df5455d203f9c37d0f6911155ffffe0c" exitCode=1
Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.198694 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerDied","Data":"eacb79c0403805c83529ee4438eff4f4df5455d203f9c37d0f6911155ffffe0c"}
Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.199841 4945 scope.go:117] "RemoveContainer" containerID="eacb79c0403805c83529ee4438eff4f4df5455d203f9c37d0f6911155ffffe0c"
Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.217802 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:59Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.239204 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:59Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.239486 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.239510 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.239521 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.239538 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.239547 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:59Z","lastTransitionTime":"2026-01-08T23:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.273813 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:59Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.296529 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:59Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.315160 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:59Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.340399 4945 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:59Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.341809 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.341842 4945 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.341856 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.341873 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.341886 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:59Z","lastTransitionTime":"2026-01-08T23:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.352904 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:59Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.373829 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eacb79c0403805c83529ee4438eff4f4df5455d2
03f9c37d0f6911155ffffe0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eacb79c0403805c83529ee4438eff4f4df5455d203f9c37d0f6911155ffffe0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:15:58.446533 6244 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0108 23:15:58.446548 6244 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0108 23:15:58.446559 6244 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0108 23:15:58.446565 6244 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0108 23:15:58.446584 6244 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0108 23:15:58.446580 6244 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0108 23:15:58.446600 6244 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0108 23:15:58.446607 6244 handler.go:208] Removed *v1.Node event handler 7\\\\nI0108 23:15:58.446620 6244 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0108 23:15:58.446630 6244 handler.go:208] Removed *v1.Node event handler 2\\\\nI0108 23:15:58.446640 6244 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0108 23:15:58.446647 6244 factory.go:656] Stopping watch factory\\\\nI0108 23:15:58.446650 6244 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0108 23:15:58.446659 6244 ovnkube.go:599] Stopped ovnkube\\\\nI0108 23:15:58.446685 6244 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0108 
23\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:59Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.385648 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:59Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.396729 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:59Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.410219 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:59Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.420169 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:59Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.430500 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:59Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.444297 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.444331 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.444344 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.444361 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.444374 4945 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:59Z","lastTransitionTime":"2026-01-08T23:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.448087 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\
\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:59Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.462851 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:15:59Z is after 2025-08-24T17:21:41Z" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.546367 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.546408 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.546421 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.546439 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.546451 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:59Z","lastTransitionTime":"2026-01-08T23:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.648778 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.648811 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.648820 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.648832 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.648841 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:59Z","lastTransitionTime":"2026-01-08T23:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.750984 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.751057 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.751070 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.751090 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.751103 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:59Z","lastTransitionTime":"2026-01-08T23:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.853847 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.853888 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.853899 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.853915 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.853926 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:59Z","lastTransitionTime":"2026-01-08T23:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.956497 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.956536 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.956548 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.956564 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.956574 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:15:59Z","lastTransitionTime":"2026-01-08T23:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.999349 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:15:59 crc kubenswrapper[4945]: E0108 23:15:59.999489 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.999644 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:15:59 crc kubenswrapper[4945]: E0108 23:15:59.999808 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:15:59 crc kubenswrapper[4945]: I0108 23:15:59.999883 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:00 crc kubenswrapper[4945]: E0108 23:16:00.000037 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.020790 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.040777 4945 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.059213 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.059257 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.059269 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.059288 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.059301 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:00Z","lastTransitionTime":"2026-01-08T23:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.061417 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.081113 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.095635 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.116843 4945 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.132896 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.155766 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eacb79c0403805c83529ee4438eff4f4df5455d2
03f9c37d0f6911155ffffe0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eacb79c0403805c83529ee4438eff4f4df5455d203f9c37d0f6911155ffffe0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:15:58.446533 6244 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0108 23:15:58.446548 6244 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0108 23:15:58.446559 6244 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0108 23:15:58.446565 6244 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0108 23:15:58.446584 6244 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0108 23:15:58.446580 6244 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0108 23:15:58.446600 6244 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0108 23:15:58.446607 6244 handler.go:208] Removed *v1.Node event handler 7\\\\nI0108 23:15:58.446620 6244 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0108 23:15:58.446630 6244 handler.go:208] Removed *v1.Node event handler 2\\\\nI0108 23:15:58.446640 6244 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0108 23:15:58.446647 6244 factory.go:656] Stopping watch factory\\\\nI0108 23:15:58.446650 6244 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0108 23:15:58.446659 6244 ovnkube.go:599] Stopped ovnkube\\\\nI0108 23:15:58.446685 6244 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0108 
23\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.161379 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.161532 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.161622 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.161710 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.161792 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:00Z","lastTransitionTime":"2026-01-08T23:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.171243 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.184440 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.198881 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.203374 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovnkube-controller/0.log" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.205977 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerStarted","Data":"94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c"} Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.206406 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.213329 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.223298 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.235927 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.246201 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.256870 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.263626 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.263659 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.263670 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.263686 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.263697 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:00Z","lastTransitionTime":"2026-01-08T23:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.269014 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.280569 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.290401 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.299193 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.311990 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.324248 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.336855 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.356776 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43
d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.365730 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.365782 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.365795 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.365811 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.365823 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:00Z","lastTransitionTime":"2026-01-08T23:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.372382 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.385017 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.398770 4945 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.411675 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.423349 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.440472 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cc63abfffdb370406c1e0cc30a3b8cc30920fc
2af7db4c1f4069611c453e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eacb79c0403805c83529ee4438eff4f4df5455d203f9c37d0f6911155ffffe0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:15:58.446533 6244 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0108 23:15:58.446548 6244 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0108 23:15:58.446559 6244 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0108 23:15:58.446565 6244 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0108 23:15:58.446584 6244 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0108 23:15:58.446580 6244 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0108 23:15:58.446600 6244 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0108 23:15:58.446607 6244 handler.go:208] Removed *v1.Node event handler 7\\\\nI0108 23:15:58.446620 6244 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0108 23:15:58.446630 6244 handler.go:208] Removed *v1.Node event handler 2\\\\nI0108 23:15:58.446640 6244 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0108 23:15:58.446647 6244 factory.go:656] Stopping watch factory\\\\nI0108 23:15:58.446650 6244 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0108 23:15:58.446659 6244 ovnkube.go:599] Stopped ovnkube\\\\nI0108 23:15:58.446685 6244 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0108 
23\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:00Z is after 2025-08-24T17:21:41Z"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.468380 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.468421 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.468435 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.468457 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.468470 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:00Z","lastTransitionTime":"2026-01-08T23:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
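The NodeNotReady records above are kubelet's runtime network readiness gate: the node stays NotReady until the container runtime reports NetworkReady, which in turn requires a usable CNI configuration file on disk. In this log the ovnkube-controller container is crash-looping, so nothing appears to ever write that file. Below is a minimal Go sketch of the same directory check, assuming the path from the log message; it is an illustrative diagnostic, not kubelet's actual code.

    // cnicheck is a hypothetical diagnostic sketch (not kubelet's real
    // implementation): it reports whether a CNI configuration directory
    // contains any config file, the condition the NetworkReady gate above
    // is waiting on. The directory path is taken from the log message.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // from the log; adjust as needed
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Printf("cannot read %s: %v\n", confDir, err)
            os.Exit(1)
        }
        var found []string
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // extensions libcni scans for
                found = append(found, e.Name())
            }
        }
        if len(found) == 0 {
            fmt.Println("no CNI configuration file found: node would stay NotReady")
            return
        }
        fmt.Println("CNI config present:", found)
    }

Run on the node, a check like this distinguishes "CNI was never installed" from "CNI config was removed mid-flight"; here it appears to be the former, since ovnkube-controller exits before its config is written.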
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.571145 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.571213 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.571237 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.571268 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.571292 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:00Z","lastTransitionTime":"2026-01-08T23:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.674282 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.674338 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.674349 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.674367 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.674380 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:00Z","lastTransitionTime":"2026-01-08T23:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.776965 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.777032 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.777045 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.777064 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.777076 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:00Z","lastTransitionTime":"2026-01-08T23:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
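Each of the "Failed to update status for pod" payloads in this section is a strategic merge patch computed by kubelet's status manager (the status_manager.go:875 call site) as the diff between its cached pod status and the new one; the $setElementOrder/conditions directive seen in those payloads carries the intended ordering of the merged conditions list. The sketch below shows how such a patch can be produced with the apimachinery helpers; the pod status data is synthetic, and exactly which directives appear in the output depends on which fields differ.

    // patchdemo: a sketch of producing a pod-status strategic merge patch
    // like the ones logged above. Synthetic data; not kubelet's own code.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/strategicpatch"
    )

    // podJSON marshals a pod whose Ready condition we vary between calls.
    func podJSON(ready corev1.ConditionStatus, t metav1.Time) []byte {
        p := corev1.Pod{
            Status: corev1.PodStatus{
                Conditions: []corev1.PodCondition{
                    {Type: corev1.PodReady, Status: ready, LastTransitionTime: t},
                    {Type: corev1.PodScheduled, Status: corev1.ConditionTrue},
                },
            },
        }
        b, _ := json.Marshal(p)
        return b
    }

    func main() {
        oldPod := podJSON(corev1.ConditionFalse, metav1.Time{})
        newPod := podJSON(corev1.ConditionTrue, metav1.Now())

        // Diff old vs. new using the Pod struct's patch metadata; the
        // result is the kind of patch document that then gets sent to the
        // API server (and, in this log, rejected by the expired
        // admission-webhook certificate before it ever lands).
        patch, err := strategicpatch.CreateTwoWayMergePatch(oldPod, newPod, corev1.Pod{})
        if err != nil {
            panic(err)
        }
        fmt.Println(string(patch))
    }

Because conditions merge on their "type" key, only the changed Ready entry appears in the patch body, which is why the logged payloads list every condition type under $setElementOrder but repeat full content only for the entries that changed.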
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.879497 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.879541 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.879555 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.879575 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.879590 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:00Z","lastTransitionTime":"2026-01-08T23:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.982039 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.982075 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.982085 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.982101 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:00 crc kubenswrapper[4945]: I0108 23:16:00.982113 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:00Z","lastTransitionTime":"2026-01-08T23:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.084342 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.084390 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.084409 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.084430 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.084446 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:01Z","lastTransitionTime":"2026-01-08T23:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
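Every status patch in this window is then rejected for the same underlying reason: the pod.network-node-identity.openshift.io admission webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, while the node clock reads 2026-01-08. A small diagnostic sketch follows, assumed to run on the node itself; it dials with verification disabled so the expired certificate can still be inspected, then applies the same validity-window test the verifier's "expired or is not yet valid" message describes.

    // certcheck: a hypothetical diagnostic for the recurring x509 failure
    // above. It fetches whatever certificate a TLS endpoint presents and
    // compares its validity window against the local clock.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        addr := "127.0.0.1:9743" // webhook endpoint from the log
        conn, err := tls.Dial("tcp", addr, &tls.Config{
            InsecureSkipVerify: true, // we want the cert even though it is expired
        })
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        state := conn.ConnectionState()
        if len(state.PeerCertificates) == 0 {
            fmt.Println("server presented no certificate")
            return
        }
        cert := state.PeerCertificates[0]
        now := time.Now()
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
        fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
        switch {
        case now.Before(cert.NotBefore):
            fmt.Println("certificate is not yet valid")
        case now.After(cert.NotAfter):
            fmt.Println("certificate has expired") // the case seen in this log
        default:
            fmt.Println("certificate is within its validity window")
        }
    }

The switch mirrors the two halves of the verifier's error string; against this endpoint the NotAfter branch would fire, matching "current time 2026-01-08T23:16:01Z is after 2025-08-24T17:21:41Z".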
Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.186705 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.186747 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.186760 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.186778 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.186791 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:01Z","lastTransitionTime":"2026-01-08T23:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.210699 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovnkube-controller/1.log"
Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.211779 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovnkube-controller/0.log"
Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.217522 4945 generic.go:334] "Generic (PLEG): container finished" podID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerID="94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c" exitCode=1
Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.217585 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerDied","Data":"94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c"}
Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.217660 4945 scope.go:117] "RemoveContainer" containerID="eacb79c0403805c83529ee4438eff4f4df5455d203f9c37d0f6911155ffffe0c"
Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.219448 4945 scope.go:117] "RemoveContainer" containerID="94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c"
Jan 08 23:16:01 crc kubenswrapper[4945]: E0108 23:16:01.219776 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f"
Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.237095 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:01Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.250989 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:01Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.268251 4945 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:01Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.286571 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43
d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:01Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.289164 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.289201 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.289212 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.289230 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.289242 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:01Z","lastTransitionTime":"2026-01-08T23:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.304783 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eacb79c0403805c83529ee4438eff4f4df5455d203f9c37d0f6911155ffffe0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:15:58Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:15:58.446533 6244 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0108 23:15:58.446548 6244 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0108 23:15:58.446559 6244 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0108 23:15:58.446565 6244 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0108 23:15:58.446584 6244 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0108 23:15:58.446580 6244 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0108 23:15:58.446600 6244 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0108 23:15:58.446607 6244 handler.go:208] Removed *v1.Node event handler 7\\\\nI0108 23:15:58.446620 6244 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0108 23:15:58.446630 6244 handler.go:208] Removed *v1.Node event handler 2\\\\nI0108 23:15:58.446640 6244 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0108 23:15:58.446647 6244 factory.go:656] Stopping watch factory\\\\nI0108 23:15:58.446650 6244 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0108 23:15:58.446659 6244 ovnkube.go:599] Stopped ovnkube\\\\nI0108 23:15:58.446685 6244 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0108 
23\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:00Z\\\",\\\"message\\\":\\\"ion (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0108 23:16:00.078935 6376 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.078987 6376 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.079690 6376 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079806 6376 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079950 6376 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.080238 6376 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.081011 6376 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0108 23:16:00.081045 6376 factory.go:656] Stopping watch factory\\\\nI0108 23:16:00.081062 6376 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:01Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.316337 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:01Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.333712 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:01Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.348089 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:01Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.358984 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:01Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.368891 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:01Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.382821 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:01Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.391940 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.392016 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.392032 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.392052 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.392068 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:01Z","lastTransitionTime":"2026-01-08T23:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.397036 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:01Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.410889 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:01Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.424080 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
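Every one of these status patches fails for the same reason: the node-identity webhook at https://127.0.0.1:9743/pod serves a certificate whose NotAfter (2025-08-24T17:21:41Z) is months behind the node clock (2026-01-08T23:16:01Z), so the kubelet's TLS verification rejects the connection before any patch is sent. A minimal Go sketch of the same check, assuming the webhook endpoint is reachable from wherever this runs:

// Fetch the serving certificate from the webhook endpoint seen in the log
// and report its validity window: the same NotAfter comparison that
// produces the x509 "certificate has expired" error above.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip chain verification on purpose: we want to inspect the
		// expired certificate, not reject it the way the kubelet did.
		InsecureSkipVerify: true,
	})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.Format(time.RFC3339))
	if now.After(leaf.NotAfter) {
		// Matches the log: "current time ... is after 2025-08-24T17:21:41Z".
		fmt.Println("certificate has expired")
	}
}

On this node the sketch would print the 2025-08-24 NotAfter and flag the certificate as expired; the retries visible throughout this window presumably cannot succeed until the webhook-cert secret mounted by network-node-identity-vrzqb is rotated.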
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:01Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.437715 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:01Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.495173 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.495208 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.495218 4945 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.495233 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.495246 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:01Z","lastTransitionTime":"2026-01-08T23:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.597539 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.597588 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.597599 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.597616 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.597628 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:01Z","lastTransitionTime":"2026-01-08T23:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.700185 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.700242 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.700256 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.700274 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.700287 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:01Z","lastTransitionTime":"2026-01-08T23:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.802594 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.802647 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.802661 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.802680 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.802692 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:01Z","lastTransitionTime":"2026-01-08T23:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.906882 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.906942 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.906960 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.906985 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.907029 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:01Z","lastTransitionTime":"2026-01-08T23:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:01 crc kubenswrapper[4945]: I0108 23:16:01.999582 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:01 crc kubenswrapper[4945]: E0108 23:16:01.999710 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.000132 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.000282 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:02 crc kubenswrapper[4945]: E0108 23:16:02.000434 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:02 crc kubenswrapper[4945]: E0108 23:16:02.000593 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.008643 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.008843 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.008953 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.009069 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.009170 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:02Z","lastTransitionTime":"2026-01-08T23:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.112014 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.112063 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.112079 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.112101 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.112117 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:02Z","lastTransitionTime":"2026-01-08T23:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
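The NodeNotReady heartbeats repeating through this window all carry a single cause: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no CNI configuration, and the sandbox-less pods above (networking-console-plugin, network-check-source, network-check-target) cannot start until one appears. A sketch of that readiness test follows; the suffix list is an assumption modeled on common CNI config loaders, not lifted from CRI-O:

// Sketch of the readiness condition behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/": the network plugin counts as ready once at
// least one CNI config file exists in the directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d" // path from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("read dir:", err)
		return
	}
	var configs []string
	for _, e := range entries {
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json": // assumed loader suffixes
			configs = append(configs, e.Name())
		}
	}
	if len(configs) == 0 {
		// The state the kubelet keeps reporting in the heartbeats above.
		fmt.Println("NetworkReady=false: no CNI configuration file found")
		return
	}
	fmt.Println("CNI configs:", configs)
}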
Has your network provider started?"} Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.214472 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.214515 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.214523 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.214536 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.214546 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:02Z","lastTransitionTime":"2026-01-08T23:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.222051 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovnkube-controller/1.log" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.226383 4945 scope.go:117] "RemoveContainer" containerID="94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c" Jan 08 23:16:02 crc kubenswrapper[4945]: E0108 23:16:02.226539 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.260849 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
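The ovnkube-controller record just above shows the other half of the recovery loop: the container crashed, so the kubelet refuses to restart it during a back-off window ("back-off 10s restarting failed container ... with CrashLoopBackOff"). The delay doubles after each subsequent crash up to a cap; the 10s initial period and 5m ceiling used below are the long-standing kubelet defaults, assumed here rather than read from this cluster's configuration:

// Sketch of the kubelet-style restart back-off: start at an initial
// period, double per restart, clamp at a maximum.
package main

import (
	"fmt"
	"time"
)

func restartDelay(restarts int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 0; i < restarts; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %s\n",
			r, restartDelay(r, 10*time.Second, 5*time.Minute))
	}
	// restart 0 -> 10s, 1 -> 20s, 2 -> 40s, ... capped at 5m0s
}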
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.294062 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.312110 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43
d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.313829 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct"] Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.314335 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.316146 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.316317 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.316819 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.316839 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.316852 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.316866 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.316876 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:02Z","lastTransitionTime":"2026-01-08T23:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.326239 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.337564 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
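The terminated kube-apiserver-check-endpoints container's final log, preserved in the lastState of the patch above, warns "Use of insecure cipher ... detected" for several CBC-mode and static-RSA suites before dying on "pods \"kube-apiserver-crc\" not found". Those warnings come from Go's crypto/tls, which flags the listed suite IDs as insecure. A sketch of a serving config that avoids them, illustrative rather than the operator's actual configuration:

// Pin modern AEAD cipher suites so crypto/tls has nothing to flag;
// the suites excluded here are exactly the ones warned about above.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	cfg := &tls.Config{
		MinVersion: tls.VersionTLS12,
		CipherSuites: []uint16{
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
			tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
		},
	}
	for _, id := range cfg.CipherSuites {
		fmt.Println(tls.CipherSuiteName(id))
	}
}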
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.350576 4945 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.361008 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.374344 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.391174 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cc63abfffdb370406c1e0cc30a3b8cc30920fc
2af7db4c1f4069611c453e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:00Z\\\",\\\"message\\\":\\\"ion (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0108 23:16:00.078935 6376 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.078987 6376 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.079690 6376 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079806 6376 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079950 6376 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.080238 6376 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.081011 6376 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0108 23:16:00.081045 6376 factory.go:656] Stopping watch factory\\\\nI0108 23:16:00.081062 6376 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.400754 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.410216 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.419357 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:02 
crc kubenswrapper[4945]: I0108 23:16:02.419427 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.419438 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.419495 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.419506 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:02Z","lastTransitionTime":"2026-01-08T23:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.420512 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.428539 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.438087 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.449432 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.460172 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.467027 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06296d36-6978-4968-b8fc-430bdd945d17-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8khct\" (UID: \"06296d36-6978-4968-b8fc-430bdd945d17\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.467080 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d78v\" (UniqueName: \"kubernetes.io/projected/06296d36-6978-4968-b8fc-430bdd945d17-kube-api-access-5d78v\") pod \"ovnkube-control-plane-749d76644c-8khct\" (UID: \"06296d36-6978-4968-b8fc-430bdd945d17\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.467099 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06296d36-6978-4968-b8fc-430bdd945d17-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8khct\" (UID: \"06296d36-6978-4968-b8fc-430bdd945d17\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.467118 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06296d36-6978-4968-b8fc-430bdd945d17-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8khct\" (UID: 
\"06296d36-6978-4968-b8fc-430bdd945d17\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.470558 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.487155 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:00Z\\\",\\\"message\\\":\\\"ion (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0108 23:16:00.078935 6376 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.078987 6376 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.079690 6376 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079806 6376 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079950 6376 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.080238 6376 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.081011 6376 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0108 23:16:00.081045 6376 factory.go:656] Stopping watch factory\\\\nI0108 23:16:00.081062 6376 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z"
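
Every "Failed to update status for pod" entry in this stretch fails identically: the kubelet's status PATCH is rejected because the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-08T23:16:02Z. A minimal Go sketch of the validity check the TLS handshake is performing; the address comes from the log, and skipping chain verification so the expired leaf can still be inspected is an illustrative choice, not cluster tooling:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Dial the webhook endpoint named in the log and fetch whatever
	// certificate it presents, without verifying the chain.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		// The handshake fails whenever now falls outside [NotBefore, NotAfter].
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%t\n",
			cert.Subject.CommonName,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}

Any leaf certificate whose NotAfter is behind the node clock reproduces the "x509: certificate has expired or is not yet valid" rejection seen on every patch in this capture.
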
Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.496725 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.505536 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.515529 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c"
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.521084 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.521115 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.521124 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.521136 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.521144 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:02Z","lastTransitionTime":"2026-01-08T23:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
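
The "Node became not ready" transition above follows directly from NetworkReady=false: the network plugin has not written any configuration into /etc/kubernetes/cni/net.d/, which is consistent with the crash-looping ovnkube-controller earlier in the log. A short Go sketch of roughly that readiness probe; the directory is taken from the kubelet message, while the conventional CNI file extensions checked here are an assumption:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the kubelet message above.
	confDir := "/etc/kubernetes/cni/net.d"
	// Conventional CNI config file types (assumed, not from the log).
	patterns := []string{"*.conf", "*.conflist", "*.json"}

	var found []string
	for _, pattern := range patterns {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err == nil {
			found = append(found, matches...)
		}
	}
	if len(found) == 0 {
		fmt.Printf("no CNI configuration file in %s: network plugin not ready\n", confDir)
		os.Exit(1)
	}
	fmt.Println("CNI config present:", found)
}

Until a config file appears in that directory, the runtime keeps reporting NetworkPluginNotReady and the node Ready condition stays False.
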
Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.524499 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.531842 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.542471 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.553039 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.562536 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.568026 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06296d36-6978-4968-b8fc-430bdd945d17-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8khct\" (UID: \"06296d36-6978-4968-b8fc-430bdd945d17\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 
23:16:02.568059 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06296d36-6978-4968-b8fc-430bdd945d17-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8khct\" (UID: \"06296d36-6978-4968-b8fc-430bdd945d17\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.568102 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06296d36-6978-4968-b8fc-430bdd945d17-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8khct\" (UID: \"06296d36-6978-4968-b8fc-430bdd945d17\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.568133 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d78v\" (UniqueName: \"kubernetes.io/projected/06296d36-6978-4968-b8fc-430bdd945d17-kube-api-access-5d78v\") pod \"ovnkube-control-plane-749d76644c-8khct\" (UID: \"06296d36-6978-4968-b8fc-430bdd945d17\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.568688 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06296d36-6978-4968-b8fc-430bdd945d17-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8khct\" (UID: \"06296d36-6978-4968-b8fc-430bdd945d17\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.568810 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06296d36-6978-4968-b8fc-430bdd945d17-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8khct\" (UID: \"06296d36-6978-4968-b8fc-430bdd945d17\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.574804 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06296d36-6978-4968-b8fc-430bdd945d17-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8khct\" (UID: \"06296d36-6978-4968-b8fc-430bdd945d17\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.580511 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43
d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.582858 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d78v\" (UniqueName: \"kubernetes.io/projected/06296d36-6978-4968-b8fc-430bdd945d17-kube-api-access-5d78v\") pod \"ovnkube-control-plane-749d76644c-8khct\" (UID: \"06296d36-6978-4968-b8fc-430bdd945d17\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.593426 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.603179 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.615452 4945 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.623238 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.623404 4945 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.623493 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.623581 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.623666 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:02Z","lastTransitionTime":"2026-01-08T23:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.625407 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.629147 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:02Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:02 crc kubenswrapper[4945]: W0108 23:16:02.636218 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06296d36_6978_4968_b8fc_430bdd945d17.slice/crio-1d871e31afb0ad14f07f847a074c5e84d990a38928714d34c49979e76201392f WatchSource:0}: Error finding container 1d871e31afb0ad14f07f847a074c5e84d990a38928714d34c49979e76201392f: Status 404 returned error can't find the container with id 1d871e31afb0ad14f07f847a074c5e84d990a38928714d34c49979e76201392f Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.726266 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.726324 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.726339 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.726362 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.726377 
4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:02Z","lastTransitionTime":"2026-01-08T23:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.828702 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.828737 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.828746 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.828763 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.828774 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:02Z","lastTransitionTime":"2026-01-08T23:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.931205 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.931245 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.931253 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.931267 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:02 crc kubenswrapper[4945]: I0108 23:16:02.931276 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:02Z","lastTransitionTime":"2026-01-08T23:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.032872 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.032940 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.032963 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.033029 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.033049 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:03Z","lastTransitionTime":"2026-01-08T23:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.134970 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.135037 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.135047 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.135063 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.135073 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:03Z","lastTransitionTime":"2026-01-08T23:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.229644 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" event={"ID":"06296d36-6978-4968-b8fc-430bdd945d17","Type":"ContainerStarted","Data":"440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae"} Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.229983 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" event={"ID":"06296d36-6978-4968-b8fc-430bdd945d17","Type":"ContainerStarted","Data":"5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a"} Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.230011 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" event={"ID":"06296d36-6978-4968-b8fc-430bdd945d17","Type":"ContainerStarted","Data":"1d871e31afb0ad14f07f847a074c5e84d990a38928714d34c49979e76201392f"} Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.237829 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.237872 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.237884 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.237903 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.237916 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:03Z","lastTransitionTime":"2026-01-08T23:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.242536 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.257734 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.277183 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.291814 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 
23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.313472 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.326667 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.336367 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.339704 4945 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.339727 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.339735 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.339748 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.339757 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:03Z","lastTransitionTime":"2026-01-08T23:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.347692 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.359630 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.376054 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cc63abfffdb370406c1e0cc30a3b8cc30920fc
2af7db4c1f4069611c453e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:00Z\\\",\\\"message\\\":\\\"ion (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0108 23:16:00.078935 6376 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.078987 6376 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.079690 6376 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079806 6376 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079950 6376 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.080238 6376 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.081011 6376 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0108 23:16:00.081045 6376 factory.go:656] Stopping watch factory\\\\nI0108 23:16:00.081062 6376 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.387358 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.397769 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.407273 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.417496 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.426865 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.437924 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.442201 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.442237 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.442247 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.442261 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.442272 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:03Z","lastTransitionTime":"2026-01-08T23:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.543888 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.543924 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.543933 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.543946 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.543956 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:03Z","lastTransitionTime":"2026-01-08T23:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.646245 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.646317 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.646341 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.646372 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.646396 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:03Z","lastTransitionTime":"2026-01-08T23:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.749033 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.749074 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.749085 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.749102 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.749114 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:03Z","lastTransitionTime":"2026-01-08T23:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.751482 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-g8gcl"] Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.752221 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:03 crc kubenswrapper[4945]: E0108 23:16:03.752333 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.763539 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.776121 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.781047 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:03 crc kubenswrapper[4945]: E0108 
23:16:03.781207 4945 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 08 23:16:03 crc kubenswrapper[4945]: E0108 23:16:03.781293 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-08 23:16:19.781274483 +0000 UTC m=+50.092433429 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.788424 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.798119 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.810960 4945 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.821239 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 
23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.837289 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z"
Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.851558 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.851596 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.851607 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.851624 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.851637 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:03Z","lastTransitionTime":"2026-01-08T23:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.853974 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:00Z\\\",\\\"message\\\":\\\"ion (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0108 23:16:00.078935 6376 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.078987 6376 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.079690 6376 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079806 6376 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079950 6376 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.080238 6376 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.081011 6376 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0108 23:16:00.081045 6376 factory.go:656] Stopping watch factory\\\\nI0108 23:16:00.081062 6376 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.864580 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.874181 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.881623 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.881729 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.881757 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qq2h\" (UniqueName: \"kubernetes.io/projected/53cbedd0-f69d-4a28-9077-13fed644be95-kube-api-access-8qq2h\") pod \"network-metrics-daemon-g8gcl\" (UID: \"53cbedd0-f69d-4a28-9077-13fed644be95\") " pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.881786 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:03 crc kubenswrapper[4945]: E0108 23:16:03.881806 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:16:19.881785491 +0000 UTC m=+50.192944437 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.881848 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs\") pod \"network-metrics-daemon-g8gcl\" (UID: \"53cbedd0-f69d-4a28-9077-13fed644be95\") " pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:03 crc kubenswrapper[4945]: E0108 23:16:03.881891 4945 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 08 23:16:03 crc kubenswrapper[4945]: E0108 23:16:03.881904 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 08 23:16:03 crc kubenswrapper[4945]: E0108 23:16:03.881926 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 08 23:16:03 crc kubenswrapper[4945]: E0108 23:16:03.881940 4945 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:16:03 crc kubenswrapper[4945]: E0108 23:16:03.881948 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-08 23:16:19.881932105 +0000 UTC m=+50.193091151 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.881936 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:03 crc kubenswrapper[4945]: E0108 23:16:03.881976 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-08 23:16:19.881964585 +0000 UTC m=+50.193123641 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:16:03 crc kubenswrapper[4945]: E0108 23:16:03.882060 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 08 23:16:03 crc kubenswrapper[4945]: E0108 23:16:03.882075 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 08 23:16:03 crc kubenswrapper[4945]: E0108 23:16:03.882086 4945 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:16:03 crc kubenswrapper[4945]: E0108 23:16:03.882118 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-08 23:16:19.882110299 +0000 UTC m=+50.193269245 (durationBeforeRetry 16s). 
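
The "No retries permitted until ... (durationBeforeRetry 16s)" lines are the kubelet's per-volume-operation backoff: each consecutive failure roughly doubles the wait before the operation may run again, up to a cap, and attempts before the deadline are skipped outright. A sketch of that policy, assuming a doubling schedule seeded at the 500ms value that appears further down in this log (the exact constants are illustrative, not taken from kubelet source):

    package main

    import (
        "fmt"
        "time"
    )

    // backoff tracks per-operation retry state the way the
    // "No retries permitted until ..." entries imply: each failure
    // doubles the wait, capped, and earlier attempts are skipped.
    type backoff struct {
        initial, max, next time.Duration
        notBefore          time.Time
    }

    func (b *backoff) fail(now time.Time) {
        if b.next == 0 {
            b.next = b.initial
        } else {
            b.next *= 2
            if b.next > b.max {
                b.next = b.max
            }
        }
        b.notBefore = now.Add(b.next)
    }

    // allowed reports whether a retry may run yet.
    func (b *backoff) allowed(now time.Time) bool { return !now.Before(b.notBefore) }

    func main() {
        b := &backoff{initial: 500 * time.Millisecond, max: 2 * time.Minute}
        now := time.Now()
        for i := 0; i < 7; i++ {
            b.fail(now)
            fmt.Printf("retry %d deferred (durationBeforeRetry %v)\n", i+1, b.next)
        }
        fmt.Println("retry allowed immediately:", b.allowed(now)) // false until the deadline passes
    }

Run as-is this prints 500ms, 1s, 2s, 4s, 8s, 16s, 32s, consistent with the 500ms, 1s, and 16s delays visible in these entries.
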
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.884404 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
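
The repeated `object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered` errors are the kubelet failing closed while building projected kube-api-access volumes: the volume bundles the service-account token with the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, and after a restart those objects are not yet registered in the kubelet's per-pod object cache, so setup aborts and retries. A sketch of that fail-closed assembly, with a hypothetical cache type standing in for the kubelet's actual ConfigMap manager:

    package main

    import (
        "errors"
        "fmt"
    )

    // objectCache stands in for the kubelet's per-pod ConfigMap/Secret
    // cache: objects must be registered before a referencing volume builds.
    type objectCache struct{ objs map[string][]byte }

    func (c *objectCache) get(namespace, name string) ([]byte, error) {
        data, ok := c.objs[namespace+"/"+name]
        if !ok {
            return nil, fmt.Errorf("object %q/%q not registered", namespace, name)
        }
        return data, nil
    }

    // prepareKubeAPIAccess gathers the pieces a projected kube-api-access
    // volume needs; any miss aborts the whole mount, as in the log.
    func prepareKubeAPIAccess(c *objectCache, ns string) (map[string][]byte, error) {
        out := map[string][]byte{"namespace": []byte(ns)}
        var errs []error
        for _, cm := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
            data, err := c.get(ns, cm)
            if err != nil {
                errs = append(errs, err)
                continue
            }
            out["ca.crt:"+cm] = data
        }
        if len(errs) > 0 {
            return nil, errors.Join(errs...) // mirrors the bracketed error list in the log
        }
        return out, nil
    }

    func main() {
        c := &objectCache{objs: map[string][]byte{}}
        if _, err := prepareKubeAPIAccess(c, "openshift-network-diagnostics"); err != nil {
            fmt.Println("MountVolume.SetUp failed:", err)
        }
    }
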
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.892557 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.900789 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.914719 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.924494 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cbedd0-f69d-4a28-9077-13fed644be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g8gcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc 
kubenswrapper[4945]: I0108 23:16:03.936521 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.945312 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:03Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.953849 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.953897 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.953913 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.953935 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.953950 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:03Z","lastTransitionTime":"2026-01-08T23:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.983083 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs\") pod \"network-metrics-daemon-g8gcl\" (UID: \"53cbedd0-f69d-4a28-9077-13fed644be95\") " pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.983186 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qq2h\" (UniqueName: \"kubernetes.io/projected/53cbedd0-f69d-4a28-9077-13fed644be95-kube-api-access-8qq2h\") pod \"network-metrics-daemon-g8gcl\" (UID: \"53cbedd0-f69d-4a28-9077-13fed644be95\") " pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:03 crc kubenswrapper[4945]: E0108 23:16:03.983317 4945 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 08 23:16:03 crc kubenswrapper[4945]: E0108 23:16:03.983409 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs podName:53cbedd0-f69d-4a28-9077-13fed644be95 nodeName:}" failed. No retries permitted until 2026-01-08 23:16:04.483384236 +0000 UTC m=+34.794543202 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs") pod "network-metrics-daemon-g8gcl" (UID: "53cbedd0-f69d-4a28-9077-13fed644be95") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.999758 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.999797 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:03 crc kubenswrapper[4945]: I0108 23:16:03.999836 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:04 crc kubenswrapper[4945]: E0108 23:16:03.999880 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:04 crc kubenswrapper[4945]: E0108 23:16:03.999978 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:04 crc kubenswrapper[4945]: E0108 23:16:04.000097 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.004587 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qq2h\" (UniqueName: \"kubernetes.io/projected/53cbedd0-f69d-4a28-9077-13fed644be95-kube-api-access-8qq2h\") pod \"network-metrics-daemon-g8gcl\" (UID: \"53cbedd0-f69d-4a28-9077-13fed644be95\") " pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.056791 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.056821 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.056830 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.056843 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.056852 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:04Z","lastTransitionTime":"2026-01-08T23:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.159529 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.159612 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.159631 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.159658 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.159676 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:04Z","lastTransitionTime":"2026-01-08T23:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.263960 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.264556 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.264633 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.264742 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.264813 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:04Z","lastTransitionTime":"2026-01-08T23:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.366985 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.367057 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.367068 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.367083 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.367095 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:04Z","lastTransitionTime":"2026-01-08T23:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.469786 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.469838 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.469852 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.469870 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.469881 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:04Z","lastTransitionTime":"2026-01-08T23:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
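
The recurring KubeletNotReady condition is driven by a plain filesystem probe: the runtime's network layer looks for a CNI network config under /etc/kubernetes/cni/net.d/ and reports NetworkPluginNotReady until one exists (here, until the OVN-Kubernetes node components write their config). A sketch of that readiness check, assuming the .conf/.conflist/.json extensions that CNI config loaders conventionally accept:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // hasCNIConfig reports whether dir contains at least one CNI network
    // config, using conventional CNI config file extensions.
    func hasCNIConfig(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
        if err != nil || !ok {
            // This is the state behind "NetworkReady=false reason:NetworkPluginNotReady".
            fmt.Println("container runtime network not ready: no CNI configuration file; err =", err)
            return
        }
        fmt.Println("network plugin ready")
    }
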
Has your network provider started?"} Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.488650 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs\") pod \"network-metrics-daemon-g8gcl\" (UID: \"53cbedd0-f69d-4a28-9077-13fed644be95\") " pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:04 crc kubenswrapper[4945]: E0108 23:16:04.488852 4945 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 08 23:16:04 crc kubenswrapper[4945]: E0108 23:16:04.488909 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs podName:53cbedd0-f69d-4a28-9077-13fed644be95 nodeName:}" failed. No retries permitted until 2026-01-08 23:16:05.488891458 +0000 UTC m=+35.800050414 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs") pod "network-metrics-daemon-g8gcl" (UID: "53cbedd0-f69d-4a28-9077-13fed644be95") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.572773 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.572818 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.572827 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.572844 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.572854 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:04Z","lastTransitionTime":"2026-01-08T23:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.675622 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.675676 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.675689 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.675709 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.675724 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:04Z","lastTransitionTime":"2026-01-08T23:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.778096 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.778151 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.778166 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.778186 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.778202 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:04Z","lastTransitionTime":"2026-01-08T23:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.880822 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.880858 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.880871 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.880885 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.880896 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:04Z","lastTransitionTime":"2026-01-08T23:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
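
Each "Node became not ready" line is the status setter stamping a fresh Ready=False condition on every sync loop, roughly 100ms apart here. Under the usual Kubernetes condition semantics, lastHeartbeatTime advances on every sync while lastTransitionTime only moves when the status value actually flips; a self-contained sketch of that update rule, with a local struct standing in for the API type:

    package main

    import (
        "fmt"
        "time"
    )

    // nodeCondition is a local stand-in for the API type seen in the
    // condition={...} payloads above.
    type nodeCondition struct {
        Type, Status, Reason, Message         string
        LastHeartbeatTime, LastTransitionTime time.Time
    }

    // setReady applies the usual condition semantics: heartbeat on every
    // sync, transition time only when the status flips.
    func setReady(prev *nodeCondition, ready bool, reason, msg string, now time.Time) nodeCondition {
        status := "False"
        if ready {
            status = "True"
        }
        cond := nodeCondition{
            Type: "Ready", Status: status, Reason: reason, Message: msg,
            LastHeartbeatTime: now, LastTransitionTime: now,
        }
        if prev != nil && prev.Status == status {
            cond.LastTransitionTime = prev.LastTransitionTime // unchanged status keeps its transition time
        }
        return cond
    }

    func main() {
        now := time.Now()
        c := setReady(nil, false, "KubeletNotReady",
            "container runtime network not ready: NetworkReady=false", now)
        c2 := setReady(&c, false, "KubeletNotReady", c.Message, now.Add(100*time.Millisecond))
        fmt.Printf("heartbeat advanced: %v, transition unchanged: %v\n",
            c2.LastHeartbeatTime.After(c.LastHeartbeatTime),
            c2.LastTransitionTime.Equal(c.LastTransitionTime))
    }
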
Has your network provider started?"} Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.984091 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.984126 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.984136 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.984153 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.984164 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:04Z","lastTransitionTime":"2026-01-08T23:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:04 crc kubenswrapper[4945]: I0108 23:16:04.999299 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:04 crc kubenswrapper[4945]: E0108 23:16:04.999511 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.086080 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.086134 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.086148 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.086177 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.086208 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:05Z","lastTransitionTime":"2026-01-08T23:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.188242 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.188274 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.188284 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.188300 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.188312 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:05Z","lastTransitionTime":"2026-01-08T23:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.276115 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.290803 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.290839 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.290850 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.290866 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.290876 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:05Z","lastTransitionTime":"2026-01-08T23:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
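
The patch attempts themselves are plain HTTPS POSTs to the admission webhook at https://127.0.0.1:9743/pod?timeout=10s, where the ?timeout= query carries the webhook's configured timeout; the TLS failure happens during the handshake, before any request body is written. A minimal client-side sketch (illustrative only: a real apiserver webhook client pins the CA bundle from the webhook configuration object):

    package main

    import (
        "bytes"
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "net/http"
        "time"
    )

    // callWebhook POSTs a JSON payload the way the log's
    // Post "https://127.0.0.1:9743/pod?timeout=10s" indicates; an expired
    // serving certificate fails here, during the TLS handshake.
    func callWebhook(url string, caPEM, body []byte, timeout time.Duration) error {
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(caPEM) {
            return fmt.Errorf("no CA certs parsed") // fail closed without a trust anchor
        }
        client := &http.Client{
            Timeout: timeout,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{RootCAs: pool},
            },
        }
        resp, err := client.Post(url, "application/json", bytes.NewReader(body))
        if err != nil {
            return fmt.Errorf("failed to call webhook: %w", err) // wraps the x509 validity error
        }
        defer resp.Body.Close()
        return nil
    }

    func main() {
        // Empty CA bundle here, so this demonstrates the fail-closed path.
        err := callWebhook("https://127.0.0.1:9743/pod?timeout=10s", nil, []byte(`{}`), 10*time.Second)
        fmt.Println(err)
    }

With an expired serving certificate the handshake error surfaces from client.Post and gets wrapped with the "failed to call webhook: Post ..." prefix these entries show.
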
Has your network provider started?"} Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.296500 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:05Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.310772 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:05Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.328752 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43
d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:05Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.341248 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c
037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 
23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:05Z is after 
2025-08-24T17:21:41Z" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.351364 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:05Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.372825 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:05Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:05 crc 
kubenswrapper[4945]: I0108 23:16:05.383964 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:05Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.393739 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.393785 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.393819 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.393836 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.393848 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:05Z","lastTransitionTime":"2026-01-08T23:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.396364 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:05Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.407819 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:05Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.424975 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cc63abfffdb370406c1e0cc30a3b8cc30920fc
2af7db4c1f4069611c453e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:00Z\\\",\\\"message\\\":\\\"ion (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0108 23:16:00.078935 6376 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.078987 6376 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.079690 6376 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079806 6376 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079950 6376 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.080238 6376 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.081011 6376 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0108 23:16:00.081045 6376 factory.go:656] Stopping watch factory\\\\nI0108 23:16:00.081062 6376 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:05Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.436300 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:05Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.445961 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:05Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.456011 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:05Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.464489 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:05Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.472581 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:05Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.483900 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:05Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.493516 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cbedd0-f69d-4a28-9077-13fed644be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g8gcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:05Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:05 crc 
kubenswrapper[4945]: I0108 23:16:05.496677 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.496700 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.496709 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.496722 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.496731 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:05Z","lastTransitionTime":"2026-01-08T23:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.498293 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs\") pod \"network-metrics-daemon-g8gcl\" (UID: \"53cbedd0-f69d-4a28-9077-13fed644be95\") " pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:05 crc kubenswrapper[4945]: E0108 23:16:05.498394 4945 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 08 23:16:05 crc kubenswrapper[4945]: E0108 23:16:05.498435 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs podName:53cbedd0-f69d-4a28-9077-13fed644be95 nodeName:}" failed. No retries permitted until 2026-01-08 23:16:07.498423049 +0000 UTC m=+37.809581995 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs") pod "network-metrics-daemon-g8gcl" (UID: "53cbedd0-f69d-4a28-9077-13fed644be95") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.599149 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.599214 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.599232 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.599256 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.599273 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:05Z","lastTransitionTime":"2026-01-08T23:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.700956 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.701052 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.701070 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.701092 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.701109 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:05Z","lastTransitionTime":"2026-01-08T23:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
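
Every status_manager failure above shares one root cause: the network-node-identity webhook at 127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is months behind the node clock (2026-01-08T23:16:05Z), so Go's TLS verification rejects the connection before any status patch is sent. The following is a minimal, self-contained sketch of that comparison using only the standard library; the self-signed setup and subject name are illustrative, not taken from the cluster, and this is not OpenShift code.

```go
// Minimal sketch, not kubelet or OpenShift code: reproduce the x509 validity
// failure seen in every webhook call above, using only the Go standard library.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Self-signed throwaway certificate with the NotAfter reported in the log.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "network-node-identity.local"}, // illustrative name
		NotBefore:    time.Date(2024, 8, 24, 17, 21, 41, 0, time.UTC),
		NotAfter:     time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC),
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}

	// The node clock at the time of the failing patches.
	now := time.Date(2026, 1, 8, 23, 16, 5, 0, time.UTC)

	// This is the window check TLS verification performs before kubelet's
	// POST to https://127.0.0.1:9743/pod can even be attempted.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
}
```

Until that certificate is rotated (or the node clock corrected), every webhook-gated status patch will keep failing with the same x509 error, which is why the identical message recurs for each pod below.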
Has your network provider started?"} Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.803408 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.803470 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.803489 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.803514 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.803534 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:05Z","lastTransitionTime":"2026-01-08T23:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.907524 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.907567 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.907579 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.907595 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:05 crc kubenswrapper[4945]: I0108 23:16:05.907609 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:05Z","lastTransitionTime":"2026-01-08T23:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.000323 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.000411 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.000556 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:06 crc kubenswrapper[4945]: E0108 23:16:06.000549 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:06 crc kubenswrapper[4945]: E0108 23:16:06.000671 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:06 crc kubenswrapper[4945]: E0108 23:16:06.000761 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.010187 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.010242 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.010258 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.010275 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.010289 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:06Z","lastTransitionTime":"2026-01-08T23:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.112087 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.112131 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.112145 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.112167 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.112181 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:06Z","lastTransitionTime":"2026-01-08T23:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.215297 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.215361 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.215380 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.215407 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.215425 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:06Z","lastTransitionTime":"2026-01-08T23:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.318863 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.318939 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.318964 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.319031 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.319058 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:06Z","lastTransitionTime":"2026-01-08T23:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.421657 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.421703 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.421714 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.421732 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.421750 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:06Z","lastTransitionTime":"2026-01-08T23:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.524814 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.524859 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.524871 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.524895 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.524907 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:06Z","lastTransitionTime":"2026-01-08T23:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.627469 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.627523 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.627533 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.627549 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.627561 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:06Z","lastTransitionTime":"2026-01-08T23:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.729408 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.729440 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.729451 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.729467 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.729479 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:06Z","lastTransitionTime":"2026-01-08T23:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.833171 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.833266 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.833284 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.833340 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.833359 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:06Z","lastTransitionTime":"2026-01-08T23:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.936119 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.936170 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.936187 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.936209 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.936227 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:06Z","lastTransitionTime":"2026-01-08T23:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:06 crc kubenswrapper[4945]: I0108 23:16:06.999649 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:07 crc kubenswrapper[4945]: E0108 23:16:07.000026 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.038515 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.038571 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.038582 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.038603 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.038616 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:07Z","lastTransitionTime":"2026-01-08T23:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.142325 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.142381 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.142395 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.142409 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.142418 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:07Z","lastTransitionTime":"2026-01-08T23:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.244062 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.244111 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.244125 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.244142 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.244157 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:07Z","lastTransitionTime":"2026-01-08T23:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.347024 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.347089 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.347108 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.347133 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.347151 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:07Z","lastTransitionTime":"2026-01-08T23:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.450078 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.450116 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.450125 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.450139 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.450148 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:07Z","lastTransitionTime":"2026-01-08T23:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.572056 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs\") pod \"network-metrics-daemon-g8gcl\" (UID: \"53cbedd0-f69d-4a28-9077-13fed644be95\") " pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:07 crc kubenswrapper[4945]: E0108 23:16:07.572321 4945 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 08 23:16:07 crc kubenswrapper[4945]: E0108 23:16:07.572421 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs podName:53cbedd0-f69d-4a28-9077-13fed644be95 nodeName:}" failed. No retries permitted until 2026-01-08 23:16:11.572394606 +0000 UTC m=+41.883553612 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs") pod "network-metrics-daemon-g8gcl" (UID: "53cbedd0-f69d-4a28-9077-13fed644be95") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.573570 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.573601 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.573614 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.573630 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.573641 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:07Z","lastTransitionTime":"2026-01-08T23:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
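
The metrics-certs mount has now failed twice, and the delay doubles each time: durationBeforeRetry 2s scheduled at 23:16:05, then 4s at 23:16:07, both because the object "openshift-multus"/"metrics-daemon-secret" is not yet registered. That is ordinary exponential backoff; the toy schedule below matches the initial delay and doubling visible in the log, but the cap and loop structure are assumptions, not kubelet's actual implementation.

```go
// Toy backoff schedule only: the 2s start and factor of 2 match the log
// entries above; the cap is an assumed value, not taken from this log.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 2 * time.Second
	maxDelay := 2*time.Minute + 2*time.Second // assumed ceiling on retry delay

	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

The 2s and 4s entries above are attempts 1 and 2 of exactly this kind of schedule; the mount keeps being rescheduled until the secret object is registered with the kubelet.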
Has your network provider started?"} Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.676389 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.676434 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.676444 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.676457 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.676467 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:07Z","lastTransitionTime":"2026-01-08T23:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.779367 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.779415 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.779433 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.779468 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.779481 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:07Z","lastTransitionTime":"2026-01-08T23:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.883175 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.883216 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.883224 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.883237 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.883245 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:07Z","lastTransitionTime":"2026-01-08T23:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.986961 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.987053 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.987072 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.987104 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:07 crc kubenswrapper[4945]: I0108 23:16:07.987123 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:07Z","lastTransitionTime":"2026-01-08T23:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.000410 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.000479 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.000531 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:08 crc kubenswrapper[4945]: E0108 23:16:08.000605 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:08 crc kubenswrapper[4945]: E0108 23:16:08.000748 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:08 crc kubenswrapper[4945]: E0108 23:16:08.000903 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.090884 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.090949 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.090973 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.091051 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.091081 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:08Z","lastTransitionTime":"2026-01-08T23:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.194502 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.194558 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.194583 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.194616 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.194640 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:08Z","lastTransitionTime":"2026-01-08T23:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.297903 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.297977 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.298056 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.298088 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.298112 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:08Z","lastTransitionTime":"2026-01-08T23:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.400857 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.400916 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.400934 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.400959 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.400977 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:08Z","lastTransitionTime":"2026-01-08T23:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.503924 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.503964 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.503973 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.503989 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.504010 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:08Z","lastTransitionTime":"2026-01-08T23:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.599905 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.599969 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.599985 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.600039 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.600057 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:08Z","lastTransitionTime":"2026-01-08T23:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:08 crc kubenswrapper[4945]: E0108 23:16:08.621034 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:08Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.626378 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.626428 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.626445 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.626471 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.626488 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:08Z","lastTransitionTime":"2026-01-08T23:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:08 crc kubenswrapper[4945]: E0108 23:16:08.644885 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:08Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.649017 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.649053 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
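Every status patch in this stretch fails at the same point: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is more than four months behind the node clock (2026-01-08), so TLS verification rejects the connection before the patch is ever evaluated, and kubelet retries the patch several times in quick succession (23:16:08.621, .644, .668, .695). The Go sketch below reproduces the x509 validity-window rule that yields this exact error text; it illustrates the check crypto/x509 performs, with checkValidity a hypothetical stand-in rather than the verifier itself:

    package main

    import (
        "fmt"
        "time"
    )

    // checkValidity mirrors the x509 validity-window rule: a certificate
    // is rejected when the current time falls outside [NotBefore, NotAfter].
    func checkValidity(now, notBefore, notAfter time.Time) error {
        if now.Before(notBefore) || now.After(notAfter) {
            return fmt.Errorf("x509: certificate has expired or is not yet valid: current time %s is after %s",
                now.Format(time.RFC3339), notAfter.Format(time.RFC3339))
        }
        return nil
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z") // NotAfter from the log
        now, _ := time.Parse(time.RFC3339, "2026-01-08T23:16:08Z")      // node clock at failure time
        if err := checkValidity(now, time.Time{}, notAfter); err != nil {
            fmt.Println(err) // matches the webhook failure in the surrounding entries
        }
    }

Until that certificate is renewed (or the clock disagreement is resolved), each retry fails identically, which is exactly the pattern in the surrounding entries.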
event="NodeHasNoDiskPressure" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.649064 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.649081 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.649093 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:08Z","lastTransitionTime":"2026-01-08T23:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:08 crc kubenswrapper[4945]: E0108 23:16:08.668625 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:08Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.673896 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.673956 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.673977 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.674028 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.674046 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:08Z","lastTransitionTime":"2026-01-08T23:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:08 crc kubenswrapper[4945]: E0108 23:16:08.695803 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:08Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.700629 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.700695 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.700717 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.700744 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.700763 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:08Z","lastTransitionTime":"2026-01-08T23:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:08 crc kubenswrapper[4945]: E0108 23:16:08.722204 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:08Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:08 crc kubenswrapper[4945]: E0108 23:16:08.722564 4945 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.724869 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.724934 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.724952 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.724979 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.725024 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:08Z","lastTransitionTime":"2026-01-08T23:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.828053 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.828131 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.828154 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.828189 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.828213 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:08Z","lastTransitionTime":"2026-01-08T23:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.931844 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.931915 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.931933 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.931959 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:08 crc kubenswrapper[4945]: I0108 23:16:08.931984 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:08Z","lastTransitionTime":"2026-01-08T23:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.000419 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:09 crc kubenswrapper[4945]: E0108 23:16:09.000616 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.035264 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.035303 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.035316 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.035331 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.035342 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:09Z","lastTransitionTime":"2026-01-08T23:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.137214 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.137247 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.137256 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.137268 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.137276 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:09Z","lastTransitionTime":"2026-01-08T23:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.240243 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.240299 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.240309 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.240322 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.240333 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:09Z","lastTransitionTime":"2026-01-08T23:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.343702 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.343765 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.343782 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.343805 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.343822 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:09Z","lastTransitionTime":"2026-01-08T23:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.446414 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.446515 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.446549 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.446581 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.446602 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:09Z","lastTransitionTime":"2026-01-08T23:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.548948 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.549072 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.549100 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.549132 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.549161 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:09Z","lastTransitionTime":"2026-01-08T23:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.651894 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.651952 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.651972 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.652028 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.652051 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:09Z","lastTransitionTime":"2026-01-08T23:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.755375 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.755414 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.755425 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.755439 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.755449 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:09Z","lastTransitionTime":"2026-01-08T23:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.858208 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.858245 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.858254 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.858284 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.858294 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:09Z","lastTransitionTime":"2026-01-08T23:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.961194 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.961230 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.961240 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.961252 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:09 crc kubenswrapper[4945]: I0108 23:16:09.961260 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:09Z","lastTransitionTime":"2026-01-08T23:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:09.999918 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.000056 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:09.999948 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:10 crc kubenswrapper[4945]: E0108 23:16:10.000184 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:10 crc kubenswrapper[4945]: E0108 23:16:10.000330 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:10 crc kubenswrapper[4945]: E0108 23:16:10.000400 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.015477 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:10Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.028983 4945 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:10Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.048211 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43
d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:10Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.063465 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c
037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 
23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:10Z is after 
2025-08-24T17:21:41Z" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.063803 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.063828 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.063841 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.063855 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.063864 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:10Z","lastTransitionTime":"2026-01-08T23:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.082068 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e
95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:10Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.104161 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\"
:\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"
cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:10Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.119575 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:10Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.135192 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:10Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.149501 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:10Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.166090 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.166127 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.166139 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.166153 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.166165 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:10Z","lastTransitionTime":"2026-01-08T23:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.173333 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:00Z\\\",\\\"message\\\":\\\"ion (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0108 23:16:00.078935 6376 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.078987 6376 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.079690 6376 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079806 6376 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079950 6376 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.080238 6376 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.081011 6376 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0108 23:16:00.081045 6376 factory.go:656] Stopping watch factory\\\\nI0108 23:16:00.081062 6376 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:10Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.184976 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:10Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.201043 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:10Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.240418 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:10Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.266305 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:10Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.268107 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.268133 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.268142 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.268177 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.268188 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:10Z","lastTransitionTime":"2026-01-08T23:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.277639 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:10Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.290987 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:10Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.302100 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cbedd0-f69d-4a28-9077-13fed644be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g8gcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:10Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:10 crc 
kubenswrapper[4945]: I0108 23:16:10.371066 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.371144 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.371159 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.371184 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.371198 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:10Z","lastTransitionTime":"2026-01-08T23:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.473905 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.473942 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.473951 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.473963 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.473976 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:10Z","lastTransitionTime":"2026-01-08T23:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.575898 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.575939 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.575950 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.575966 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.575977 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:10Z","lastTransitionTime":"2026-01-08T23:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.677758 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.677791 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.677800 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.677812 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.677821 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:10Z","lastTransitionTime":"2026-01-08T23:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.780295 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.780346 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.780358 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.780376 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.780389 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:10Z","lastTransitionTime":"2026-01-08T23:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.883639 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.883676 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.883685 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.883699 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.883713 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:10Z","lastTransitionTime":"2026-01-08T23:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.985749 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.985782 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.985791 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.985805 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:10 crc kubenswrapper[4945]: I0108 23:16:10.985825 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:10Z","lastTransitionTime":"2026-01-08T23:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.000119 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl"
Jan 08 23:16:11 crc kubenswrapper[4945]: E0108 23:16:11.000501 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.088131 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.088169 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.088181 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.088195 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.088205 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:11Z","lastTransitionTime":"2026-01-08T23:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.190139 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.190184 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.190195 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.190207 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.190217 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:11Z","lastTransitionTime":"2026-01-08T23:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.292035 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.292077 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.292090 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.292103 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.292114 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:11Z","lastTransitionTime":"2026-01-08T23:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.394337 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.394377 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.394387 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.394400 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.394415 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:11Z","lastTransitionTime":"2026-01-08T23:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.497084 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.497131 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.497147 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.497181 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.497206 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:11Z","lastTransitionTime":"2026-01-08T23:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.599939 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.600052 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.600083 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.600112 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.600134 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:11Z","lastTransitionTime":"2026-01-08T23:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.614943 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs\") pod \"network-metrics-daemon-g8gcl\" (UID: \"53cbedd0-f69d-4a28-9077-13fed644be95\") " pod="openshift-multus/network-metrics-daemon-g8gcl"
Jan 08 23:16:11 crc kubenswrapper[4945]: E0108 23:16:11.615179 4945 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 08 23:16:11 crc kubenswrapper[4945]: E0108 23:16:11.615299 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs podName:53cbedd0-f69d-4a28-9077-13fed644be95 nodeName:}" failed. No retries permitted until 2026-01-08 23:16:19.61526399 +0000 UTC m=+49.926422976 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs") pod "network-metrics-daemon-g8gcl" (UID: "53cbedd0-f69d-4a28-9077-13fed644be95") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.702836 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.702889 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.702907 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.702926 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.702941 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:11Z","lastTransitionTime":"2026-01-08T23:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.805655 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.805731 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.805751 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.805774 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.805795 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:11Z","lastTransitionTime":"2026-01-08T23:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.908253 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.908314 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.908330 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.908355 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:11 crc kubenswrapper[4945]: I0108 23:16:11.908372 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:11Z","lastTransitionTime":"2026-01-08T23:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.000364 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 08 23:16:12 crc kubenswrapper[4945]: E0108 23:16:12.000546 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.000615 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 08 23:16:12 crc kubenswrapper[4945]: E0108 23:16:12.000839 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.000853 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 08 23:16:12 crc kubenswrapper[4945]: E0108 23:16:12.001052 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.010354 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.010411 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.010420 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.010434 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.010445 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:12Z","lastTransitionTime":"2026-01-08T23:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.112922 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.113025 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.113050 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.113080 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.113101 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:12Z","lastTransitionTime":"2026-01-08T23:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.216360 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.216417 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.216439 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.216484 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.216505 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:12Z","lastTransitionTime":"2026-01-08T23:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.320080 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.320143 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.320160 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.320183 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.320200 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:12Z","lastTransitionTime":"2026-01-08T23:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.424348 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.424416 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.424454 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.424485 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.424509 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:12Z","lastTransitionTime":"2026-01-08T23:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.528242 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.528386 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.528406 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.528430 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.528448 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:12Z","lastTransitionTime":"2026-01-08T23:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.631070 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.631125 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.631140 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.631165 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.631182 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:12Z","lastTransitionTime":"2026-01-08T23:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.733525 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.733610 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.733631 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.733657 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.733704 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:12Z","lastTransitionTime":"2026-01-08T23:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.837326 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.837622 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.837687 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.837749 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.837829 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:12Z","lastTransitionTime":"2026-01-08T23:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.940611 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.940903 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.941090 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.941208 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.941300 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:12Z","lastTransitionTime":"2026-01-08T23:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:12 crc kubenswrapper[4945]: I0108 23:16:12.999680 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl"
Jan 08 23:16:13 crc kubenswrapper[4945]: E0108 23:16:13.000355 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95"
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.000722 4945 scope.go:117] "RemoveContainer" containerID="94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c"
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.045487 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.045808 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.045826 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.045851 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.045870 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:13Z","lastTransitionTime":"2026-01-08T23:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.148905 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.148968 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.149031 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.149068 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.149092 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:13Z","lastTransitionTime":"2026-01-08T23:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.252463 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.253096 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.253391 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.253756 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.253793 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:13Z","lastTransitionTime":"2026-01-08T23:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.265460 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovnkube-controller/1.log"
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.269409 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerStarted","Data":"4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1"}
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.269815 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl"
Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.292757 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43
d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:13Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.311932 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c
037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 
23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:13Z is after 
2025-08-24T17:21:41Z" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.324936 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:13Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.351394 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:13Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:13 crc 
kubenswrapper[4945]: I0108 23:16:13.356375 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.356416 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.356427 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.356443 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.356455 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:13Z","lastTransitionTime":"2026-01-08T23:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.365371 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:13Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.380710 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:13Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.400797 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:13Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.424742 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c
9bad44fb0a3e61a9cd39e1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:00Z\\\",\\\"message\\\":\\\"ion (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0108 23:16:00.078935 6376 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.078987 6376 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.079690 6376 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079806 6376 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079950 6376 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.080238 6376 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.081011 6376 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0108 23:16:00.081045 6376 factory.go:656] Stopping watch factory\\\\nI0108 23:16:00.081062 6376 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:13Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.446445 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:13Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.459774 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.459862 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.459880 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.459909 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.459930 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:13Z","lastTransitionTime":"2026-01-08T23:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.474031 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:13Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.504499 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:13Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.518391 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:13Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.529115 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:13Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.541178 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:13Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.551738 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cbedd0-f69d-4a28-9077-13fed644be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g8gcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:13Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:13 crc 
kubenswrapper[4945]: I0108 23:16:13.562936 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.563009 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.563025 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.563046 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.563058 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:13Z","lastTransitionTime":"2026-01-08T23:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.566829 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:13Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 
23:16:13.581710 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:13Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.665896 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.665933 4945 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.665944 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.665960 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.665973 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:13Z","lastTransitionTime":"2026-01-08T23:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.768294 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.768379 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.768396 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.768427 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.768442 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:13Z","lastTransitionTime":"2026-01-08T23:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.870728 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.870793 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.870845 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.870875 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.870894 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:13Z","lastTransitionTime":"2026-01-08T23:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.974714 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.975229 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.975446 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.975626 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:13 crc kubenswrapper[4945]: I0108 23:16:13.975794 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:13Z","lastTransitionTime":"2026-01-08T23:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.000574 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.000639 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:14 crc kubenswrapper[4945]: E0108 23:16:14.000831 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.000986 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:14 crc kubenswrapper[4945]: E0108 23:16:14.001168 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:14 crc kubenswrapper[4945]: E0108 23:16:14.001392 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.078954 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.079081 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.079103 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.079132 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.079154 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:14Z","lastTransitionTime":"2026-01-08T23:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.182559 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.182674 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.182703 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.182737 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.182759 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:14Z","lastTransitionTime":"2026-01-08T23:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.278172 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovnkube-controller/2.log" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.279211 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovnkube-controller/1.log" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.284592 4945 generic.go:334] "Generic (PLEG): container finished" podID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerID="4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1" exitCode=1 Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.284692 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerDied","Data":"4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1"} Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.284771 4945 scope.go:117] "RemoveContainer" containerID="94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.285340 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.285413 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.285441 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.285473 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.285500 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:14Z","lastTransitionTime":"2026-01-08T23:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.286452 4945 scope.go:117] "RemoveContainer" containerID="4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1" Jan 08 23:16:14 crc kubenswrapper[4945]: E0108 23:16:14.286950 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.309464 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cbedd0-f69d-4a28-9077-13fed644be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:03Z\\\"}}\" for 
pod \"openshift-multus\"/\"network-metrics-daemon-g8gcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:14Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.332507 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:14Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.353561 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:14Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.377808 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:14Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.388684 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.388747 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.388770 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.388792 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.388810 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:14Z","lastTransitionTime":"2026-01-08T23:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.394427 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:14Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.407989 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:14Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.427460 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:14Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.448677 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:14Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.465589 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:14Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.482253 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:14Z is after 2025-08-24T17:21:41Z" Jan 08 
23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.491262 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.491310 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.491322 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.491352 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.491365 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:14Z","lastTransitionTime":"2026-01-08T23:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.510233 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:14Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.532910 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:14Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.553726 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:14Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.580412 4945 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:14Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.594517 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.594570 4945 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.594583 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.594603 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.594616 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:14Z","lastTransitionTime":"2026-01-08T23:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.601822 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:14Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.620987 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:14Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.644720 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c
9bad44fb0a3e61a9cd39e1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94cc63abfffdb370406c1e0cc30a3b8cc30920fc2af7db4c1f4069611c453e6c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:00Z\\\",\\\"message\\\":\\\"ion (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0108 23:16:00.078935 6376 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.078987 6376 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:00.079690 6376 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079806 6376 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.079950 6376 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.080238 6376 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:00.081011 6376 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0108 23:16:00.081045 6376 factory.go:656] Stopping watch factory\\\\nI0108 23:16:00.081062 6376 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:14Z\\\",\\\"message\\\":\\\"ers/factory.go:160\\\\nI0108 23:16:13.978557 6599 factory.go:656] Stopping watch factory\\\\nI0108 23:16:13.978587 6599 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978881 6599 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978915 6599 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978934 6599 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.979174 6599 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978946 6599 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.979749 6599 reflector.go:311] Stopping 
reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:16:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d4
3129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:14Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.696904 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.696939 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.696952 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.696973 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.697012 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:14Z","lastTransitionTime":"2026-01-08T23:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.799431 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.799502 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.799522 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.799552 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.799569 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:14Z","lastTransitionTime":"2026-01-08T23:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.902036 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.902097 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.902116 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.902143 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.902161 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:14Z","lastTransitionTime":"2026-01-08T23:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:14 crc kubenswrapper[4945]: I0108 23:16:14.999531 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:14 crc kubenswrapper[4945]: E0108 23:16:14.999738 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.004602 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.004656 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.004674 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.004722 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.004743 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:15Z","lastTransitionTime":"2026-01-08T23:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.107763 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.107806 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.107822 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.107836 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.107845 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:15Z","lastTransitionTime":"2026-01-08T23:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.211196 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.211276 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.211298 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.211323 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.211340 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:15Z","lastTransitionTime":"2026-01-08T23:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.293524 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovnkube-controller/2.log" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.299818 4945 scope.go:117] "RemoveContainer" containerID="4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1" Jan 08 23:16:15 crc kubenswrapper[4945]: E0108 23:16:15.300143 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.314177 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.314310 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.314394 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.314435 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.314517 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:15Z","lastTransitionTime":"2026-01-08T23:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.317610 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:15Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.334817 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cbedd0-f69d-4a28-9077-13fed644be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g8gcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:15Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.350949 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:15Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.371425 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:15Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.388307 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:15Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.403404 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:15Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.416360 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:15Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.417566 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.417609 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.417622 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.417642 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.417656 4945 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:15Z","lastTransitionTime":"2026-01-08T23:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.438574 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:15Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.460456 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:15Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.485866 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:15Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.500582 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:15Z is after 2025-08-24T17:21:41Z" Jan 08 
23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.520507 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.520549 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.520561 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.520578 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.520591 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:15Z","lastTransitionTime":"2026-01-08T23:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.526416 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:15Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.548984 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:15Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.567590 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:15Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.586386 4945 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e80
42f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:15Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.603892 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:15Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.622978 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.623038 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.623050 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.623068 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.623080 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:15Z","lastTransitionTime":"2026-01-08T23:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.630135 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:14Z\\\",\\\"message\\\":\\\"ers/factory.go:160\\\\nI0108 23:16:13.978557 6599 factory.go:656] Stopping watch factory\\\\nI0108 23:16:13.978587 6599 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978881 6599 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978915 6599 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978934 6599 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.979174 6599 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978946 6599 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.979749 6599 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:16:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:15Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.725652 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.725692 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.725701 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.725716 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.725726 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:15Z","lastTransitionTime":"2026-01-08T23:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.828248 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.828295 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.828304 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.828318 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.828330 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:15Z","lastTransitionTime":"2026-01-08T23:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.931139 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.931197 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.931209 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.931229 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:15 crc kubenswrapper[4945]: I0108 23:16:15.931242 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:15Z","lastTransitionTime":"2026-01-08T23:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.000041 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.000140 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:16 crc kubenswrapper[4945]: E0108 23:16:16.000243 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.000264 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:16 crc kubenswrapper[4945]: E0108 23:16:16.000403 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:16 crc kubenswrapper[4945]: E0108 23:16:16.000520 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.033677 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.033736 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.033753 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.033776 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.033792 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:16Z","lastTransitionTime":"2026-01-08T23:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.136913 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.136959 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.136968 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.136981 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.137009 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:16Z","lastTransitionTime":"2026-01-08T23:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.240260 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.240320 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.240337 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.240362 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.240379 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:16Z","lastTransitionTime":"2026-01-08T23:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.344258 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.344348 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.344372 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.344403 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.344444 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:16Z","lastTransitionTime":"2026-01-08T23:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.447784 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.447860 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.447884 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.447912 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.447933 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:16Z","lastTransitionTime":"2026-01-08T23:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.550346 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.550409 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.550426 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.550453 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.550470 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:16Z","lastTransitionTime":"2026-01-08T23:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.653401 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.653439 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.653449 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.653462 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.653472 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:16Z","lastTransitionTime":"2026-01-08T23:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.755495 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.755535 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.755543 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.755558 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.755567 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:16Z","lastTransitionTime":"2026-01-08T23:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.857724 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.857767 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.857779 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.857795 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.857820 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:16Z","lastTransitionTime":"2026-01-08T23:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.961372 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.961420 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.961429 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.961444 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.961455 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:16Z","lastTransitionTime":"2026-01-08T23:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:16 crc kubenswrapper[4945]: I0108 23:16:16.999561 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:16 crc kubenswrapper[4945]: E0108 23:16:16.999770 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.064835 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.064904 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.064922 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.064948 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.064972 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:17Z","lastTransitionTime":"2026-01-08T23:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.167507 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.167573 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.167591 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.167616 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.167633 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:17Z","lastTransitionTime":"2026-01-08T23:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.270280 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.270355 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.270381 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.270411 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.270435 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:17Z","lastTransitionTime":"2026-01-08T23:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.373178 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.373280 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.373299 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.373326 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.373345 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:17Z","lastTransitionTime":"2026-01-08T23:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.475522 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.475563 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.475572 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.475585 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.475601 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:17Z","lastTransitionTime":"2026-01-08T23:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.617940 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.618033 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.618060 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.618091 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.618112 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:17Z","lastTransitionTime":"2026-01-08T23:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.720139 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.720169 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.720180 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.720194 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.720204 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:17Z","lastTransitionTime":"2026-01-08T23:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.821809 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.821847 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.821860 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.821874 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.821885 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:17Z","lastTransitionTime":"2026-01-08T23:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.925223 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.925273 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.925285 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.925301 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.925312 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:17Z","lastTransitionTime":"2026-01-08T23:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:17 crc kubenswrapper[4945]: I0108 23:16:17.999635 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:17.999743 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 08 23:16:18 crc kubenswrapper[4945]: E0108 23:16:17.999768 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:17.999647 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 08 23:16:18 crc kubenswrapper[4945]: E0108 23:16:17.999889 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 08 23:16:18 crc kubenswrapper[4945]: E0108 23:16:18.000101 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
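Three more pods hit the same gate within a millisecond of each other (network-check-source, network-check-target, networking-console-plugin), which confirms the failure is node-wide rather than pod-specific: every pod that needs a pod-network sandbox is blocked behind the single NetworkReady=false condition. The condition payload recorded by setters.go is well-formed JSON, so the transitions can be extracted directly. A sketch (hypothetical helper; assumes one journal entry per line, as reflowed above):

    # ready_condition.py -- hypothetical; prints each recorded Ready condition.
    import json
    import re
    import sys

    # Matches the 'Node became not ready' entries and captures the JSON payload.
    COND = re.compile(r'"Node became not ready" node="(?P<node>[^"]+)" condition=(?P<body>\{.*\})')

    for line in sys.stdin:
        m = COND.search(line)
        if m:
            cond = json.loads(m.group("body"))
            print(m.group("node"), cond["lastHeartbeatTime"], cond["reason"],
                  "-", cond["message"].split(". ")[-1])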
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.027479 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.027554 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.027579 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.027607 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.027630 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:18Z","lastTransitionTime":"2026-01-08T23:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.130205 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.130261 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.130279 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.130303 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.130321 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:18Z","lastTransitionTime":"2026-01-08T23:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.233637 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.233692 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.233711 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.233736 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.233753 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:18Z","lastTransitionTime":"2026-01-08T23:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.336697 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.336759 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.336782 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.336811 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.336832 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:18Z","lastTransitionTime":"2026-01-08T23:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.439742 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.439812 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.439834 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.439861 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.439883 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:18Z","lastTransitionTime":"2026-01-08T23:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.543117 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.543171 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.543188 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.543211 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.543233 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:18Z","lastTransitionTime":"2026-01-08T23:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.646736 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.646794 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.646812 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.646841 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.646859 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:18Z","lastTransitionTime":"2026-01-08T23:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.749429 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.749495 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.749514 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.749539 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.749556 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:18Z","lastTransitionTime":"2026-01-08T23:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.852605 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.852943 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.853349 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.853641 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.853940 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:18Z","lastTransitionTime":"2026-01-08T23:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.957091 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.957141 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.957168 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.957182 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.957193 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:18Z","lastTransitionTime":"2026-01-08T23:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:18 crc kubenswrapper[4945]: I0108 23:16:18.999813 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl"
Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:18.999968 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.053371 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.053446 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.053506 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.053532 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.053549 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:19Z","lastTransitionTime":"2026-01-08T23:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.075333 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:19Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.080859 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.080913 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.080931 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.080955 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.080972 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:19Z","lastTransitionTime":"2026-01-08T23:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.100701 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:19Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.104734 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.104782 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.104791 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.104808 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.104819 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:19Z","lastTransitionTime":"2026-01-08T23:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.116688 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:19Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.120534 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.120910 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.121188 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.121419 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.121579 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:19Z","lastTransitionTime":"2026-01-08T23:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.134920 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:19Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.138552 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.138591 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.138602 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.138618 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.138630 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:19Z","lastTransitionTime":"2026-01-08T23:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.151859 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:19Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.152193 4945 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.153776 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.153885 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.153970 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.154087 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.154187 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:19Z","lastTransitionTime":"2026-01-08T23:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.257638 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.257693 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.257712 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.257738 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.257757 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:19Z","lastTransitionTime":"2026-01-08T23:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.360606 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.360951 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.361391 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.361785 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.362206 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:19Z","lastTransitionTime":"2026-01-08T23:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.466188 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.466242 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.466264 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.466290 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.466310 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:19Z","lastTransitionTime":"2026-01-08T23:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.569159 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.569550 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.569676 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.569796 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.569903 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:19Z","lastTransitionTime":"2026-01-08T23:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.641559 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs\") pod \"network-metrics-daemon-g8gcl\" (UID: \"53cbedd0-f69d-4a28-9077-13fed644be95\") " pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.641770 4945 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.641846 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs podName:53cbedd0-f69d-4a28-9077-13fed644be95 nodeName:}" failed. No retries permitted until 2026-01-08 23:16:35.641822372 +0000 UTC m=+65.952981358 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs") pod "network-metrics-daemon-g8gcl" (UID: "53cbedd0-f69d-4a28-9077-13fed644be95") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.673069 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.673136 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.673156 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.673180 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.673198 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:19Z","lastTransitionTime":"2026-01-08T23:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.776539 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.776672 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.776689 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.776716 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.776732 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:19Z","lastTransitionTime":"2026-01-08T23:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.843669 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.843798 4945 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.843907 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-08 23:16:51.843880404 +0000 UTC m=+82.155039390 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.879929 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.879989 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.880035 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.880054 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.880066 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:19Z","lastTransitionTime":"2026-01-08T23:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.945067 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.945191 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.945228 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:16:51.945206462 +0000 UTC m=+82.256365408 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.945266 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.945309 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.945342 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.945367 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.945382 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.945385 4945 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.945393 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.945403 4945 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.945430 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-08 23:16:51.945423367 +0000 UTC m=+82.256582313 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.945462 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-08 23:16:51.945437898 +0000 UTC m=+82.256596884 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.945671 4945 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 08 23:16:19 crc kubenswrapper[4945]: E0108 23:16:19.945872 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-08 23:16:51.945830727 +0000 UTC m=+82.256989713 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.983288 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.983350 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.983374 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.983404 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.983426 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:19Z","lastTransitionTime":"2026-01-08T23:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:19 crc kubenswrapper[4945]: I0108 23:16:19.999716 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:19.999949 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:20 crc kubenswrapper[4945]: E0108 23:16:20.000157 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.000196 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:20 crc kubenswrapper[4945]: E0108 23:16:20.000301 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:20 crc kubenswrapper[4945]: E0108 23:16:20.000523 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.032754 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:20Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.049387 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:20Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.065353 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:20Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.078331 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:20Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.086213 4945 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.086288 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.086312 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.086351 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.086379 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:20Z","lastTransitionTime":"2026-01-08T23:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.095479 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:20Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.108498 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:20Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.128985 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:20Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.149865 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:14Z\\\",\\\"message\\\":\\\"ers/factory.go:160\\\\nI0108 23:16:13.978557 6599 factory.go:656] Stopping watch factory\\\\nI0108 23:16:13.978587 6599 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978881 6599 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978915 6599 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978934 6599 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.979174 6599 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978946 6599 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.979749 6599 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:16:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:20Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.162236 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:20Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.175486 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:20Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.189309 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.189360 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.189386 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.189416 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.189439 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:20Z","lastTransitionTime":"2026-01-08T23:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.193432 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:20Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.203367 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:20Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.213603 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:20Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.227300 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:20Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.239310 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cbedd0-f69d-4a28-9077-13fed644be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g8gcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:20Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:20 crc 
kubenswrapper[4945]: I0108 23:16:20.251077 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:20Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.263541 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:20Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.292677 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.292823 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.292845 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.293017 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.293054 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:20Z","lastTransitionTime":"2026-01-08T23:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.396118 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.396161 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.396172 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.396189 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.396201 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:20Z","lastTransitionTime":"2026-01-08T23:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.498369 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.498403 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.498414 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.498431 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.498443 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:20Z","lastTransitionTime":"2026-01-08T23:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.601448 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.601509 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.601533 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.601561 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.601622 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:20Z","lastTransitionTime":"2026-01-08T23:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.704350 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.705473 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.705530 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.705558 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.705577 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:20Z","lastTransitionTime":"2026-01-08T23:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.808657 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.808730 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.808748 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.808774 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.808794 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:20Z","lastTransitionTime":"2026-01-08T23:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.912341 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.912555 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.912580 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.912605 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.912623 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:20Z","lastTransitionTime":"2026-01-08T23:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:20 crc kubenswrapper[4945]: I0108 23:16:20.999474 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:20 crc kubenswrapper[4945]: E0108 23:16:20.999673 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.015708 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.015762 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.015781 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.015804 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.015822 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:21Z","lastTransitionTime":"2026-01-08T23:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.119246 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.119301 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.119313 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.119333 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.119353 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:21Z","lastTransitionTime":"2026-01-08T23:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.222295 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.222565 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.222639 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.222713 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.222789 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:21Z","lastTransitionTime":"2026-01-08T23:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.325425 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.325510 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.325530 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.325929 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.325965 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:21Z","lastTransitionTime":"2026-01-08T23:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.428830 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.429189 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.429199 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.429215 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.429225 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:21Z","lastTransitionTime":"2026-01-08T23:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.531922 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.531958 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.531967 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.531981 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.532002 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:21Z","lastTransitionTime":"2026-01-08T23:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.634366 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.634457 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.634478 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.634503 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.634520 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:21Z","lastTransitionTime":"2026-01-08T23:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.737303 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.737366 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.737384 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.737406 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.737422 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:21Z","lastTransitionTime":"2026-01-08T23:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.840351 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.840418 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.840443 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.840474 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.840510 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:21Z","lastTransitionTime":"2026-01-08T23:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.943684 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.943748 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.943765 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.943789 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:21 crc kubenswrapper[4945]: I0108 23:16:21.943809 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:21Z","lastTransitionTime":"2026-01-08T23:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.000327 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.000403 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:22 crc kubenswrapper[4945]: E0108 23:16:22.000523 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.000575 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:22 crc kubenswrapper[4945]: E0108 23:16:22.000626 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:22 crc kubenswrapper[4945]: E0108 23:16:22.000794 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.047200 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.047276 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.047302 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.047331 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.047354 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:22Z","lastTransitionTime":"2026-01-08T23:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.150406 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.150562 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.150581 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.150605 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.150621 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:22Z","lastTransitionTime":"2026-01-08T23:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.253356 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.253827 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.254060 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.254279 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.254477 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:22Z","lastTransitionTime":"2026-01-08T23:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.358178 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.358259 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.358284 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.358317 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.358340 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:22Z","lastTransitionTime":"2026-01-08T23:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.461062 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.461123 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.461141 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.461165 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.461185 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:22Z","lastTransitionTime":"2026-01-08T23:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.564347 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.564662 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.564753 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.564857 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.564956 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:22Z","lastTransitionTime":"2026-01-08T23:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.668182 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.668281 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.668349 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.668385 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.668409 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:22Z","lastTransitionTime":"2026-01-08T23:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.704097 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.718431 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.721891 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92e
daf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:22Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.741585 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:22Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.768887 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c
9bad44fb0a3e61a9cd39e1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:14Z\\\",\\\"message\\\":\\\"ers/factory.go:160\\\\nI0108 23:16:13.978557 6599 factory.go:656] Stopping watch factory\\\\nI0108 23:16:13.978587 6599 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978881 6599 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978915 6599 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978934 6599 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.979174 6599 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978946 6599 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.979749 6599 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:16:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:22Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.771205 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.771260 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.771277 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.771306 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.771322 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:22Z","lastTransitionTime":"2026-01-08T23:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.782641 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cbedd0-f69d-4a28-9077-13fed644be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g8gcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:22Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.796084 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:22Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.812204 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:22Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.828597 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:22Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.843704 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:22Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.857259 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:22Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.876250 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.876304 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.876333 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.876363 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.876384 4945 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:22Z","lastTransitionTime":"2026-01-08T23:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.881804 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\
\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:22Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.902390 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:22Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.923166 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:22Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.939452 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:22Z is after 2025-08-24T17:21:41Z" Jan 08 
23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.973815 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:22Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.979911 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.979971 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.980025 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.980053 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.980072 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:22Z","lastTransitionTime":"2026-01-08T23:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:22 crc kubenswrapper[4945]: I0108 23:16:22.995070 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:22Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:23 crc kubenswrapper[4945]: I0108 23:16:23.000024 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:23 crc kubenswrapper[4945]: E0108 23:16:23.000277 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:23 crc kubenswrapper[4945]: I0108 23:16:23.011523 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:23Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:23 crc kubenswrapper[4945]: I0108 23:16:23.028914 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-08T23:16:23Z is after 2025-08-24T17:21:41Z"
Jan 08 23:16:23 crc kubenswrapper[4945]: I0108 23:16:23.083057 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:23 crc kubenswrapper[4945]: I0108 23:16:23.083131 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:23 crc kubenswrapper[4945]: I0108 23:16:23.083154 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:23 crc kubenswrapper[4945]: I0108 23:16:23.083222 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:23 crc kubenswrapper[4945]: I0108 23:16:23.083244 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:23Z","lastTransitionTime":"2026-01-08T23:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the preceding five-record "Recording event message for node" / "Node became not ready" block repeats with only timestamps changing at 23:16:23.185, .287, .391, .495, .597, .700, .804, and .907; duplicate blocks elided]
Jan 08 23:16:23 crc kubenswrapper[4945]: I0108 23:16:23.999692 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 08 23:16:23 crc kubenswrapper[4945]: I0108 23:16:23.999762 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 08 23:16:23 crc kubenswrapper[4945]: E0108 23:16:23.999840 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 08 23:16:24 crc kubenswrapper[4945]: E0108 23:16:23.999930 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 08 23:16:24 crc kubenswrapper[4945]: I0108 23:16:23.999654 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 08 23:16:24 crc kubenswrapper[4945]: E0108 23:16:24.000263 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[the same node-status block repeats at 23:16:24.010, .112, .215, .318, .420, .524, .628, .731, .837, and .941; duplicate blocks elided]
Jan 08 23:16:24 crc kubenswrapper[4945]: I0108 23:16:24.999316 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl"
Jan 08 23:16:24 crc kubenswrapper[4945]: E0108 23:16:24.999570 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95"
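The records above show the two failure modes driving this loop: every pod status patch is rejected because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24, and the node stays NotReady because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. Both conditions can be confirmed from the node with a short standalone probe. The Go sketch below is illustrative only, not kubelet code; the endpoint address and directory are the ones named in the log messages, and the program assumes it runs on the node while the webhook is listening.

// certcheck.go: minimal diagnostic sketch (not kubelet code) for the two
// failures recorded above. It reports the validity window of the TLS
// certificate served at a given address and lists whatever sits in the CNI
// config directory the kubelet is checking. The address and directory are
// taken from the log; both are assumptions about the local environment.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func main() {
	// Webhook endpoint named in the "failed calling webhook" errors above.
	addr := "127.0.0.1:9743"

	// InsecureSkipVerify is intentional: we want to inspect the certificate
	// that a verifying client (the kubelet) rejected as expired.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Fprintf(os.Stderr, "dial %s: %v\n", addr, err)
	} else {
		defer conn.Close()
		leaf := conn.ConnectionState().PeerCertificates[0]
		now := time.Now()
		fmt.Printf("cert %q: valid %s to %s (now %s, expired=%v)\n",
			leaf.Subject.CommonName,
			leaf.NotBefore.Format(time.RFC3339),
			leaf.NotAfter.Format(time.RFC3339),
			now.Format(time.RFC3339),
			now.After(leaf.NotAfter))
	}

	// The kubelet reports NetworkReady=false until a CNI config appears here.
	cniDir := "/etc/kubernetes/cni/net.d"
	matches, _ := filepath.Glob(filepath.Join(cniDir, "*")) // "*" cannot yield ErrBadPattern
	fmt.Printf("%d file(s) in %s: %v\n", len(matches), cniDir, matches)
}

Skipping chain verification in the probe is deliberate: a verifying client such as the kubelet aborts the handshake with exactly the x509 "certificate has expired" error seen above, whereas this probe accepts the certificate so its NotBefore/NotAfter window can be read directly.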
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.045044 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.045105 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.045123 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.045146 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.045163 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:25Z","lastTransitionTime":"2026-01-08T23:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.147411 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.147495 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.147525 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.147557 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.147582 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:25Z","lastTransitionTime":"2026-01-08T23:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.251555 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.251636 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.251660 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.251691 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.251713 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:25Z","lastTransitionTime":"2026-01-08T23:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.355029 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.355080 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.355097 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.355123 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.355139 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:25Z","lastTransitionTime":"2026-01-08T23:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.458179 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.458228 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.458238 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.458251 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.458261 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:25Z","lastTransitionTime":"2026-01-08T23:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.560712 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.560745 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.560754 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.560767 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.560777 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:25Z","lastTransitionTime":"2026-01-08T23:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.663045 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.663078 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.663086 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.663099 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.663108 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:25Z","lastTransitionTime":"2026-01-08T23:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.765733 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.765786 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.765801 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.765820 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.765832 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:25Z","lastTransitionTime":"2026-01-08T23:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.868607 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.868713 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.868737 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.868766 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.868789 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:25Z","lastTransitionTime":"2026-01-08T23:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.971713 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.971772 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.971789 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.971812 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.971832 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:25Z","lastTransitionTime":"2026-01-08T23:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.999536 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.999575 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:25 crc kubenswrapper[4945]: I0108 23:16:25.999739 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:25 crc kubenswrapper[4945]: E0108 23:16:25.999734 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:25 crc kubenswrapper[4945]: E0108 23:16:25.999837 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:26 crc kubenswrapper[4945]: E0108 23:16:25.999976 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.074110 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.074143 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.074151 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.074164 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.074173 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:26Z","lastTransitionTime":"2026-01-08T23:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.176829 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.176864 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.176874 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.176887 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.176896 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:26Z","lastTransitionTime":"2026-01-08T23:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.279595 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.279645 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.279657 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.279673 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.279685 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:26Z","lastTransitionTime":"2026-01-08T23:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.381277 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.381305 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.381317 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.381332 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.381344 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:26Z","lastTransitionTime":"2026-01-08T23:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.484184 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.484243 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.484254 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.484271 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.484282 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:26Z","lastTransitionTime":"2026-01-08T23:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.586907 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.586938 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.586948 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.586961 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.586968 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:26Z","lastTransitionTime":"2026-01-08T23:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.689555 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.689817 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.689974 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.690181 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.690367 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:26Z","lastTransitionTime":"2026-01-08T23:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.793652 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.793693 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.793701 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.793714 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.793724 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:26Z","lastTransitionTime":"2026-01-08T23:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.895399 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.895862 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.895927 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.896043 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.896145 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:26Z","lastTransitionTime":"2026-01-08T23:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.998546 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.998599 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.998616 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.998643 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.998659 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:26Z","lastTransitionTime":"2026-01-08T23:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:26 crc kubenswrapper[4945]: I0108 23:16:26.999335 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:26 crc kubenswrapper[4945]: E0108 23:16:26.999514 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.000124 4945 scope.go:117] "RemoveContainer" containerID="4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1" Jan 08 23:16:27 crc kubenswrapper[4945]: E0108 23:16:27.000374 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.106273 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.106335 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.106348 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.106366 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.106383 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:27Z","lastTransitionTime":"2026-01-08T23:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.209156 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.209218 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.209237 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.209265 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.209283 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:27Z","lastTransitionTime":"2026-01-08T23:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.311684 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.312187 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.312445 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.312857 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.313120 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:27Z","lastTransitionTime":"2026-01-08T23:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.416216 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.416289 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.416311 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.416341 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.416367 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:27Z","lastTransitionTime":"2026-01-08T23:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.519326 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.519380 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.519400 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.519422 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.519438 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:27Z","lastTransitionTime":"2026-01-08T23:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.622923 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.657792 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.658230 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.658299 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.658329 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:27Z","lastTransitionTime":"2026-01-08T23:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.761501 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.761678 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.761760 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.761823 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.761842 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:27Z","lastTransitionTime":"2026-01-08T23:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.864879 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.864955 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.865038 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.865076 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.865099 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:27Z","lastTransitionTime":"2026-01-08T23:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.968629 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.968672 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.968681 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.968696 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.968706 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:27Z","lastTransitionTime":"2026-01-08T23:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.999556 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.999614 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:27 crc kubenswrapper[4945]: E0108 23:16:27.999749 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:27 crc kubenswrapper[4945]: I0108 23:16:27.999779 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:28 crc kubenswrapper[4945]: E0108 23:16:27.999904 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:28 crc kubenswrapper[4945]: E0108 23:16:28.000032 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.071603 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.071666 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.071684 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.071709 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.071727 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:28Z","lastTransitionTime":"2026-01-08T23:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.174590 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.174692 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.174712 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.174733 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.174748 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:28Z","lastTransitionTime":"2026-01-08T23:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.277724 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.277781 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.277813 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.277836 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.277854 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:28Z","lastTransitionTime":"2026-01-08T23:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.380681 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.380767 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.380788 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.380815 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.380834 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:28Z","lastTransitionTime":"2026-01-08T23:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.483648 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.483680 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.483689 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.483700 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.483708 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:28Z","lastTransitionTime":"2026-01-08T23:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.586634 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.586679 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.586688 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.586700 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.586708 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:28Z","lastTransitionTime":"2026-01-08T23:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.689436 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.689462 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.689471 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.689484 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.689495 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:28Z","lastTransitionTime":"2026-01-08T23:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.792426 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.792481 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.792498 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.792524 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.792542 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:28Z","lastTransitionTime":"2026-01-08T23:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.894876 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.894932 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.894955 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.894987 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.895049 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:28Z","lastTransitionTime":"2026-01-08T23:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.998320 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.998391 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.998410 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.998434 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.998451 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:28Z","lastTransitionTime":"2026-01-08T23:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:28 crc kubenswrapper[4945]: I0108 23:16:28.999840 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:29 crc kubenswrapper[4945]: E0108 23:16:29.000043 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.101749 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.101816 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.101843 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.101872 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.101897 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:29Z","lastTransitionTime":"2026-01-08T23:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.204812 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.204865 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.204882 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.204909 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.204927 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:29Z","lastTransitionTime":"2026-01-08T23:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.308233 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.308293 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.308310 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.308332 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.308353 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:29Z","lastTransitionTime":"2026-01-08T23:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.345665 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.345716 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.345747 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.345782 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.345808 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:29Z","lastTransitionTime":"2026-01-08T23:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:29 crc kubenswrapper[4945]: E0108 23:16:29.371543 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:29Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.377083 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.377182 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.377233 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.377260 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.377277 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:29Z","lastTransitionTime":"2026-01-08T23:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:29 crc kubenswrapper[4945]: E0108 23:16:29.398267 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:29Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.403303 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.403373 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
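Each retry above fails in the TLS handshake with the same x509 expiry error, before the status patch is ever evaluated. A minimal Go sketch of an out-of-band check that confirms this (the 127.0.0.1:9743 address comes from the failing webhook URL in the log; skipping verification and everything else here is an illustrative assumption, not kubelet code):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Address taken from the failing webhook URL in the log above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip verification so the handshake succeeds even though the
		// certificate is expired; we only want to read its validity window.
		InsecureSkipVerify: true,
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject: ", cert.Subject)
	fmt.Println("notAfter:", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate has expired, matching the x509 error in the log")
	}
}
```

Against the live endpoint this should report a notAfter of 2025-08-24T17:21:41Z, the expiry the handshake error cites.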
event="NodeHasNoDiskPressure" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.403391 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.403417 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.403437 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:29Z","lastTransitionTime":"2026-01-08T23:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:29 crc kubenswrapper[4945]: E0108 23:16:29.422973 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:29Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.428901 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.428946 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.428989 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.429041 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.429089 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:29Z","lastTransitionTime":"2026-01-08T23:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:29 crc kubenswrapper[4945]: E0108 23:16:29.449936 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:29Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.454768 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.454885 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
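Independently of the webhook failure, every NodeNotReady condition above points at an empty CNI configuration directory. A rough sketch of what such a readiness check amounts to (an assumed illustration of the CNI convention of .conf/.conflist/.json files in the named directory, not a quote of kubelet or CRI-O source):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory taken from the log message; the extension filter follows the
	// common CNI config convention and is an assumption, not kubelet source.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("no CNI configuration file found; the node stays NotReady")
		return
	}
	fmt.Println("CNI configurations:", confs)
}
```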
event="NodeHasNoDiskPressure" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.454907 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.454929 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.454945 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:29Z","lastTransitionTime":"2026-01-08T23:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:29 crc kubenswrapper[4945]: E0108 23:16:29.474315 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:29Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:29 crc kubenswrapper[4945]: E0108 23:16:29.474557 4945 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.476600 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
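The 23:16:29.474557 record is where the kubelet gives up: five consecutive status patches, each rejected by the expired webhook certificate, exhaust its fixed retry budget. A minimal sketch of that bounded-retry pattern (the constant name follows the upstream kubelet's nodeStatusUpdateRetry; the tryPatch callback is a hypothetical stand-in, and the whole program is illustrative, not kubelet source):

```go
package main

import (
	"errors"
	"fmt"
)

// Fixed retry budget, matching the five failed attempts logged above.
const nodeStatusUpdateRetry = 5

func updateNodeStatus(tryPatch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		err := tryPatch()
		if err == nil {
			return nil
		}
		// Corresponds to: "Error updating node status, will retry"
		fmt.Println("Error updating node status, will retry:", err)
	}
	// Corresponds to: "Unable to update node status" err="update node status exceeds retry count"
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Hypothetical stand-in for the PATCH that the admission webhook keeps
	// rejecting during the TLS handshake.
	err := updateNodeStatus(func() error {
		return errors.New("tls: failed to verify certificate: x509: certificate has expired or is not yet valid")
	})
	fmt.Println(err)
}
```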
event="NodeHasSufficientMemory"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.476637 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.476654 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.476677 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.476697 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:29Z","lastTransitionTime":"2026-01-08T23:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.579631 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.579675 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.579732 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.579754 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.579770 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:29Z","lastTransitionTime":"2026-01-08T23:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.683103 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.683163 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.683181 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.683203 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.683222 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:29Z","lastTransitionTime":"2026-01-08T23:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.786963 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.787101 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.787165 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.787191 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.787208 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:29Z","lastTransitionTime":"2026-01-08T23:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.890609 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.890671 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.890696 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.890726 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.890789 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:29Z","lastTransitionTime":"2026-01-08T23:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.993819 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.994184 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.994418 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.994642 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.994892 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:29Z","lastTransitionTime":"2026-01-08T23:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:29 crc kubenswrapper[4945]: I0108 23:16:29.999398 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:29.999481 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 08 23:16:30 crc kubenswrapper[4945]: E0108 23:16:29.999573 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 08 23:16:30 crc kubenswrapper[4945]: E0108 23:16:29.999785 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:29.999863 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 08 23:16:30 crc kubenswrapper[4945]: E0108 23:16:30.000096 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.019730 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.091339 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.097411 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.097471 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.097489 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.097515 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.097533 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:30Z","lastTransitionTime":"2026-01-08T23:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.118571 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.136973 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cbedd0-f69d-4a28-9077-13fed644be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g8gcl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.156551 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.174817 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.194404 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.202902 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.203021 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.203038 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.203060 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.203078 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:30Z","lastTransitionTime":"2026-01-08T23:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.216102 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.237538 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.256243 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a2cf860-f78b-453b-962e-d647440a0869\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d36680efef4e861275546b973f2f5951f12bd9a1345e7e42a074f63a239e39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8fbe7c9002a40a9710659bf5e26884c493bb4f04a1605011f74cf208e6d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8668fcd7154876b41602c1afc458c8680fbe1076da40d0ba670399fb1c815f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.275378 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.298709 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9
518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\
"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.306399 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.306479 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.306504 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.306553 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.306578 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:30Z","lastTransitionTime":"2026-01-08T23:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.318453 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.355834 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.380976 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.401125 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.408273 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.408304 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.408315 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.408331 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.408340 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:30Z","lastTransitionTime":"2026-01-08T23:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.418192 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.440161 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:14Z\\\",\\\"message\\\":\\\"ers/factory.go:160\\\\nI0108 23:16:13.978557 6599 factory.go:656] Stopping watch factory\\\\nI0108 23:16:13.978587 6599 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978881 6599 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978915 6599 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978934 6599 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.979174 6599 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978946 6599 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.979749 6599 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:16:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:30Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.513843 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.514091 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.514265 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.514409 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.514611 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:30Z","lastTransitionTime":"2026-01-08T23:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.617340 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.617368 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.617376 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.617389 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.617398 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:30Z","lastTransitionTime":"2026-01-08T23:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.719820 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.719885 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.719902 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.719925 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.719942 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:30Z","lastTransitionTime":"2026-01-08T23:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.822956 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.823032 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.823048 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.823086 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.823098 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:30Z","lastTransitionTime":"2026-01-08T23:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.926107 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.926201 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.926221 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.926251 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:30 crc kubenswrapper[4945]: I0108 23:16:30.926270 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:30Z","lastTransitionTime":"2026-01-08T23:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:30.999861 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:31 crc kubenswrapper[4945]: E0108 23:16:31.000111 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.029416 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.029494 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.029514 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.029540 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.029561 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:31Z","lastTransitionTime":"2026-01-08T23:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.132406 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.132750 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.132874 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.133022 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.133163 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:31Z","lastTransitionTime":"2026-01-08T23:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.236308 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.236358 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.236372 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.236390 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.236401 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:31Z","lastTransitionTime":"2026-01-08T23:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.339190 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.339267 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.339286 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.339313 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.339333 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:31Z","lastTransitionTime":"2026-01-08T23:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.442543 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.442612 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.442631 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.442660 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.442678 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:31Z","lastTransitionTime":"2026-01-08T23:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.546808 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.546853 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.546863 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.546881 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.546894 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:31Z","lastTransitionTime":"2026-01-08T23:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.649216 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.649254 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.649265 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.649306 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.649319 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:31Z","lastTransitionTime":"2026-01-08T23:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.751463 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.751507 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.751518 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.751535 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.751545 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:31Z","lastTransitionTime":"2026-01-08T23:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.855442 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.855502 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.855524 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.855552 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.855573 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:31Z","lastTransitionTime":"2026-01-08T23:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.960527 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.960573 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.960590 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.960611 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:31 crc kubenswrapper[4945]: I0108 23:16:31.960627 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:31Z","lastTransitionTime":"2026-01-08T23:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.003964 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:32 crc kubenswrapper[4945]: E0108 23:16:32.004154 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.004261 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:32 crc kubenswrapper[4945]: E0108 23:16:32.004344 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.004411 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:32 crc kubenswrapper[4945]: E0108 23:16:32.004483 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.063181 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.063239 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.063262 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.063291 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.063315 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:32Z","lastTransitionTime":"2026-01-08T23:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.166941 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.167026 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.167049 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.167075 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.167097 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:32Z","lastTransitionTime":"2026-01-08T23:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.269930 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.269958 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.269968 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.269980 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.269988 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:32Z","lastTransitionTime":"2026-01-08T23:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.374321 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.374387 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.374404 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.374426 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.374444 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:32Z","lastTransitionTime":"2026-01-08T23:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.477833 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.477876 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.477908 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.477923 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.477933 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:32Z","lastTransitionTime":"2026-01-08T23:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.581334 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.581372 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.581380 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.581395 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.581403 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:32Z","lastTransitionTime":"2026-01-08T23:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.684065 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.684414 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.684564 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.684709 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.684845 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:32Z","lastTransitionTime":"2026-01-08T23:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.788317 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.788350 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.788358 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.788371 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.788380 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:32Z","lastTransitionTime":"2026-01-08T23:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.891483 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.891551 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.891569 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.891594 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.891615 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:32Z","lastTransitionTime":"2026-01-08T23:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.995139 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.995184 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.995193 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.995207 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:32 crc kubenswrapper[4945]: I0108 23:16:32.995216 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:32Z","lastTransitionTime":"2026-01-08T23:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.000160 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:33 crc kubenswrapper[4945]: E0108 23:16:33.000362 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.104304 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.104402 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.104459 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.104500 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.104529 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:33Z","lastTransitionTime":"2026-01-08T23:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.207214 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.207259 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.207271 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.207286 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.207298 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:33Z","lastTransitionTime":"2026-01-08T23:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.311281 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.311338 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.311350 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.311370 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.311383 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:33Z","lastTransitionTime":"2026-01-08T23:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.413811 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.413863 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.413876 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.413893 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.413906 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:33Z","lastTransitionTime":"2026-01-08T23:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.516371 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.516451 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.516472 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.516506 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.516532 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:33Z","lastTransitionTime":"2026-01-08T23:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.618774 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.618841 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.618851 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.618867 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.618877 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:33Z","lastTransitionTime":"2026-01-08T23:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.722052 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.722095 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.722105 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.722122 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.722135 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:33Z","lastTransitionTime":"2026-01-08T23:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.825251 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.825308 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.825320 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.825341 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.825354 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:33Z","lastTransitionTime":"2026-01-08T23:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.928054 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.928107 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.928119 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.928134 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.928145 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:33Z","lastTransitionTime":"2026-01-08T23:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.999596 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.999620 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:33 crc kubenswrapper[4945]: I0108 23:16:33.999848 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:34 crc kubenswrapper[4945]: E0108 23:16:34.000101 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:34 crc kubenswrapper[4945]: E0108 23:16:34.000286 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:34 crc kubenswrapper[4945]: E0108 23:16:34.000483 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.030632 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.030686 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.030701 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.030721 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.030738 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:34Z","lastTransitionTime":"2026-01-08T23:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.135064 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.135105 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.135115 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.135132 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.135146 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:34Z","lastTransitionTime":"2026-01-08T23:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.238730 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.238783 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.238794 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.238816 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.238831 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:34Z","lastTransitionTime":"2026-01-08T23:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.342429 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.342476 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.342489 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.342511 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.342525 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:34Z","lastTransitionTime":"2026-01-08T23:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.445961 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.446044 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.446058 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.446078 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.446090 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:34Z","lastTransitionTime":"2026-01-08T23:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.549253 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.549307 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.549321 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.549342 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.549355 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:34Z","lastTransitionTime":"2026-01-08T23:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.652440 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.652493 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.652504 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.652521 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.652535 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:34Z","lastTransitionTime":"2026-01-08T23:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.755773 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.755926 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.755951 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.755980 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.756029 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:34Z","lastTransitionTime":"2026-01-08T23:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.858948 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.859009 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.859021 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.859040 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.859052 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:34Z","lastTransitionTime":"2026-01-08T23:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.961073 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.961110 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.961118 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.961130 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:34 crc kubenswrapper[4945]: I0108 23:16:34.961139 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:34Z","lastTransitionTime":"2026-01-08T23:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.000030 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:35 crc kubenswrapper[4945]: E0108 23:16:35.000489 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.063447 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.063506 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.063524 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.063548 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.063565 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:35Z","lastTransitionTime":"2026-01-08T23:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.166727 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.167524 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.167743 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.167871 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.168025 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:35Z","lastTransitionTime":"2026-01-08T23:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.270561 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.270619 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.270637 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.270661 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.270677 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:35Z","lastTransitionTime":"2026-01-08T23:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.373285 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.373327 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.373338 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.373354 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.373372 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:35Z","lastTransitionTime":"2026-01-08T23:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.476188 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.476228 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.476237 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.476252 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.476261 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:35Z","lastTransitionTime":"2026-01-08T23:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.578586 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.578647 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.578664 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.578688 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.578709 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:35Z","lastTransitionTime":"2026-01-08T23:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.682583 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.682643 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.682658 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.682681 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.682701 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:35Z","lastTransitionTime":"2026-01-08T23:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.722046 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs\") pod \"network-metrics-daemon-g8gcl\" (UID: \"53cbedd0-f69d-4a28-9077-13fed644be95\") " pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:35 crc kubenswrapper[4945]: E0108 23:16:35.722384 4945 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 08 23:16:35 crc kubenswrapper[4945]: E0108 23:16:35.722780 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs podName:53cbedd0-f69d-4a28-9077-13fed644be95 nodeName:}" failed. No retries permitted until 2026-01-08 23:17:07.722751167 +0000 UTC m=+98.033910143 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs") pod "network-metrics-daemon-g8gcl" (UID: "53cbedd0-f69d-4a28-9077-13fed644be95") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.785739 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.785799 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.785822 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.785890 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.785912 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:35Z","lastTransitionTime":"2026-01-08T23:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.888754 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.888818 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.888835 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.888859 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.888877 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:35Z","lastTransitionTime":"2026-01-08T23:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.991395 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.991452 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.991477 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.991505 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:35 crc kubenswrapper[4945]: I0108 23:16:35.991525 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:35Z","lastTransitionTime":"2026-01-08T23:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.000351 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.000483 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.000552 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:36 crc kubenswrapper[4945]: E0108 23:16:36.000640 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:36 crc kubenswrapper[4945]: E0108 23:16:36.000803 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:36 crc kubenswrapper[4945]: E0108 23:16:36.000928 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.012676 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.093947 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.094299 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.094413 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.094515 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.094605 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:36Z","lastTransitionTime":"2026-01-08T23:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.197889 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.197948 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.197965 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.197983 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.198012 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:36Z","lastTransitionTime":"2026-01-08T23:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.300042 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.300312 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.300392 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.300484 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.300562 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:36Z","lastTransitionTime":"2026-01-08T23:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.402986 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.403233 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.403331 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.403398 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.403460 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:36Z","lastTransitionTime":"2026-01-08T23:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.505333 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.505374 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.505383 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.505399 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.505412 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:36Z","lastTransitionTime":"2026-01-08T23:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.607644 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.607674 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.607682 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.607695 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.607703 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:36Z","lastTransitionTime":"2026-01-08T23:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.709633 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.709892 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.709965 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.710070 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.710165 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:36Z","lastTransitionTime":"2026-01-08T23:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.812880 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.813275 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.813289 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.813316 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.813328 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:36Z","lastTransitionTime":"2026-01-08T23:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.915306 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.915345 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.915353 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.915366 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.915376 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:36Z","lastTransitionTime":"2026-01-08T23:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:36 crc kubenswrapper[4945]: I0108 23:16:36.999374 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:36 crc kubenswrapper[4945]: E0108 23:16:36.999488 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.017857 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.017885 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.017893 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.017906 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.017916 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:37Z","lastTransitionTime":"2026-01-08T23:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
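
The repeated setters.go entries above are the kubelet re-publishing the node's Ready condition on each status sync loop (roughly every 100ms here); only lastHeartbeatTime advances while reason stays KubeletNotReady. The condition serialized in those entries corresponds to the core/v1 type shown below; this is a sketch of constructing one (assuming the k8s.io/api and k8s.io/apimachinery modules are on the module path), not the kubelet's setter itself.

    package main

    import (
        "encoding/json"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        now := metav1.NewTime(time.Now())
        cond := v1.NodeCondition{
            Type:               v1.NodeReady,
            Status:             v1.ConditionFalse,
            LastHeartbeatTime:  now,
            LastTransitionTime: now,
            Reason:             "KubeletNotReady",
            Message:            "container runtime network not ready: NetworkReady=false ...",
        }
        b, _ := json.Marshal(cond)
        // Same shape as the condition={...} payloads logged above.
        fmt.Println(string(b))
    }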
Has your network provider started?"} Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.120296 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.120360 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.120379 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.120403 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.120420 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:37Z","lastTransitionTime":"2026-01-08T23:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.222199 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.222234 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.222242 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.222254 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.222263 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:37Z","lastTransitionTime":"2026-01-08T23:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.325492 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.325553 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.325576 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.325605 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.325627 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:37Z","lastTransitionTime":"2026-01-08T23:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.377241 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dsh4d_0fa9b342-4b22-49db-9022-2dd852e7d835/kube-multus/0.log" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.377290 4945 generic.go:334] "Generic (PLEG): container finished" podID="0fa9b342-4b22-49db-9022-2dd852e7d835" containerID="39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11" exitCode=1 Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.377321 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dsh4d" event={"ID":"0fa9b342-4b22-49db-9022-2dd852e7d835","Type":"ContainerDied","Data":"39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11"} Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.377666 4945 scope.go:117] "RemoveContainer" containerID="39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.394603 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.410241 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.428050 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.428087 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.428098 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.428113 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.428125 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:37Z","lastTransitionTime":"2026-01-08T23:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
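
From this point on, every "Failed to update status for pod" entry fails for the same underlying reason: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z) is behind the node clock (2026-01-08), so TLS verification fails before any patch is attempted. A stand-alone way to confirm which side is wrong is to compare the certificate's validity window against the clock; the PEM path below is hypothetical.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path; point this at the webhook's serving certificate.
        data, err := os.ReadFile("/tmp/webhook-serving.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        now := time.Now()
        fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
        if now.After(cert.NotAfter) {
            fmt.Println("certificate has expired - matches the x509 error in the log")
        }
    }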
Has your network provider started?"} Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.431885 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:14Z\\\",\\\"message\\\":\\\"ers/factory.go:160\\\\nI0108 23:16:13.978557 6599 factory.go:656] Stopping watch factory\\\\nI0108 23:16:13.978587 6599 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978881 6599 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978915 6599 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978934 6599 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.979174 6599 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978946 6599 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.979749 6599 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:16:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.447055 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.468954 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:36Z\\\",\\\"message\\\":\\\"2026-01-08T23:15:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ca0354b9-be71-4082-8ad4-d0e60975bb33\\\\n2026-01-08T23:15:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ca0354b9-be71-4082-8ad4-d0e60975bb33 to /host/opt/cni/bin/\\\\n2026-01-08T23:15:51Z [verbose] multus-daemon started\\\\n2026-01-08T23:15:51Z [verbose] Readiness Indicator file check\\\\n2026-01-08T23:16:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.484883 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cbedd0-f69d-4a28-9077-13fed644be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g8gcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.499403 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.517046 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.528397 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.531339 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.531471 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.531566 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.531648 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.531712 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:37Z","lastTransitionTime":"2026-01-08T23:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.538443 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.551214 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.563322 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.573614 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.585196 4945 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.595170 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 
23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.604051 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a22048a-379c-4439-a231-421a4f376667\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c88c1ec183edf26ebbcc75ef548776b5c006c50b07f606d3bcbd453a42ca08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.623502 4945 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d58
7dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.634100 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.634139 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.634150 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.634168 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.634177 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:37Z","lastTransitionTime":"2026-01-08T23:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.635302 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.645656 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a2cf860-f78b-453b-962e-d647440a0869\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d36680efef4e861275546b973f2f5951f12bd9a1345e7e42a074f63a239e39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8fbe7c9002a40a9710659bf5e26884c493bb4f04a1605011f74cf208e6d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8668fcd7154876b41602c1afc458c8680fbe1076da40d0ba670399fb1c815f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:37Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.735785 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.735817 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.735825 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.735837 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.735846 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:37Z","lastTransitionTime":"2026-01-08T23:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.837852 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.837889 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.837898 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.837913 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.837922 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:37Z","lastTransitionTime":"2026-01-08T23:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.940667 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.940717 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.940734 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.940755 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.940772 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:37Z","lastTransitionTime":"2026-01-08T23:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.999377 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.999386 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:37 crc kubenswrapper[4945]: E0108 23:16:37.999598 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:37 crc kubenswrapper[4945]: I0108 23:16:37.999403 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:37 crc kubenswrapper[4945]: E0108 23:16:37.999701 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:37 crc kubenswrapper[4945]: E0108 23:16:37.999793 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.042782 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.042821 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.042831 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.042843 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.042852 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:38Z","lastTransitionTime":"2026-01-08T23:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.145420 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.145450 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.145459 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.145472 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.145482 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:38Z","lastTransitionTime":"2026-01-08T23:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.247582 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.247633 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.247646 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.247665 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.247677 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:38Z","lastTransitionTime":"2026-01-08T23:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.349927 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.349976 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.350005 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.350021 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.350032 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:38Z","lastTransitionTime":"2026-01-08T23:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.381752 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dsh4d_0fa9b342-4b22-49db-9022-2dd852e7d835/kube-multus/0.log" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.381809 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dsh4d" event={"ID":"0fa9b342-4b22-49db-9022-2dd852e7d835","Type":"ContainerStarted","Data":"614e85201ea3be7b566c1a53f206334251065d8d611ea95071f08d79979fb921"} Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.401968 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.411787 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.438598 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c
9bad44fb0a3e61a9cd39e1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:14Z\\\",\\\"message\\\":\\\"ers/factory.go:160\\\\nI0108 23:16:13.978557 6599 factory.go:656] Stopping watch factory\\\\nI0108 23:16:13.978587 6599 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978881 6599 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978915 6599 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978934 6599 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.979174 6599 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978946 6599 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.979749 6599 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:16:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.455187 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.455268 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.455281 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.455299 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.455310 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:38Z","lastTransitionTime":"2026-01-08T23:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.456753 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.474260 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.492283 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.503201 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.515231 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.529349 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614e85201ea3be7b566c1a53f206334251065d8d611ea95071f08d79979fb921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:36Z\\\",\\\"message\\\":\\\"2026-01-08T23:15:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ca0354b9-be71-4082-8ad4-d0e60975bb33\\\\n2026-01-08T23:15:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ca0354b9-be71-4082-8ad4-d0e60975bb33 to /host/opt/cni/bin/\\\\n2026-01-08T23:15:51Z [verbose] multus-daemon started\\\\n2026-01-08T23:15:51Z [verbose] Readiness Indicator file check\\\\n2026-01-08T23:16:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.541527 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cbedd0-f69d-4a28-9077-13fed644be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g8gcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.554468 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.559195 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.559232 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.559260 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.559284 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.559331 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:38Z","lastTransitionTime":"2026-01-08T23:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.565620 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.575533 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a22048a-379c-4439-a231-421a4f376667\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c88c1ec183edf26ebbcc75ef548776b5c006c50b07f606d3bcbd453a42ca08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.603361 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43
d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.615448 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c
037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 
23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 
2025-08-24T17:21:41Z" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.626401 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a2cf860-f78b-453b-962e-d647440a0869\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d36680efef4e861275546b973f2f5951f12bd9a1345e7e42a074f63a239e39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8fbe7c9002a40a9710659bf5e26884c493bb4f04a1605011f74cf208e6d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8668fcd7154876b41602c1afc458c8680fbe1076da40d0ba670399fb1c815f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.638780 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvc
r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.656645 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.661855 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.661879 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.661888 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.661900 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.661910 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:38Z","lastTransitionTime":"2026-01-08T23:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.668595 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:38Z is after 2025-08-24T17:21:41Z"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.764082 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.764142 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.764153 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.764169 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.764186 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:38Z","lastTransitionTime":"2026-01-08T23:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.866260 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.866289 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.866298 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.866310 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.866319 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:38Z","lastTransitionTime":"2026-01-08T23:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.968488 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.968526 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.968537 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.968552 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.968563 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:38Z","lastTransitionTime":"2026-01-08T23:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:38 crc kubenswrapper[4945]: I0108 23:16:38.999776 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl"
Jan 08 23:16:39 crc kubenswrapper[4945]: E0108 23:16:39.000061 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.070672 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.070717 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.070726 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.070740 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.070750 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:39Z","lastTransitionTime":"2026-01-08T23:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.172797 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.172832 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.172846 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.172863 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.172877 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:39Z","lastTransitionTime":"2026-01-08T23:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.274625 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.274655 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.274663 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.274675 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.274683 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:39Z","lastTransitionTime":"2026-01-08T23:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.377254 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.377287 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.377296 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.377310 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.377319 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:39Z","lastTransitionTime":"2026-01-08T23:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.479477 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.479533 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.479551 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.479576 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.479593 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:39Z","lastTransitionTime":"2026-01-08T23:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.581623 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.581661 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.581673 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.581689 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.581700 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:39Z","lastTransitionTime":"2026-01-08T23:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.683840 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.683909 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.683918 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.683932 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.683941 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:39Z","lastTransitionTime":"2026-01-08T23:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.784514 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.784558 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.784570 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.784589 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.784601 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:39Z","lastTransitionTime":"2026-01-08T23:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:39 crc kubenswrapper[4945]: E0108 23:16:39.798010 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:39Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.801137 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.801169 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.801180 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.801195 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.801205 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:39Z","lastTransitionTime":"2026-01-08T23:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:39 crc kubenswrapper[4945]: E0108 23:16:39.818065 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:39Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.822098 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.822144 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.822157 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.822174 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.822185 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:39Z","lastTransitionTime":"2026-01-08T23:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:39 crc kubenswrapper[4945]: E0108 23:16:39.838103 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:39Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.841043 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.841068 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.841076 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.841087 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.841096 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:39Z","lastTransitionTime":"2026-01-08T23:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:39 crc kubenswrapper[4945]: E0108 23:16:39.857650 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:39Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.862093 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.862119 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.862127 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.862140 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.862149 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:39Z","lastTransitionTime":"2026-01-08T23:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:39 crc kubenswrapper[4945]: E0108 23:16:39.881636 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:39Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:39 crc kubenswrapper[4945]: E0108 23:16:39.881810 4945 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.883250 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.883273 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.883286 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.883300 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.883311 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:39Z","lastTransitionTime":"2026-01-08T23:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.985519 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.985553 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.985564 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.985578 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.985589 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:39Z","lastTransitionTime":"2026-01-08T23:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:39 crc kubenswrapper[4945]: I0108 23:16:39.999901 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:39.999974 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:40 crc kubenswrapper[4945]: E0108 23:16:40.000106 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:40 crc kubenswrapper[4945]: E0108 23:16:40.000249 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.000419 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:40 crc kubenswrapper[4945]: E0108 23:16:40.000499 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.015128 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.029038 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.039665 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 
23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.052959 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a22048a-379c-4439-a231-421a4f376667\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c88c1ec183edf26ebbcc75ef548776b5c006c50b07f606d3bcbd453a42ca08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.071753 4945 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d58
7dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.087195 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.087265 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.087280 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.087301 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.087319 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:40Z","lastTransitionTime":"2026-01-08T23:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.089466 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.101923 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a2cf860-f78b-453b-962e-d647440a0869\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d36680efef4e861275546b973f2f5951f12bd9a1345e7e42a074f63a239e39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8fbe7c9002a40a9710659bf5e26884c493bb4f04a1605011f74cf208e6d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8668fcd7154876b41602c1afc458c8680fbe1076da40d0ba670399fb1c815f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.113863 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.126605 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9
518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\
"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.138254 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.153485 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.173873 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c
9bad44fb0a3e61a9cd39e1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:14Z\\\",\\\"message\\\":\\\"ers/factory.go:160\\\\nI0108 23:16:13.978557 6599 factory.go:656] Stopping watch factory\\\\nI0108 23:16:13.978587 6599 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978881 6599 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978915 6599 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978934 6599 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.979174 6599 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978946 6599 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.979749 6599 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:16:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.186103 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cbedd0-f69d-4a28-9077-13fed644be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g8gcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.189603 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.189640 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.189650 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.189670 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.189682 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:40Z","lastTransitionTime":"2026-01-08T23:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.199493 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.213905 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.227259 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.239181 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.249597 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.265680 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614e85201ea3be7b566c1a53f206334251065d8d611ea95071f08d79979fb921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:36Z\\\",\\\"message\\\":\\\"2026-01-08T23:15:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ca0354b9-be71-4082-8ad4-d0e60975bb33\\\\n2026-01-08T23:15:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ca0354b9-be71-4082-8ad4-d0e60975bb33 to /host/opt/cni/bin/\\\\n2026-01-08T23:15:51Z [verbose] multus-daemon started\\\\n2026-01-08T23:15:51Z [verbose] Readiness Indicator file check\\\\n2026-01-08T23:16:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:40Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.294053 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.294589 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.294687 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.294784 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:40 crc kubenswrapper[4945]: I0108 23:16:40.294870 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:40Z","lastTransitionTime":"2026-01-08T23:16:40Z","reason":"KubeletNotReady","message":"container 
[log condensed: the five-line node-status block above ("Recording event message for node" NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady, followed by the same "Node became not ready" Ready=False condition) repeats with only timestamp changes at 23:16:40.396, .499, .601, .704, .807 and .910; the entries interleaved with those repeats are kept below.]
Jan 08 23:16:41 crc kubenswrapper[4945]: I0108 23:16:41.000227 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl"
Jan 08 23:16:41 crc kubenswrapper[4945]: E0108 23:16:41.000363 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95"
[log condensed: the same node-status block repeats at 23:16:41.013, .115, .217, .320, .422, .525, .627, .729, .831 and .934.]
Jan 08 23:16:41 crc kubenswrapper[4945]: I0108 23:16:41.999809 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:41.999892 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:41.999822 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 08 23:16:42 crc kubenswrapper[4945]: E0108 23:16:41.999934 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 08 23:16:42 crc kubenswrapper[4945]: E0108 23:16:42.000048 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 08 23:16:42 crc kubenswrapper[4945]: E0108 23:16:42.000127 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.000940 4945 scope.go:117] "RemoveContainer" containerID="4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1"
[log condensed: the node-status block repeats again at 23:16:42.037, .139 and .242; its last occurrence in this window follows in full.]
Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.344068 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.344113 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.344124 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.344141 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.344166 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:42Z","lastTransitionTime":"2026-01-08T23:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.407696 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovnkube-controller/2.log"
Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.418277 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerStarted","Data":"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d"}
Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.419090 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl"
Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.437776 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e
9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.447619 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.447669 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.447682 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.447705 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.447717 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:42Z","lastTransitionTime":"2026-01-08T23:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.451499 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.470009 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:14Z\\\",\\\"message\\\":\\\"ers/factory.go:160\\\\nI0108 23:16:13.978557 6599 factory.go:656] Stopping watch factory\\\\nI0108 23:16:13.978587 6599 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978881 6599 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978915 6599 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978934 6599 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.979174 6599 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978946 6599 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.979749 6599 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:16:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.479986 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.489387 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.500408 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614e85201ea3be7b566c1a53f206334251065d8d611ea95071f08d79979fb921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:36Z\\\",\\\"message\\\":\\\"2026-01-08T23:15:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ca0354b9-be71-4082-8ad4-d0e60975bb33\\\\n2026-01-08T23:15:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ca0354b9-be71-4082-8ad4-d0e60975bb33 to /host/opt/cni/bin/\\\\n2026-01-08T23:15:51Z [verbose] multus-daemon started\\\\n2026-01-08T23:15:51Z [verbose] Readiness Indicator file check\\\\n2026-01-08T23:16:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.509973 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cbedd0-f69d-4a28-9077-13fed644be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g8gcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.521067 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.534339 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.547227 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.549776 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.549816 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.549826 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.549845 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.549859 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:42Z","lastTransitionTime":"2026-01-08T23:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.562820 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.575384 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.594574 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a2cf860-f78b-453b-962e-d647440a0869\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d36680efef4e861275546b973f2f5951f12bd9a1345e7e42a074f63a239e39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8fbe7c9002a40a9710659bf5e26884c493bb4f04a1605011f74cf208e6d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8668fcd7154876b41602c1afc458c8680fbe1076da40d0ba670399fb1c815f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.605450 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.622295 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9
518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\
"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.637528 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265
a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.651706 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.651756 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.651769 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.651790 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.651813 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:42Z","lastTransitionTime":"2026-01-08T23:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.652705 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a22048a-379c-4439-a231-421a4f376667\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c88c1ec183edf26ebbcc75ef548776b5c006c50b07f606d3bcbd453a42ca08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.675364 4945 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:
15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.693190 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:42Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.753399 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.753430 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.753438 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.753467 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.753478 4945 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:42Z","lastTransitionTime":"2026-01-08T23:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.855865 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.855977 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.856016 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.856042 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.856061 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:42Z","lastTransitionTime":"2026-01-08T23:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.958626 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.958684 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.958695 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.958713 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:42 crc kubenswrapper[4945]: I0108 23:16:42.958724 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:42Z","lastTransitionTime":"2026-01-08T23:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.000263 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:43 crc kubenswrapper[4945]: E0108 23:16:43.000424 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.061698 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.061750 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.061763 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.061780 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.061793 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:43Z","lastTransitionTime":"2026-01-08T23:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.164135 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.164189 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.164201 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.164218 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.164232 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:43Z","lastTransitionTime":"2026-01-08T23:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.266301 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.266345 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.266356 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.266370 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.266380 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:43Z","lastTransitionTime":"2026-01-08T23:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.369059 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.369138 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.369162 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.369193 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.369215 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:43Z","lastTransitionTime":"2026-01-08T23:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.422801 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovnkube-controller/3.log" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.423355 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovnkube-controller/2.log" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.425773 4945 generic.go:334] "Generic (PLEG): container finished" podID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerID="054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d" exitCode=1 Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.425820 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerDied","Data":"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d"} Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.425892 4945 scope.go:117] "RemoveContainer" containerID="4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.426456 4945 scope.go:117] "RemoveContainer" containerID="054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d" Jan 08 23:16:43 crc kubenswrapper[4945]: E0108 23:16:43.426625 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.438649 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z" Jan 08 
23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.448170 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a22048a-379c-4439-a231-421a4f376667\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c88c1ec183edf26ebbcc75ef548776b5c006c50b07f606d3bcbd453a42ca08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.472232 4945 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.472303 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.472313 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.472330 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.472343 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:43Z","lastTransitionTime":"2026-01-08T23:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.473062 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termina
ted\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.484745 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.495060 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a2cf860-f78b-453b-962e-d647440a0869\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d36680efef4e861275546b973f2f5951f12bd9a1345e7e42a074f63a239e39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8fbe7c9002a40a9710659bf5e26884c493bb4f04a1605011f74cf208e6d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8668fcd7154876b41602c1afc458c8680fbe1076da40d0ba670399fb1c815f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.504170 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.517758 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9
518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\
"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.529078 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.539195 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.553865 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://054651e6d76589daa8659120930bcfc12bd8ec5d
02dbb3c787481d8d43db807d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdfa3b34ed9dd9f77e71bbd7ed95ec6036c9c2c9bad44fb0a3e61a9cd39e1c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:14Z\\\",\\\"message\\\":\\\"ers/factory.go:160\\\\nI0108 23:16:13.978557 6599 factory.go:656] Stopping watch factory\\\\nI0108 23:16:13.978587 6599 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978881 6599 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.978915 6599 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978934 6599 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.979174 6599 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0108 23:16:13.978946 6599 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0108 23:16:13.979749 6599 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:16:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:42Z\\\",\\\"message\\\":\\\"473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/console_TCP_cluster\\\\\\\", UUID:\\\\\\\"d7d7b270-1480-47f8-bdf9-690dbab310cb\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/console_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.194\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0108 23:16:42.825959 6995 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:16:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z"
Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.562551 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cbedd0-f69d-4a28-9077-13fed644be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g8gcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.573369 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z"
Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.574802 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.575017 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.575038 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.575055 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.575070 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:43Z","lastTransitionTime":"2026-01-08T23:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.583127 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.593671 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z"
Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.601208 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.609047 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.619671 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614e85201ea3be7b566c1a53f206334251065d8d611ea95071f08d79979fb921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:36Z\\\",\\\"message\\\":\\\"2026-01-08T23:15:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ca0354b9-be71-4082-8ad4-d0e60975bb33\\\\n2026-01-08T23:15:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ca0354b9-be71-4082-8ad4-d0e60975bb33 to /host/opt/cni/bin/\\\\n2026-01-08T23:15:51Z [verbose] multus-daemon started\\\\n2026-01-08T23:15:51Z [verbose] Readiness Indicator file check\\\\n2026-01-08T23:16:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z"
Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.635130 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.645775 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:43Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.677719 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.677754 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.677763 4945 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.677779 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.677790 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:43Z","lastTransitionTime":"2026-01-08T23:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.780544 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.780601 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.780618 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.780645 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.780662 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:43Z","lastTransitionTime":"2026-01-08T23:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.883188 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.883224 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.883234 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.883249 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.883260 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:43Z","lastTransitionTime":"2026-01-08T23:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.985729 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.985770 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.985779 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.985794 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:43 crc kubenswrapper[4945]: I0108 23:16:43.985804 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:43Z","lastTransitionTime":"2026-01-08T23:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.000183 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.000231 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.000300 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:44 crc kubenswrapper[4945]: E0108 23:16:44.000310 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:44 crc kubenswrapper[4945]: E0108 23:16:44.000383 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:44 crc kubenswrapper[4945]: E0108 23:16:44.000447 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.087803 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.087849 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.087857 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.087869 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.087879 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:44Z","lastTransitionTime":"2026-01-08T23:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.190034 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.190062 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.190071 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.190086 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.190095 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:44Z","lastTransitionTime":"2026-01-08T23:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.293103 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.293139 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.293148 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.293162 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.293171 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:44Z","lastTransitionTime":"2026-01-08T23:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.396332 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.396385 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.396403 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.396428 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.396453 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:44Z","lastTransitionTime":"2026-01-08T23:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.432079 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovnkube-controller/3.log" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.436707 4945 scope.go:117] "RemoveContainer" containerID="054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d" Jan 08 23:16:44 crc kubenswrapper[4945]: E0108 23:16:44.436963 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.453745 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.476618 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.501317 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.501401 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.501425 4945 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.501460 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.501498 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:44Z","lastTransitionTime":"2026-01-08T23:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.502473 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a2cf860-f78b-453b-962e-d647440a0869\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d36680efef4e861275546b973f2f5951f12bd9a1345e7e42a074f63a239e39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8fbe7c9002a40a9710659bf5e26884c493bb4f04a1605011f74cf208e6d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8668fcd7154876b41602c1afc458c8680fbe1076da40d0ba670399fb1c815f9\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.524203 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.543902 4945 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.561600 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 
23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.578206 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a22048a-379c-4439-a231-421a4f376667\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c88c1ec183edf26ebbcc75ef548776b5c006c50b07f606d3bcbd453a42ca08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.605202 4945 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d58
7dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.607119 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.607190 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.607210 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.607237 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.607258 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:44Z","lastTransitionTime":"2026-01-08T23:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.632740 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.653868 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.670609 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.697243 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://054651e6d76589daa8659120930bcfc12bd8ec5d
02dbb3c787481d8d43db807d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:42Z\\\",\\\"message\\\":\\\"473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/console_TCP_cluster\\\\\\\", UUID:\\\\\\\"d7d7b270-1480-47f8-bdf9-690dbab310cb\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/console_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.194\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0108 23:16:42.825959 6995 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:16:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.710590 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.710648 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.710665 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.710691 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.710709 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:44Z","lastTransitionTime":"2026-01-08T23:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.712050 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.729086 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.746502 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614e85201ea3be7b566c1a53f206334251065d8d611ea95071f08d79979fb921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:36Z\\\",\\\"message\\\":\\\"2026-01-08T23:15:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ca0354b9-be71-4082-8ad4-d0e60975bb33\\\\n2026-01-08T23:15:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ca0354b9-be71-4082-8ad4-d0e60975bb33 to /host/opt/cni/bin/\\\\n2026-01-08T23:15:51Z [verbose] multus-daemon started\\\\n2026-01-08T23:15:51Z [verbose] Readiness Indicator file check\\\\n2026-01-08T23:16:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.758532 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cbedd0-f69d-4a28-9077-13fed644be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g8gcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.772205 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.787648 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.801052 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:44Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.814703 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.814783 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.814797 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.814817 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.814836 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:44Z","lastTransitionTime":"2026-01-08T23:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.918166 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.918216 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.918228 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.918246 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.918257 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:44Z","lastTransitionTime":"2026-01-08T23:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:44 crc kubenswrapper[4945]: I0108 23:16:44.999407 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:44 crc kubenswrapper[4945]: E0108 23:16:44.999543 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.020031 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.020070 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.020082 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.020096 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.020107 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:45Z","lastTransitionTime":"2026-01-08T23:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.123092 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.123157 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.123174 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.123198 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.123216 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:45Z","lastTransitionTime":"2026-01-08T23:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.225597 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.225651 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.225663 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.225680 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.225693 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:45Z","lastTransitionTime":"2026-01-08T23:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.328611 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.328666 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.328684 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.328709 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.328728 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:45Z","lastTransitionTime":"2026-01-08T23:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.431741 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.431819 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.431839 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.431868 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.431887 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:45Z","lastTransitionTime":"2026-01-08T23:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.534451 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.534496 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.534507 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.534522 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.534533 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:45Z","lastTransitionTime":"2026-01-08T23:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.636876 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.636958 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.636978 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.637063 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.637092 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:45Z","lastTransitionTime":"2026-01-08T23:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.739970 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.740058 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.740070 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.740096 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.740115 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:45Z","lastTransitionTime":"2026-01-08T23:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.843193 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.843263 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.843287 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.843324 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.843347 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:45Z","lastTransitionTime":"2026-01-08T23:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.947352 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.947438 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.947472 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.947508 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:45 crc kubenswrapper[4945]: I0108 23:16:45.947573 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:45Z","lastTransitionTime":"2026-01-08T23:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:45.999924 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.000134 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.000435 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:46 crc kubenswrapper[4945]: E0108 23:16:46.000471 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:46 crc kubenswrapper[4945]: E0108 23:16:46.000532 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:46 crc kubenswrapper[4945]: E0108 23:16:46.000626 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.051294 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.051354 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.051364 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.051385 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.051395 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:46Z","lastTransitionTime":"2026-01-08T23:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.154944 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.155105 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.155169 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.155197 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.155472 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:46Z","lastTransitionTime":"2026-01-08T23:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.258690 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.258749 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.258764 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.258785 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.258802 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:46Z","lastTransitionTime":"2026-01-08T23:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.361922 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.361970 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.361983 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.362024 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.362037 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:46Z","lastTransitionTime":"2026-01-08T23:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.464699 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.464754 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.464765 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.464784 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.464796 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:46Z","lastTransitionTime":"2026-01-08T23:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.567886 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.567971 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.568024 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.568056 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.568077 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:46Z","lastTransitionTime":"2026-01-08T23:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.671104 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.671157 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.671170 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.671193 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.671209 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:46Z","lastTransitionTime":"2026-01-08T23:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.774160 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.774239 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.774263 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.774297 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.774321 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:46Z","lastTransitionTime":"2026-01-08T23:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.877708 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.877768 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.877788 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.877817 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.877842 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:46Z","lastTransitionTime":"2026-01-08T23:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.981593 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.981663 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.981681 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.981712 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:46 crc kubenswrapper[4945]: I0108 23:16:46.981732 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:46Z","lastTransitionTime":"2026-01-08T23:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.000260 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:47 crc kubenswrapper[4945]: E0108 23:16:47.000521 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.085396 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.085443 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.085453 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.085470 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.085481 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:47Z","lastTransitionTime":"2026-01-08T23:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.195623 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.195702 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.195741 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.195770 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.195791 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:47Z","lastTransitionTime":"2026-01-08T23:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.299634 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.299705 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.299725 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.299753 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.299772 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:47Z","lastTransitionTime":"2026-01-08T23:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.404313 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.404437 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.404463 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.404494 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.404514 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:47Z","lastTransitionTime":"2026-01-08T23:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.507582 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.507635 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.507646 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.507666 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.507678 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:47Z","lastTransitionTime":"2026-01-08T23:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.611579 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.611663 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.611681 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.611714 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.611736 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:47Z","lastTransitionTime":"2026-01-08T23:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.715598 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.715678 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.715701 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.715733 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.715755 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:47Z","lastTransitionTime":"2026-01-08T23:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.819803 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.819869 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.819888 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.819917 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.819938 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:47Z","lastTransitionTime":"2026-01-08T23:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.923502 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.923560 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.923573 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.923593 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:47 crc kubenswrapper[4945]: I0108 23:16:47.923610 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:47Z","lastTransitionTime":"2026-01-08T23:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.000333 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.000462 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:48 crc kubenswrapper[4945]: E0108 23:16:48.000540 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.000500 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:48 crc kubenswrapper[4945]: E0108 23:16:48.000776 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:48 crc kubenswrapper[4945]: E0108 23:16:48.000940 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.026792 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.026864 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.026885 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.026915 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.026942 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:48Z","lastTransitionTime":"2026-01-08T23:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.130150 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.130238 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.130258 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.130290 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.130312 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:48Z","lastTransitionTime":"2026-01-08T23:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.233299 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.233333 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.233341 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.233355 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.233365 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:48Z","lastTransitionTime":"2026-01-08T23:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.336642 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.336713 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.336753 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.336781 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.336800 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:48Z","lastTransitionTime":"2026-01-08T23:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.439830 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.439912 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.439933 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.439965 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.439989 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:48Z","lastTransitionTime":"2026-01-08T23:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.542775 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.542814 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.542823 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.542837 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.542848 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:48Z","lastTransitionTime":"2026-01-08T23:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.646276 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.646319 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.646331 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.646350 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.646362 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:48Z","lastTransitionTime":"2026-01-08T23:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.750629 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.750671 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.750684 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.750702 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.750714 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:48Z","lastTransitionTime":"2026-01-08T23:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.854736 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.854799 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.854824 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.854849 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.854864 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:48Z","lastTransitionTime":"2026-01-08T23:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.958692 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.958771 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.958795 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.958831 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.958863 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:48Z","lastTransitionTime":"2026-01-08T23:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:48 crc kubenswrapper[4945]: I0108 23:16:48.999861 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:49 crc kubenswrapper[4945]: E0108 23:16:49.000183 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.062439 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.062529 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.062554 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.062587 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.062608 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:49Z","lastTransitionTime":"2026-01-08T23:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.166563 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.166630 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.166639 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.166683 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.166693 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:49Z","lastTransitionTime":"2026-01-08T23:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.270965 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.271203 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.271296 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.271397 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.271517 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:49Z","lastTransitionTime":"2026-01-08T23:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.375638 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.375717 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.375756 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.375779 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.375791 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:49Z","lastTransitionTime":"2026-01-08T23:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.479044 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.479090 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.479109 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.479128 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.479140 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:49Z","lastTransitionTime":"2026-01-08T23:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.582210 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.582256 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.582286 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.582303 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.582314 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:49Z","lastTransitionTime":"2026-01-08T23:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.684851 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.684895 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.684916 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.684930 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.684939 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:49Z","lastTransitionTime":"2026-01-08T23:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.786737 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.786798 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.786807 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.786823 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.786833 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:49Z","lastTransitionTime":"2026-01-08T23:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.888745 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.888792 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.888800 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.888815 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.888825 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:49Z","lastTransitionTime":"2026-01-08T23:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.991254 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.991296 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.991306 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.991322 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:49 crc kubenswrapper[4945]: I0108 23:16:49.991333 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:49Z","lastTransitionTime":"2026-01-08T23:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.000084 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.000164 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:50 crc kubenswrapper[4945]: E0108 23:16:50.000227 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.000269 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:50 crc kubenswrapper[4945]: E0108 23:16:50.000350 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:50 crc kubenswrapper[4945]: E0108 23:16:50.000461 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.023625 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23
:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.035568 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.053403 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://054651e6d76589daa8659120930bcfc12bd8ec5d
02dbb3c787481d8d43db807d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:42Z\\\",\\\"message\\\":\\\"473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/console_TCP_cluster\\\\\\\", UUID:\\\\\\\"d7d7b270-1480-47f8-bdf9-690dbab310cb\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/console_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.194\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0108 23:16:42.825959 6995 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:16:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.066120 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.077316 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.090085 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.094479 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.094504 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.094513 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.094528 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.094539 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:50Z","lastTransitionTime":"2026-01-08T23:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.102342 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.113048 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.125311 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614e85201ea3be7b566c1a53f206334251065d8d611ea95071f08d79979fb921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:36Z\\\",\\\"message\\\":\\\"2026-01-08T23:15:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ca0354b9-be71-4082-8ad4-d0e60975bb33\\\\n2026-01-08T23:15:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ca0354b9-be71-4082-8ad4-d0e60975bb33 to /host/opt/cni/bin/\\\\n2026-01-08T23:15:51Z [verbose] multus-daemon started\\\\n2026-01-08T23:15:51Z [verbose] Readiness Indicator file check\\\\n2026-01-08T23:16:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.134533 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cbedd0-f69d-4a28-9077-13fed644be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g8gcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.145255 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.157943 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.166952 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a22048a-379c-4439-a231-421a4f376667\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c88c1ec183edf26ebbcc75ef548776b5c006c50b07f606d3bcbd453a42ca08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.199273 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43
d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.204122 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.204145 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.204153 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.204165 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.204174 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:50Z","lastTransitionTime":"2026-01-08T23:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.208281 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.208335 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.208350 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.208371 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.208387 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:50Z","lastTransitionTime":"2026-01-08T23:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:50 crc kubenswrapper[4945]: E0108 23:16:50.230117 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.235570 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.235621 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.235638 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.235660 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.235675 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:50Z","lastTransitionTime":"2026-01-08T23:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.237785 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving 
securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: E0108 23:16:50.247594 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.250711 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.250737 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.250745 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.250758 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.250766 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:50Z","lastTransitionTime":"2026-01-08T23:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.252612 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a2cf860-f78b-453b-962e-d647440a0869\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d36680efef4e861275546b973f2f5951f12bd9a1345e7e42a074f63a239e39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8fbe7c9002a40a9710659bf5e26884c493bb4f04a1605011f74cf208e6d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8668fcd7154876b41602c1afc458c8680fbe1076da40d0ba670399fb1c815f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: E0108 23:16:50.263923 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.266450 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.266857 4945 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.266894 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.266906 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.266921 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.266933 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:50Z","lastTransitionTime":"2026-01-08T23:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:50 crc kubenswrapper[4945]: E0108 23:16:50.279330 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.282092 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.282113 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.282121 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.282134 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.282142 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:50Z","lastTransitionTime":"2026-01-08T23:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.283598 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6
d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.293401 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"
mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: E0108 23:16:50.295092 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:16:50Z is after 2025-08-24T17:21:41Z" Jan 08 23:16:50 crc kubenswrapper[4945]: E0108 23:16:50.295195 4945 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.305811 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.305848 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.305862 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.305879 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.305891 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:50Z","lastTransitionTime":"2026-01-08T23:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.411814 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.411905 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.411933 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.411952 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.411969 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:50Z","lastTransitionTime":"2026-01-08T23:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.514517 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.514553 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.514561 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.514574 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.514586 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:50Z","lastTransitionTime":"2026-01-08T23:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.616604 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.616643 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.616652 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.616667 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.616677 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:50Z","lastTransitionTime":"2026-01-08T23:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.719117 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.719157 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.719166 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.719181 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.719193 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:50Z","lastTransitionTime":"2026-01-08T23:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.821662 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.821722 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.821772 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.821797 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.821814 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:50Z","lastTransitionTime":"2026-01-08T23:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.925098 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.925142 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.925151 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.925166 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.925174 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:50Z","lastTransitionTime":"2026-01-08T23:16:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:50 crc kubenswrapper[4945]: I0108 23:16:50.999366 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:50 crc kubenswrapper[4945]: E0108 23:16:50.999500 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.027125 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.027152 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.027161 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.027172 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.027181 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:51Z","lastTransitionTime":"2026-01-08T23:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.128921 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.128961 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.128970 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.129009 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.129025 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:51Z","lastTransitionTime":"2026-01-08T23:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.231177 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.231216 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.231227 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.231243 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.231253 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:51Z","lastTransitionTime":"2026-01-08T23:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.333556 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.333596 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.333604 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.333618 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.333631 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:51Z","lastTransitionTime":"2026-01-08T23:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.436254 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.436308 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.436324 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.436346 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.436360 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:51Z","lastTransitionTime":"2026-01-08T23:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.538763 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.538801 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.538809 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.538822 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.538833 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:51Z","lastTransitionTime":"2026-01-08T23:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.640859 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.640904 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.640916 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.640932 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.640944 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:51Z","lastTransitionTime":"2026-01-08T23:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.743620 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.743663 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.743678 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.743697 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.743711 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:51Z","lastTransitionTime":"2026-01-08T23:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.846142 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.846227 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.846240 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.846257 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.846268 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:51Z","lastTransitionTime":"2026-01-08T23:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.913400 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:51 crc kubenswrapper[4945]: E0108 23:16:51.913499 4945 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 08 23:16:51 crc kubenswrapper[4945]: E0108 23:16:51.913557 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-08 23:17:55.913540175 +0000 UTC m=+146.224699141 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.948983 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.949046 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.949056 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.949070 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.949081 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:51Z","lastTransitionTime":"2026-01-08T23:16:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:51 crc kubenswrapper[4945]: I0108 23:16:51.999777 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:51 crc kubenswrapper[4945]: E0108 23:16:51.999887 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:51.999919 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:51.999777 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:52 crc kubenswrapper[4945]: E0108 23:16:52.000096 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:52 crc kubenswrapper[4945]: E0108 23:16:52.000276 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.013961 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.014071 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:52 crc kubenswrapper[4945]: E0108 23:16:52.014119 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:56.014094306 +0000 UTC m=+146.325253262 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.014167 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:52 crc kubenswrapper[4945]: E0108 23:16:52.014188 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 08 23:16:52 crc kubenswrapper[4945]: E0108 23:16:52.014206 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 08 23:16:52 crc kubenswrapper[4945]: E0108 23:16:52.014220 4945 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 08 23:16:52 crc kubenswrapper[4945]: E0108 23:16:52.014253 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-08 23:17:56.01424354 +0000 UTC m=+146.325402486 (durationBeforeRetry 1m4s). 
Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.014214 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 08 23:16:52 crc kubenswrapper[4945]: E0108 23:16:52.014276 4945 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 08 23:16:52 crc kubenswrapper[4945]: E0108 23:16:52.014308 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-08 23:17:56.014299471 +0000 UTC m=+146.325458427 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 08 23:16:52 crc kubenswrapper[4945]: E0108 23:16:52.014311 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 08 23:16:52 crc kubenswrapper[4945]: E0108 23:16:52.014327 4945 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 08 23:16:52 crc kubenswrapper[4945]: E0108 23:16:52.014337 4945 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 08 23:16:52 crc kubenswrapper[4945]: E0108 23:16:52.014404 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-08 23:17:56.014393003 +0000 UTC m=+146.325551949 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
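Annotation — the "object ... not registered" errors above appear to come from the kubelet's own ConfigMap/Secret managers, which have not yet begun watching those objects for the affected pods; on their own they do not prove the objects are missing from the API server. The durationBeforeRetry of 1m4s is consistent with the volume manager's exponential backoff doubling from 500ms (0.5s, 1s, 2s, ..., 64s). A minimal client-go sketch (hypothetical kubeconfig path) that asks the API directly whether the referenced ConfigMaps exist, separating a genuinely missing object from a kubelet cache that has not yet synced:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; adjust for the cluster at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// The two ConfigMaps the projected kube-api-access volumes above depend on.
	for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		_, err := cs.CoreV1().ConfigMaps("openshift-network-diagnostics").Get(context.TODO(), name, metav1.GetOptions{})
		fmt.Printf("%s: exists=%v err=%v\n", name, err == nil, err)
	}
}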
Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.051404 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.051447 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.051459 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.051476 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.051487 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:52Z","lastTransitionTime":"2026-01-08T23:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.154395 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.154560 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.154569 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.154586 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.154603 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:52Z","lastTransitionTime":"2026-01-08T23:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.256962 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.257059 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.257077 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.257104 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.257142 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:52Z","lastTransitionTime":"2026-01-08T23:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.360057 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.360111 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.360120 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.360132 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.360140 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:52Z","lastTransitionTime":"2026-01-08T23:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.467829 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.467878 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.467890 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.467912 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.467926 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:52Z","lastTransitionTime":"2026-01-08T23:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.570171 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.570205 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.570214 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.570293 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.570304 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:52Z","lastTransitionTime":"2026-01-08T23:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.672038 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.672105 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.672117 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.672131 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.672140 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:52Z","lastTransitionTime":"2026-01-08T23:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.774440 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.774485 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.774496 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.774517 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.774532 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:52Z","lastTransitionTime":"2026-01-08T23:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.876787 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.876831 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.876843 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.876857 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.876867 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:52Z","lastTransitionTime":"2026-01-08T23:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.979194 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.979239 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.979255 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.979276 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.979292 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:52Z","lastTransitionTime":"2026-01-08T23:16:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:52 crc kubenswrapper[4945]: I0108 23:16:52.999592 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:52 crc kubenswrapper[4945]: E0108 23:16:52.999724 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.081702 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.081770 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.081781 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.081796 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.081805 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:53Z","lastTransitionTime":"2026-01-08T23:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.184383 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.184456 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.184468 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.184483 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.184493 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:53Z","lastTransitionTime":"2026-01-08T23:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.287282 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.287313 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.287322 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.287336 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.287345 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:53Z","lastTransitionTime":"2026-01-08T23:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.389278 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.389346 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.389359 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.389376 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.389388 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:53Z","lastTransitionTime":"2026-01-08T23:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.491457 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.491522 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.491535 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.491552 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.491564 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:53Z","lastTransitionTime":"2026-01-08T23:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.594871 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.594913 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.594922 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.594934 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.594943 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:53Z","lastTransitionTime":"2026-01-08T23:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.697400 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.697434 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.697444 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.697460 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.697471 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:53Z","lastTransitionTime":"2026-01-08T23:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.800610 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.800658 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.800675 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.800697 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.800717 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:53Z","lastTransitionTime":"2026-01-08T23:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.903433 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.903473 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.903482 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.903494 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:53 crc kubenswrapper[4945]: I0108 23:16:53.903505 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:53Z","lastTransitionTime":"2026-01-08T23:16:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.000110 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.000191 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.000290 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:54 crc kubenswrapper[4945]: E0108 23:16:54.000486 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:54 crc kubenswrapper[4945]: E0108 23:16:54.000957 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:54 crc kubenswrapper[4945]: E0108 23:16:54.001159 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.005186 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.005207 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.005216 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.005226 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.005234 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:54Z","lastTransitionTime":"2026-01-08T23:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.107771 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.107817 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.107827 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.107841 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.107852 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:54Z","lastTransitionTime":"2026-01-08T23:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.210110 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.210153 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.210164 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.210178 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.210189 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:54Z","lastTransitionTime":"2026-01-08T23:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.312438 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.312482 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.312493 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.312512 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.312524 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:54Z","lastTransitionTime":"2026-01-08T23:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.414451 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.414508 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.414520 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.414540 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.414553 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:54Z","lastTransitionTime":"2026-01-08T23:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.517534 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.517589 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.517601 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.517616 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.517625 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:54Z","lastTransitionTime":"2026-01-08T23:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.620888 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.621028 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.621056 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.621112 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.621151 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:54Z","lastTransitionTime":"2026-01-08T23:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.723767 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.723864 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.723901 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.723930 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.723951 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:54Z","lastTransitionTime":"2026-01-08T23:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.827337 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.827392 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.827401 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.827414 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.827444 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:54Z","lastTransitionTime":"2026-01-08T23:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.930466 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.930530 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.930552 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.930580 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.930603 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:54Z","lastTransitionTime":"2026-01-08T23:16:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:54 crc kubenswrapper[4945]: I0108 23:16:54.999862 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:55 crc kubenswrapper[4945]: E0108 23:16:55.000113 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.034018 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.034084 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.034101 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.034123 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.034142 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:55Z","lastTransitionTime":"2026-01-08T23:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.136337 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.136374 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.136389 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.136410 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.136424 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:55Z","lastTransitionTime":"2026-01-08T23:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.239751 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.240250 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.240344 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.240475 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.240744 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:55Z","lastTransitionTime":"2026-01-08T23:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.343801 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.344188 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.344444 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.344649 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.344746 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:55Z","lastTransitionTime":"2026-01-08T23:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.447064 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.447342 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.447443 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.447541 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.447623 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:55Z","lastTransitionTime":"2026-01-08T23:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.550199 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.550255 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.550272 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.550297 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.550316 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:55Z","lastTransitionTime":"2026-01-08T23:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.652351 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.652411 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.652424 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.652443 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.652455 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:55Z","lastTransitionTime":"2026-01-08T23:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.754349 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.754411 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.754426 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.754442 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.754453 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:55Z","lastTransitionTime":"2026-01-08T23:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.856368 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.856448 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.856458 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.856472 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.856483 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:55Z","lastTransitionTime":"2026-01-08T23:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.959017 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.959047 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.959058 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.959073 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:55 crc kubenswrapper[4945]: I0108 23:16:55.959085 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:55Z","lastTransitionTime":"2026-01-08T23:16:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.000063 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.000155 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:56 crc kubenswrapper[4945]: E0108 23:16:56.000195 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:56 crc kubenswrapper[4945]: E0108 23:16:56.000299 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.000439 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:56 crc kubenswrapper[4945]: E0108 23:16:56.000525 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.062371 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.062429 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.062447 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.062470 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.062490 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:56Z","lastTransitionTime":"2026-01-08T23:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.165378 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.165413 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.165423 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.165501 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.165516 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:56Z","lastTransitionTime":"2026-01-08T23:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.268909 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.268987 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.269049 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.269079 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.269105 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:56Z","lastTransitionTime":"2026-01-08T23:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.372629 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.372725 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.372743 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.372767 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.372784 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:56Z","lastTransitionTime":"2026-01-08T23:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.475553 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.475593 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.475610 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.475629 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.475641 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:56Z","lastTransitionTime":"2026-01-08T23:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.578900 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.578987 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.579049 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.579088 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.579111 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:56Z","lastTransitionTime":"2026-01-08T23:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.682211 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.682262 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.682272 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.682287 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.682296 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:56Z","lastTransitionTime":"2026-01-08T23:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.785925 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.786039 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.786063 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.786091 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.786112 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:56Z","lastTransitionTime":"2026-01-08T23:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.889969 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.890068 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.890092 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.890121 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.890144 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:56Z","lastTransitionTime":"2026-01-08T23:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.993405 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.993465 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.993483 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.993509 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.993527 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:56Z","lastTransitionTime":"2026-01-08T23:16:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:56 crc kubenswrapper[4945]: I0108 23:16:56.999811 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:57 crc kubenswrapper[4945]: E0108 23:16:57.000035 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.095670 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.095715 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.095725 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.095739 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.095748 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:57Z","lastTransitionTime":"2026-01-08T23:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.199123 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.199190 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.199205 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.199224 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.199239 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:57Z","lastTransitionTime":"2026-01-08T23:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.301424 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.301463 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.301473 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.301491 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.301502 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:57Z","lastTransitionTime":"2026-01-08T23:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.403988 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.404068 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.404084 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.404101 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.404113 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:57Z","lastTransitionTime":"2026-01-08T23:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.506767 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.506808 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.506820 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.506836 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.506848 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:57Z","lastTransitionTime":"2026-01-08T23:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.609585 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.609627 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.609639 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.609660 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.609682 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:57Z","lastTransitionTime":"2026-01-08T23:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.712630 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.712723 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.712737 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.712757 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.712777 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:57Z","lastTransitionTime":"2026-01-08T23:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.815374 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.815434 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.815457 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.815472 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.815484 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:57Z","lastTransitionTime":"2026-01-08T23:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.917820 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.917880 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.917896 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.917922 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:57 crc kubenswrapper[4945]: I0108 23:16:57.917939 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:57Z","lastTransitionTime":"2026-01-08T23:16:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.000268 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.000297 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.000297 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:16:58 crc kubenswrapper[4945]: E0108 23:16:58.000422 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:16:58 crc kubenswrapper[4945]: E0108 23:16:58.000526 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:16:58 crc kubenswrapper[4945]: E0108 23:16:58.000597 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.020233 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.020266 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.020277 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.020291 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.020303 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:58Z","lastTransitionTime":"2026-01-08T23:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.123053 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.123104 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.123115 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.123130 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.123141 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:58Z","lastTransitionTime":"2026-01-08T23:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.225756 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.225827 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.225849 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.225875 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.225893 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:58Z","lastTransitionTime":"2026-01-08T23:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.328609 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.328646 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.328656 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.328672 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.328682 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:58Z","lastTransitionTime":"2026-01-08T23:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.432039 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.432113 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.432133 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.432158 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.432175 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:58Z","lastTransitionTime":"2026-01-08T23:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.534901 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.534955 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.534966 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.534982 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.535023 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:58Z","lastTransitionTime":"2026-01-08T23:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.637263 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.637309 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.637323 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.637343 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.637359 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:58Z","lastTransitionTime":"2026-01-08T23:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.740519 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.740573 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.740590 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.740609 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.740625 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:58Z","lastTransitionTime":"2026-01-08T23:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.842862 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.842932 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.842955 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.842984 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.843038 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:58Z","lastTransitionTime":"2026-01-08T23:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.945942 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.946088 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.946109 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.946140 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.946158 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:58Z","lastTransitionTime":"2026-01-08T23:16:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:58 crc kubenswrapper[4945]: I0108 23:16:58.999440 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:16:59 crc kubenswrapper[4945]: E0108 23:16:58.999928 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.000236 4945 scope.go:117] "RemoveContainer" containerID="054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d" Jan 08 23:16:59 crc kubenswrapper[4945]: E0108 23:16:59.000668 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.048957 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.049041 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.049066 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.049094 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.049115 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:59Z","lastTransitionTime":"2026-01-08T23:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.151754 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.151792 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.151802 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.151816 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.151825 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:59Z","lastTransitionTime":"2026-01-08T23:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.254059 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.254111 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.254122 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.254140 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.254153 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:59Z","lastTransitionTime":"2026-01-08T23:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.356420 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.356472 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.356485 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.356504 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.356521 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:59Z","lastTransitionTime":"2026-01-08T23:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.459825 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.459890 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.459909 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.459966 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.459986 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:59Z","lastTransitionTime":"2026-01-08T23:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.562723 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.562763 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.562774 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.562790 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.562801 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:59Z","lastTransitionTime":"2026-01-08T23:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.665425 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.665478 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.665501 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.665529 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.665551 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:59Z","lastTransitionTime":"2026-01-08T23:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.768804 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.768903 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.768927 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.768956 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.768976 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:59Z","lastTransitionTime":"2026-01-08T23:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.872224 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.872288 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.872311 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.872344 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.872367 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:59Z","lastTransitionTime":"2026-01-08T23:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.975027 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.975089 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.975105 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.975129 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:16:59 crc kubenswrapper[4945]: I0108 23:16:59.975147 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:16:59Z","lastTransitionTime":"2026-01-08T23:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.000281 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.000370 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:17:00 crc kubenswrapper[4945]: E0108 23:17:00.000475 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:17:00 crc kubenswrapper[4945]: E0108 23:17:00.000956 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.001359 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:17:00 crc kubenswrapper[4945]: E0108 23:17:00.001544 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.015235 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"694a1575-6630-406f-93e7-ef55359bc79c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a2cd0d9149b0ea3d95116408b34ccb272ab230360cf2ee02ce874fe8345e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spvcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vbm95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.035676 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e4ebcfd-e9bd-49a7-a0a0-edefafbc2042\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d40d06c6a7ef52398c82e81a61bac8b058d8c1351d06c2c6ad91e99aa4cfc3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9be
376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9be376008630ba1913f38d9729cc55de2bcf9959bd3c9518ee44133096b1a74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1fa4060061d8e59c9e03b7d5918fb959f69ab2e9186cbb007159fe80fdb14ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d9d83f42cdefbec4ccd710d53fa73ac7c272536a6f2e05334fbc0de42f6e97eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://914a239e8a297e524bc87e117779767d8c6d44a48a2f54114febd0c3912874bc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://880345126a62309506136d6196f21bc2aeedc2fe760784a901152cf47a782740\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://960333041c26bafbdabe61e3180effae9b0a94be7523fa2bf507e0b3b170cd17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bc5zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pd2nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.048061 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06296d36-6978-4968-b8fc-430bdd945d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5157f19f42c4b4109a716812a2982626f1eebab0859e362d97d98356ad58d63a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://440419ccbacd155cc8a6f9a9aee19884bb960ad62dc05d6fb70dc4f088f36cae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5d78v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8khct\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.061813 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a22048a-379c-4439-a231-421a4f376667\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2c88c1ec183edf26ebbcc75ef548776b5c006c50b07f606d3bcbd453a42ca08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84389aed88d61180b6a553220ab7cef789f0495f402589ae47fd706b7aa70419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.077771 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.077804 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.077814 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.077828 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.077838 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:00Z","lastTransitionTime":"2026-01-08T23:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.085091 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfa0113a-e1ae-4977-9468-7c7fbeeeffb7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff34c29c7d2b7070f3a8f0f2909685724643213469aaa984d0075fae448e6cf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26c3a170c0cb556470851f6094bcbd597a2a164ab54cf1fbdb84c892641cdddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91fcd4670c2d6efea9ec64cd17c0eb6784b0850943fd4116bfc923aa63f939fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a9b3f36dfa5d44f6a117d45aeb88cc0092c43d9f0f0badf43a6f36d64b7848\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f80d7b8ce50aff913faedcc6b5a7a0bb968b9df962555fe72b77f8a94e77d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4807c739fef249c9f10ced9d47ca8f2c74502cf2130406a31f433593b87d28c2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ede9e0d9f54df7231df2b39c3594a6350877819e918d57f5da1d34c4f2e0727\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12d587dc4946fe887f853febfc5ac65f17c7c62e37d15032a33d534cff7f8072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.098730 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f499c197-c4c1-4fc7-95b2-c797e8ce9682\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"message\\\":\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0108 23:15:48.010029 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0108 23:15:48.010058 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010065 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0108 23:15:48.010072 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0108 23:15:48.010077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0108 23:15:48.010081 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0108 23:15:48.010086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0108 23:15:48.010270 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0108 23:15:48.014909 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-198955155/tls.crt::/tmp/serving-cert-198955155/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1767914132\\\\\\\\\\\\\\\" (2026-01-08 23:15:31 +0000 UTC to 2026-02-07 23:15:32 +0000 UTC (now=2026-01-08 23:15:48.014881758 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015067 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1767914147\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1767914147\\\\\\\\\\\\\\\" (2026-01-08 22:15:47 +0000 UTC to 2027-01-08 22:15:47 +0000 UTC (now=2026-01-08 23:15:48.015048822 +0000 UTC))\\\\\\\"\\\\nI0108 23:15:48.015084 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0108 23:15:48.015102 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nF0108 23:15:48.015171 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.114399 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a2cf860-f78b-453b-962e-d647440a0869\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d36680efef4e861275546b973f2f5951f12bd9a1345e7e42a074f63a239e39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a8fbe7c9002a40a9710659bf5e26884c493bb4f04a1605011f74cf208e6d42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8668fcd7154876b41602c1afc458c8680fbe1076da40d0ba670399fb1c815f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://425b5378f41cec1792b418187b7aca8e768d8b9536c476bf11929e943fb5d175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.128886 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"021fd206-5467-4717-9d6a-bd61f7bd8c21\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ddc193cc4a12f4048f224f513dcb4a3b84fd7d1f11100450ae939691782abb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d60d6b306cece92a9f4e9fec27e58392716876d9b21bd1c87e6744a08b66ccd7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f5ef95a02d542433cf3e1b2e8042f05b518996e701f49681e45a9a3e11121ec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.140965 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.161255 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e12d0822-44c5-4bf0-a785-cf478c66210f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://054651e6d76589daa8659120930bcfc12bd8ec5d
02dbb3c787481d8d43db807d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:42Z\\\",\\\"message\\\":\\\"473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/console_TCP_cluster\\\\\\\", UUID:\\\\\\\"d7d7b270-1480-47f8-bdf9-690dbab310cb\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/console_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/console\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.194\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0108 23:16:42.825959 6995 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:16:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vx7px\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gcbcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.172533 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xlbqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a7c4784-65bf-4adf-b855-a397fc1e794b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://546379852b6edc995c9c41a7ab0a7a027546941374e24b7ad55a506d29e28a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnlj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xlbqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.180579 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.180626 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.180639 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.180658 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.180670 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:00Z","lastTransitionTime":"2026-01-08T23:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.188481 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dsh4d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fa9b342-4b22-49db-9022-2dd852e7d835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://614e85201ea3be7b566c1a53f206334251065d8d611ea95071f08d79979fb921\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-08T23:16:36Z\\\",\\\"message\\\":\\\"2026-01-08T23:15:51+00:00 
[cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ca0354b9-be71-4082-8ad4-d0e60975bb33\\\\n2026-01-08T23:15:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ca0354b9-be71-4082-8ad4-d0e60975bb33 to /host/opt/cni/bin/\\\\n2026-01-08T23:15:51Z [verbose] multus-daemon started\\\\n2026-01-08T23:15:51Z [verbose] Readiness Indicator file check\\\\n2026-01-08T23:16:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88vq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dsh4d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.198790 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cbedd0-f69d-4a28-9077-13fed644be95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:16:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8qq2h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:16:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-g8gcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.216070 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.228900 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44d52a1c95c6827e22f93a41fa825beb3aae8cfd3147c64aecc7b109de1c82cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.241430 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.251941 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5cvrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9606db62-4e63-4b79-b069-289e09548144\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd90b1496393bb430d8710f51569627778f2e26cabc6a4fa4bbc415ff15f5b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lqcfh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-08T23:15:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5cvrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.264327 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52814823fd3d970919833f9dc4f7d124b82a478436a5b568879b115f6a1184b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.279078 4945 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-08T23:15:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e02a89f22e5f8d00e7163552b560ce4f4a46552bc424518caf577ab0a027731\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e09c855f11ee04813ba830d68f45bfcd7f7fa2d27622cb0db1a322c5a708fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-08T23:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.282791 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.282829 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.282838 4945 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID"
Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.282851 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.282861 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:00Z","lastTransitionTime":"2026-01-08T23:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.313299 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.313355 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.313370 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.313391 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.313406 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:00Z","lastTransitionTime":"2026-01-08T23:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 08 23:17:00 crc kubenswrapper[4945]: E0108 23:17:00.327234 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.330366 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.330398 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.330408 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.330426 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.330438 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:00Z","lastTransitionTime":"2026-01-08T23:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:00 crc kubenswrapper[4945]: E0108 23:17:00.343337 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.346481 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.346542 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.346551 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.346565 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.346576 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:00Z","lastTransitionTime":"2026-01-08T23:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:00 crc kubenswrapper[4945]: E0108 23:17:00.356920 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.359730 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.359749 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.359757 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.359768 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.359776 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:00Z","lastTransitionTime":"2026-01-08T23:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:00 crc kubenswrapper[4945]: E0108 23:17:00.370124 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.372750 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.372872 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.372957 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.373062 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.373138 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:00Z","lastTransitionTime":"2026-01-08T23:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:00 crc kubenswrapper[4945]: E0108 23:17:00.385371 4945 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-08T23:17:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3499f74f-1067-4bdd-9043-f09d8e65e05d\\\",\\\"systemUUID\\\":\\\"96e5d795-e15f-4ef9-8efa-c5f4d0e8076a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-08T23:17:00Z is after 2025-08-24T17:21:41Z" Jan 08 23:17:00 crc kubenswrapper[4945]: E0108 23:17:00.385478 4945 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.387280 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.387318 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.387327 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.387343 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.387353 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:00Z","lastTransitionTime":"2026-01-08T23:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.489237 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.489301 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.489320 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.489346 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.489369 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:00Z","lastTransitionTime":"2026-01-08T23:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.591313 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.591358 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.591369 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.591386 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.591396 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:00Z","lastTransitionTime":"2026-01-08T23:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.694351 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.694380 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.694389 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.694404 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.694412 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:00Z","lastTransitionTime":"2026-01-08T23:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.796717 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.796741 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.796750 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.796763 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.796772 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:00Z","lastTransitionTime":"2026-01-08T23:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.900041 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.900091 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.900108 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.900131 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:00 crc kubenswrapper[4945]: I0108 23:17:00.900147 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:00Z","lastTransitionTime":"2026-01-08T23:17:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.000244 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:17:01 crc kubenswrapper[4945]: E0108 23:17:01.000513 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.002463 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.002545 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.002572 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.002603 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.002630 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:01Z","lastTransitionTime":"2026-01-08T23:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.104409 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.104448 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.104458 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.104471 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.104482 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:01Z","lastTransitionTime":"2026-01-08T23:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.207062 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.207099 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.207111 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.207127 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.207138 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:01Z","lastTransitionTime":"2026-01-08T23:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.309731 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.309784 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.309799 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.309817 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.309828 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:01Z","lastTransitionTime":"2026-01-08T23:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.412824 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.412861 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.412869 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.412885 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.412893 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:01Z","lastTransitionTime":"2026-01-08T23:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.515103 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.515138 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.515148 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.515161 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.515170 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:01Z","lastTransitionTime":"2026-01-08T23:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.617753 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.617794 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.617805 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.617823 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.617836 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:01Z","lastTransitionTime":"2026-01-08T23:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.720973 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.721023 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.721035 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.721049 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.721059 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:01Z","lastTransitionTime":"2026-01-08T23:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.822894 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.822927 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.822940 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.822955 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.822967 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:01Z","lastTransitionTime":"2026-01-08T23:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.925575 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.925605 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.925616 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.925631 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.925644 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:01Z","lastTransitionTime":"2026-01-08T23:17:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.999562 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.999611 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:17:01 crc kubenswrapper[4945]: I0108 23:17:01.999611 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:17:01 crc kubenswrapper[4945]: E0108 23:17:01.999701 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:17:01 crc kubenswrapper[4945]: E0108 23:17:01.999783 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:17:02 crc kubenswrapper[4945]: E0108 23:17:02.000067 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.026955 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.027019 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.027031 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.027044 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.027053 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:02Z","lastTransitionTime":"2026-01-08T23:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.129049 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.129102 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.129113 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.129128 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.129141 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:02Z","lastTransitionTime":"2026-01-08T23:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.231481 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.231514 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.231527 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.231544 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.231553 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:02Z","lastTransitionTime":"2026-01-08T23:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.333958 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.334015 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.334024 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.334037 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.334045 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:02Z","lastTransitionTime":"2026-01-08T23:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.435652 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.435681 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.435690 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.435702 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.435710 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:02Z","lastTransitionTime":"2026-01-08T23:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.537224 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.537260 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.537270 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.537285 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.537296 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:02Z","lastTransitionTime":"2026-01-08T23:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.639782 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.639815 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.639829 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.639843 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.639852 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:02Z","lastTransitionTime":"2026-01-08T23:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.741805 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.741836 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.741845 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.741861 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.741873 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:02Z","lastTransitionTime":"2026-01-08T23:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.845342 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.845387 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.845399 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.845418 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.845434 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:02Z","lastTransitionTime":"2026-01-08T23:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.947881 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.947917 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.947929 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.947944 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.947957 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:02Z","lastTransitionTime":"2026-01-08T23:17:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:02 crc kubenswrapper[4945]: I0108 23:17:02.999494 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:17:02 crc kubenswrapper[4945]: E0108 23:17:02.999650 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
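[editor's note] From 23:17:00 onward the same five records (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, and the Ready=False condition with reason KubeletNotReady) repeat roughly every 100 ms. When triaging a capture like this it helps to deduplicate the "Node became not ready" records and count distinct reasons instead of reading them one by one. The sketch below assumes a one-record-per-line journal export (e.g. from journalctl; this dump wraps several records per line) saved at a hypothetical path kubelet.log; the record shape itself comes from the lines above.

# summarize_not_ready.py -- dedupe the repeating "Node became not ready" records.
# Triage sketch: the log path and the choice of Python are assumptions; the
# record format ('"Node became not ready" node="..." condition={...}') is
# taken from the kubelet log above.
import json
import re
from collections import Counter

PATTERN = re.compile(
    r'"Node became not ready" node="(?P<node>[^"]+)" condition=(?P<cond>\{.*\})'
)

counts = Counter()
with open("kubelet.log") as f:  # hypothetical one-record-per-line capture
    for line in f:
        m = PATTERN.search(line)
        if not m:
            continue
        cond = json.loads(m.group("cond"))  # the condition is valid JSON
        # Key on node, reason, and the message prefix so hundreds of
        # heartbeats collapse into a handful of distinct tuples.
        counts[(m.group("node"), cond["reason"], cond["message"].split(":")[0])] += 1

for (node, reason, msg), n in counts.most_common():
    print(f"{n:6d}x node={node} reason={reason} message~{msg}")

On this capture the output would be a single tuple repeated hundreds of times: reason=KubeletNotReady with the container-runtime-network-not-ready message.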
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.050013 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.050060 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.050070 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.050084 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.050092 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:03Z","lastTransitionTime":"2026-01-08T23:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.153010 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.153067 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.153086 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.153108 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.153125 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:03Z","lastTransitionTime":"2026-01-08T23:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.255214 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.255257 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.255269 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.255281 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.255290 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:03Z","lastTransitionTime":"2026-01-08T23:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.357634 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.357673 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.357683 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.357696 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.357706 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:03Z","lastTransitionTime":"2026-01-08T23:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.459401 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.459451 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.459463 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.459480 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.459493 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:03Z","lastTransitionTime":"2026-01-08T23:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.561758 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.561807 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.561820 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.561839 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.561847 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:03Z","lastTransitionTime":"2026-01-08T23:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.664136 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.664195 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.664213 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.664233 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.664248 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:03Z","lastTransitionTime":"2026-01-08T23:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.766671 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.766744 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.766756 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.766804 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.766817 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:03Z","lastTransitionTime":"2026-01-08T23:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.869169 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.869208 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.869219 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.869236 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.869248 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:03Z","lastTransitionTime":"2026-01-08T23:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.971648 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.971692 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.971700 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.971716 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:03 crc kubenswrapper[4945]: I0108 23:17:03.971725 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:03Z","lastTransitionTime":"2026-01-08T23:17:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:03.999980 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.000034 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:17:04 crc kubenswrapper[4945]: E0108 23:17:04.000132 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.000139 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:17:04 crc kubenswrapper[4945]: E0108 23:17:04.000287 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:17:04 crc kubenswrapper[4945]: E0108 23:17:04.000316 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.074079 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.074131 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.074143 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.074181 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.074191 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:04Z","lastTransitionTime":"2026-01-08T23:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.176468 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.176509 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.176518 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.176534 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.176544 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:04Z","lastTransitionTime":"2026-01-08T23:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
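[editor's note] Every failure above traces to one readiness probe: the kubelet finds no CNI configuration under /etc/kubernetes/cni/net.d/, reports NetworkReady=false with reason NetworkPluginNotReady, and therefore refuses to create sandboxes for the pending network pods (network-metrics-daemon, network-check-source/target, networking-console-plugin). A check equivalent to what the runtime keeps reporting might look like the sketch below; the directory is the one named in the log, while treating .conf/.conflist/.json as the candidate config files follows common CNI conventions and is an assumption here.

# check_cni_config.py -- has the network provider written its CNI config yet?
# Sketch only: the directory comes from the kubelet log; the accepted file
# extensions follow common CNI conventions and are an assumption.
import os

CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"
CNI_EXTS = (".conf", ".conflist", ".json")

try:
    configs = sorted(
        name for name in os.listdir(CNI_CONF_DIR) if name.endswith(CNI_EXTS)
    )
except FileNotFoundError:
    configs = []

if configs:
    print("CNI config present:", ", ".join(configs))
else:
    # Mirrors the condition the kubelet keeps reporting: NetworkReady=false,
    # reason NetworkPluginNotReady -- the network provider has not written
    # its configuration yet.
    print(f"no CNI configuration file in {CNI_CONF_DIR} -- network provider not started?")

Once the cluster's network plugin comes up and drops its config into that directory, NetworkReady flips to true and the kubelet starts the sandboxes it has been skipping in the records that follow.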
Has your network provider started?"} Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.278536 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.278583 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.278592 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.278609 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.278635 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:04Z","lastTransitionTime":"2026-01-08T23:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.381150 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.381189 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.381200 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.381215 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.381225 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:04Z","lastTransitionTime":"2026-01-08T23:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.512086 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.512118 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.512128 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.512143 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.512154 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:04Z","lastTransitionTime":"2026-01-08T23:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.614527 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.614572 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.614584 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.614597 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.614606 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:04Z","lastTransitionTime":"2026-01-08T23:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.717268 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.717298 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.717308 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.717321 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.717330 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:04Z","lastTransitionTime":"2026-01-08T23:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.819195 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.819296 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.819321 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.819348 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.819370 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:04Z","lastTransitionTime":"2026-01-08T23:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.921486 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.921530 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.921540 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.921557 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.921570 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:04Z","lastTransitionTime":"2026-01-08T23:17:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:04 crc kubenswrapper[4945]: I0108 23:17:04.999289 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:17:04 crc kubenswrapper[4945]: E0108 23:17:04.999426 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.023870 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.023924 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.023939 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.023957 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.023969 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:05Z","lastTransitionTime":"2026-01-08T23:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.126258 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.126292 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.126300 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.126312 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.126321 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:05Z","lastTransitionTime":"2026-01-08T23:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.228395 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.228440 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.228451 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.228464 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.228474 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:05Z","lastTransitionTime":"2026-01-08T23:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.330881 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.330927 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.330938 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.330952 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.330961 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:05Z","lastTransitionTime":"2026-01-08T23:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.432702 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.432743 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.432754 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.432769 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.432780 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:05Z","lastTransitionTime":"2026-01-08T23:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.535704 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.535777 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.535789 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.535810 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.535833 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:05Z","lastTransitionTime":"2026-01-08T23:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.639024 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.639119 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.639136 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.639156 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.639169 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:05Z","lastTransitionTime":"2026-01-08T23:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.741615 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.741657 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.741667 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.741681 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.741692 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:05Z","lastTransitionTime":"2026-01-08T23:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.845219 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.845298 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.845322 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.845356 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.845383 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:05Z","lastTransitionTime":"2026-01-08T23:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.948280 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.948315 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.948322 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.948337 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.948345 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:05Z","lastTransitionTime":"2026-01-08T23:17:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:05 crc kubenswrapper[4945]: I0108 23:17:05.999800 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:06 crc kubenswrapper[4945]: E0108 23:17:05.999892 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:05.999902 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:05.999944 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:17:06 crc kubenswrapper[4945]: E0108 23:17:06.000110 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:17:06 crc kubenswrapper[4945]: E0108 23:17:06.000321 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.050301 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.050342 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.050350 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.050364 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.050374 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:06Z","lastTransitionTime":"2026-01-08T23:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.154303 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.154701 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.154842 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.155024 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.155165 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:06Z","lastTransitionTime":"2026-01-08T23:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.259559 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.259625 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.259643 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.259671 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.259688 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:06Z","lastTransitionTime":"2026-01-08T23:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.362334 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.362374 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.362382 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.362395 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.362403 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:06Z","lastTransitionTime":"2026-01-08T23:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.464501 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.464551 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.464568 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.464590 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.464608 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:06Z","lastTransitionTime":"2026-01-08T23:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.568294 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.568347 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.568365 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.568390 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.568407 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:06Z","lastTransitionTime":"2026-01-08T23:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.670605 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.670675 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.670694 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.670722 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.670739 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:06Z","lastTransitionTime":"2026-01-08T23:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.773096 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.773137 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.773147 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.773162 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.773171 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:06Z","lastTransitionTime":"2026-01-08T23:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.875327 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.875374 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.875389 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.875409 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.875424 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:06Z","lastTransitionTime":"2026-01-08T23:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.978431 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.978510 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.978532 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.978558 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.978575 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:06Z","lastTransitionTime":"2026-01-08T23:17:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:06 crc kubenswrapper[4945]: I0108 23:17:06.999656 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:17:07 crc kubenswrapper[4945]: E0108 23:17:07.000204 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.081548 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.081634 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.081661 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.081693 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.081714 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:07Z","lastTransitionTime":"2026-01-08T23:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.184644 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.184708 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.184726 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.184751 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.184769 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:07Z","lastTransitionTime":"2026-01-08T23:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.287582 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.287653 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.287671 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.287695 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.287712 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:07Z","lastTransitionTime":"2026-01-08T23:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.390180 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.390234 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.390246 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.390265 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.390278 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:07Z","lastTransitionTime":"2026-01-08T23:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.493285 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.493356 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.493379 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.493410 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.493433 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:07Z","lastTransitionTime":"2026-01-08T23:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.596167 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.596216 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.596227 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.596245 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.596256 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:07Z","lastTransitionTime":"2026-01-08T23:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.698821 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.698893 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.698932 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.698965 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.699055 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:07Z","lastTransitionTime":"2026-01-08T23:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.792276 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs\") pod \"network-metrics-daemon-g8gcl\" (UID: \"53cbedd0-f69d-4a28-9077-13fed644be95\") " pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:17:07 crc kubenswrapper[4945]: E0108 23:17:07.792440 4945 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 08 23:17:07 crc kubenswrapper[4945]: E0108 23:17:07.792495 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs podName:53cbedd0-f69d-4a28-9077-13fed644be95 nodeName:}" failed. No retries permitted until 2026-01-08 23:18:11.792478879 +0000 UTC m=+162.103637825 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs") pod "network-metrics-daemon-g8gcl" (UID: "53cbedd0-f69d-4a28-9077-13fed644be95") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.802205 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.802250 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.802261 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.802277 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.802289 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:07Z","lastTransitionTime":"2026-01-08T23:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.904686 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.904729 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.904745 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.904761 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.904771 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:07Z","lastTransitionTime":"2026-01-08T23:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.999314 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.999325 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:07 crc kubenswrapper[4945]: E0108 23:17:07.999542 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:17:07 crc kubenswrapper[4945]: I0108 23:17:07.999603 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:17:07 crc kubenswrapper[4945]: E0108 23:17:07.999692 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:17:08 crc kubenswrapper[4945]: E0108 23:17:07.999849 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.007080 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.007127 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.007140 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.007158 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.007170 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:08Z","lastTransitionTime":"2026-01-08T23:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.109297 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.109372 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.109396 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.109426 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.109448 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:08Z","lastTransitionTime":"2026-01-08T23:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.211945 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.212009 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.212019 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.212033 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.212043 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:08Z","lastTransitionTime":"2026-01-08T23:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.313924 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.313959 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.313969 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.313984 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.314016 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:08Z","lastTransitionTime":"2026-01-08T23:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.416326 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.416360 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.416368 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.416380 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.416389 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:08Z","lastTransitionTime":"2026-01-08T23:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.518827 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.518865 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.518876 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.518892 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.518903 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:08Z","lastTransitionTime":"2026-01-08T23:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.621266 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.621306 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.621318 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.621335 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.621346 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:08Z","lastTransitionTime":"2026-01-08T23:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.723904 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.723948 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.723964 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.723986 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.724025 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:08Z","lastTransitionTime":"2026-01-08T23:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.826215 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.826260 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.826269 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.826283 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.826292 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:08Z","lastTransitionTime":"2026-01-08T23:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.928132 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.928166 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.928175 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.928188 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:08 crc kubenswrapper[4945]: I0108 23:17:08.928198 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:08Z","lastTransitionTime":"2026-01-08T23:17:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.000249 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:17:09 crc kubenswrapper[4945]: E0108 23:17:09.000700 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.030965 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.031016 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.031027 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.031042 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.031051 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:09Z","lastTransitionTime":"2026-01-08T23:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.133289 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.133354 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.133380 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.133411 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.133428 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:09Z","lastTransitionTime":"2026-01-08T23:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.236172 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.236280 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.236301 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.236327 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.236345 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:09Z","lastTransitionTime":"2026-01-08T23:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.339786 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.339886 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.339910 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.339939 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.339962 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:09Z","lastTransitionTime":"2026-01-08T23:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.442758 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.442784 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.442793 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.442805 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.442815 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:09Z","lastTransitionTime":"2026-01-08T23:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.544672 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.544710 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.544721 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.544736 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.544746 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:09Z","lastTransitionTime":"2026-01-08T23:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.648699 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.648910 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.649019 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.649099 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.649170 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:09Z","lastTransitionTime":"2026-01-08T23:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.752575 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.752637 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.752658 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.752683 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.752706 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:09Z","lastTransitionTime":"2026-01-08T23:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
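These entries repeat because the kubelet's network plugin manager finds no CNI network configuration under /etc/kubernetes/cni/net.d/. Readiness here roughly amounts to at least one *.conf/*.conflist/*.json file appearing in that directory; the sketch below is an illustrative approximation of that check, not the kubelet's actual code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// configuration file, which is roughly what the CNI plugin waits for
// before NetworkReady can flip to true.
func hasCNIConfig(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false // a missing directory also means "not ready"
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasCNIConfig("/etc/kubernetes/cni/net.d"))
}
```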
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.854865 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.854937 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.854974 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.855035 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.855060 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:09Z","lastTransitionTime":"2026-01-08T23:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.958062 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.958132 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.958153 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.958182 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:17:09 crc kubenswrapper[4945]: I0108 23:17:09.958208 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:09Z","lastTransitionTime":"2026-01-08T23:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.000165 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.000224 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 08 23:17:10 crc kubenswrapper[4945]: E0108 23:17:10.000293 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.000348 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 08 23:17:10 crc kubenswrapper[4945]: E0108 23:17:10.000470 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 08 23:17:10 crc kubenswrapper[4945]: E0108 23:17:10.000585 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.028198 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.028172667 podStartE2EDuration="48.028172667s" podCreationTimestamp="2026-01-08 23:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:10.020391124 +0000 UTC m=+100.331550070" watchObservedRunningTime="2026-01-08 23:17:10.028172667 +0000 UTC m=+100.339331653"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.060195 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.060234 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.060245 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.060261 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.060273 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:10Z","lastTransitionTime":"2026-01-08T23:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.068469 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podStartSLOduration=81.068451301 podStartE2EDuration="1m21.068451301s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:10.043364875 +0000 UTC m=+100.354523891" watchObservedRunningTime="2026-01-08 23:17:10.068451301 +0000 UTC m=+100.379610247"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.068696 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-pd2nq" podStartSLOduration=81.068687517 podStartE2EDuration="1m21.068687517s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:10.068435311 +0000 UTC m=+100.379594257" watchObservedRunningTime="2026-01-08 23:17:10.068687517 +0000 UTC m=+100.379846473"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.081382 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8khct" podStartSLOduration=81.081364179 podStartE2EDuration="1m21.081364179s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:10.08103149 +0000 UTC m=+100.392190466" watchObservedRunningTime="2026-01-08 23:17:10.081364179 +0000 UTC m=+100.392523125"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.132491 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=78.132475246 podStartE2EDuration="1m18.132475246s" podCreationTimestamp="2026-01-08 23:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:10.132384734 +0000 UTC m=+100.443543690" watchObservedRunningTime="2026-01-08 23:17:10.132475246 +0000 UTC m=+100.443634192"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.133066 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=34.133060181 podStartE2EDuration="34.133060181s" podCreationTimestamp="2026-01-08 23:16:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:10.10015928 +0000 UTC m=+100.411318226" watchObservedRunningTime="2026-01-08 23:17:10.133060181 +0000 UTC m=+100.444219127"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.148183 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=82.148168066 podStartE2EDuration="1m22.148168066s" podCreationTimestamp="2026-01-08 23:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:10.147443357 +0000 UTC m=+100.458602303" watchObservedRunningTime="2026-01-08 23:17:10.148168066 +0000 UTC m=+100.459327002"
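podStartSLOduration in these entries is simply the watch-observed running time minus podCreationTimestamp; for the openshift-kube-scheduler-crc entry above, 23:17:10.028172667 minus 23:16:22 gives the logged 48.028172667s. A quick arithmetic check (values copied from that entry):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the openshift-kube-scheduler-crc entry above.
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-08 23:16:22 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-01-08 23:17:10.028172667 +0000 UTC")
	fmt.Println(observed.Sub(created).Seconds()) // 48.028172667 == podStartSLOduration
}
```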
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.163413 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.163473 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.163483 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.163496 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.163506 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:10Z","lastTransitionTime":"2026-01-08T23:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.175343 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=82.175328037 podStartE2EDuration="1m22.175328037s" podCreationTimestamp="2026-01-08 23:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:10.163347524 +0000 UTC m=+100.474506470" watchObservedRunningTime="2026-01-08 23:17:10.175328037 +0000 UTC m=+100.486486983"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.205583 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-5cvrd" podStartSLOduration=81.205566378 podStartE2EDuration="1m21.205566378s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:10.205337302 +0000 UTC m=+100.516496268" watchObservedRunningTime="2026-01-08 23:17:10.205566378 +0000 UTC m=+100.516725324"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.220412 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xlbqw" podStartSLOduration=81.220399366 podStartE2EDuration="1m21.220399366s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:10.219896913 +0000 UTC m=+100.531055859" watchObservedRunningTime="2026-01-08 23:17:10.220399366 +0000 UTC m=+100.531558312"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.242165 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-dsh4d" podStartSLOduration=81.242144465 podStartE2EDuration="1m21.242144465s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:10.231879997 +0000 UTC m=+100.543038943" watchObservedRunningTime="2026-01-08 23:17:10.242144465 +0000 UTC m=+100.553303411"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.265405 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.265453 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.265464 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.265477 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.265487 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:10Z","lastTransitionTime":"2026-01-08T23:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.367250 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.367287 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.367296 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.367310 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.367320 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:10Z","lastTransitionTime":"2026-01-08T23:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.469816 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.469847 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.469855 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.469867 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.469877 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:10Z","lastTransitionTime":"2026-01-08T23:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.572612 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.572685 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.572702 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.572717 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.572729 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:10Z","lastTransitionTime":"2026-01-08T23:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.609095 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.609160 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.609174 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.609190 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.609200 4945 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-08T23:17:10Z","lastTransitionTime":"2026-01-08T23:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.649328 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"]
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.649851 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.652921 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.653207 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.653544 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.655071 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.735651 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08700c2d-cccd-4fa7-bcd4-432f26620965-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z7wjk\" (UID: \"08700c2d-cccd-4fa7-bcd4-432f26620965\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.735699 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08700c2d-cccd-4fa7-bcd4-432f26620965-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z7wjk\" (UID: \"08700c2d-cccd-4fa7-bcd4-432f26620965\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.735743 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/08700c2d-cccd-4fa7-bcd4-432f26620965-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z7wjk\" (UID: \"08700c2d-cccd-4fa7-bcd4-432f26620965\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.735852 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08700c2d-cccd-4fa7-bcd4-432f26620965-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z7wjk\" (UID: \"08700c2d-cccd-4fa7-bcd4-432f26620965\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.735930 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/08700c2d-cccd-4fa7-bcd4-432f26620965-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z7wjk\" (UID: \"08700c2d-cccd-4fa7-bcd4-432f26620965\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.836859 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/08700c2d-cccd-4fa7-bcd4-432f26620965-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z7wjk\" (UID: \"08700c2d-cccd-4fa7-bcd4-432f26620965\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"
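The reflector.go:368 lines record client-go watch caches syncing for the Secrets and ConfigMaps the new pod mounts. The same machinery in miniature, as generic client-go usage rather than kubelet code (the kubeconfig path is a placeholder):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch Secrets in one namespace, much as the kubelet tracks the
	// objects backing a pod's volumes.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 30*time.Second,
		informers.WithNamespace("openshift-cluster-version"))
	inf := factory.Core().V1().Secrets().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	if cache.WaitForCacheSync(stop, inf.HasSynced) {
		fmt.Println("caches populated") // analogous to the reflector.go:368 lines
	}
}
```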
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.836964 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08700c2d-cccd-4fa7-bcd4-432f26620965-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z7wjk\" (UID: \"08700c2d-cccd-4fa7-bcd4-432f26620965\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.837038 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08700c2d-cccd-4fa7-bcd4-432f26620965-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z7wjk\" (UID: \"08700c2d-cccd-4fa7-bcd4-432f26620965\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.837126 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/08700c2d-cccd-4fa7-bcd4-432f26620965-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z7wjk\" (UID: \"08700c2d-cccd-4fa7-bcd4-432f26620965\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.837175 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08700c2d-cccd-4fa7-bcd4-432f26620965-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z7wjk\" (UID: \"08700c2d-cccd-4fa7-bcd4-432f26620965\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.837510 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/08700c2d-cccd-4fa7-bcd4-432f26620965-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z7wjk\" (UID: \"08700c2d-cccd-4fa7-bcd4-432f26620965\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.837564 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/08700c2d-cccd-4fa7-bcd4-432f26620965-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z7wjk\" (UID: \"08700c2d-cccd-4fa7-bcd4-432f26620965\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.838018 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08700c2d-cccd-4fa7-bcd4-432f26620965-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z7wjk\" (UID: \"08700c2d-cccd-4fa7-bcd4-432f26620965\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.849964 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08700c2d-cccd-4fa7-bcd4-432f26620965-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z7wjk\" (UID: \"08700c2d-cccd-4fa7-bcd4-432f26620965\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.854725 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08700c2d-cccd-4fa7-bcd4-432f26620965-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z7wjk\" (UID: \"08700c2d-cccd-4fa7-bcd4-432f26620965\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"
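The UniqueName plugin prefixes above (configmap/, projected/, host-path/, secret/) let one read back the pod's volume section. A reconstruction with the API types, inferred from the log rather than the actual manifest; host paths and the ConfigMap name are assumptions:

```go
package main

import v1 "k8s.io/api/core/v1"

// Volumes of cluster-version-operator-5c965bbfc6-z7wjk as implied by the
// mount entries above. Names in comments are guesses, not log facts.
var volumes = []v1.Volume{
	{Name: "service-ca", VolumeSource: v1.VolumeSource{
		ConfigMap: &v1.ConfigMapVolumeSource{
			LocalObjectReference: v1.LocalObjectReference{Name: "service-ca"}}}}, // ConfigMap name assumed
	{Name: "kube-api-access", VolumeSource: v1.VolumeSource{
		Projected: &v1.ProjectedVolumeSource{}}}, // token/CA projection sources elided
	{Name: "etc-ssl-certs", VolumeSource: v1.VolumeSource{
		HostPath: &v1.HostPathVolumeSource{Path: "/etc/ssl/certs"}}}, // path assumed
	{Name: "etc-cvo-updatepayloads", VolumeSource: v1.VolumeSource{
		HostPath: &v1.HostPathVolumeSource{Path: "/etc/cvo/updatepayloads"}}}, // path assumed
	{Name: "serving-cert", VolumeSource: v1.VolumeSource{
		Secret: &v1.SecretVolumeSource{SecretName: "cluster-version-operator-serving-cert"}}},
}

func main() { _ = volumes }
```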
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.968221 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk"
Jan 08 23:17:10 crc kubenswrapper[4945]: I0108 23:17:10.999301 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl"
Jan 08 23:17:10 crc kubenswrapper[4945]: E0108 23:17:10.999444 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95"
Jan 08 23:17:11 crc kubenswrapper[4945]: I0108 23:17:11.000408 4945 scope.go:117] "RemoveContainer" containerID="054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d"
Jan 08 23:17:11 crc kubenswrapper[4945]: E0108 23:17:11.000659 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f"
Jan 08 23:17:11 crc kubenswrapper[4945]: I0108 23:17:11.532895 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk" event={"ID":"08700c2d-cccd-4fa7-bcd4-432f26620965","Type":"ContainerStarted","Data":"8359087292f6c2685bfcdb0bc90c7875d6b75bf83dff503b6723c79fd4d07c76"}
Jan 08 23:17:11 crc kubenswrapper[4945]: I0108 23:17:11.532950 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk" event={"ID":"08700c2d-cccd-4fa7-bcd4-432f26620965","Type":"ContainerStarted","Data":"5600fbb4957a8460c350e5851be652e92382ddc17e0ba719cb724f4e627fc7ee"}
Jan 08 23:17:11 crc kubenswrapper[4945]: I0108 23:17:11.545943 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z7wjk" podStartSLOduration=82.545918854 podStartE2EDuration="1m22.545918854s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:11.545746489 +0000 UTC m=+101.856905455" watchObservedRunningTime="2026-01-08 23:17:11.545918854 +0000 UTC m=+101.857077820"
Jan 08 23:17:12 crc kubenswrapper[4945]: I0108 23:17:12.000182 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 08 23:17:12 crc kubenswrapper[4945]: I0108 23:17:12.000245 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
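The back-off figures in these entries are consistent with the kubelet's CrashLoopBackOff policy as commonly described: the restart delay starts at 10s and doubles per failed restart up to a 5-minute cap, which would put multus at 10s after its first crash (seen below at 23:17:23) and ovnkube-controller at 40s after its third. A sketch of that schedule under those assumed defaults:

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns the kubelet-style restart delay after n failed
// restarts: 10s, doubling per failure, capped at 5m (assumed upstream defaults).
func crashLoopDelay(n int) time.Duration {
	d := 10 * time.Second
	for i := 1; i < n; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Println(n, crashLoopDelay(n)) // 10s 20s 40s 1m20s 2m40s 5m0s
	}
}
```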
Jan 08 23:17:12 crc kubenswrapper[4945]: I0108 23:17:12.000339 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 08 23:17:12 crc kubenswrapper[4945]: E0108 23:17:12.000353 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 08 23:17:12 crc kubenswrapper[4945]: E0108 23:17:12.000469 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 08 23:17:12 crc kubenswrapper[4945]: E0108 23:17:12.000561 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 08 23:17:13 crc kubenswrapper[4945]: I0108 23:17:12.999974 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl"
Jan 08 23:17:13 crc kubenswrapper[4945]: E0108 23:17:13.000109 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95"
Jan 08 23:17:13 crc kubenswrapper[4945]: I0108 23:17:13.999615 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 08 23:17:13 crc kubenswrapper[4945]: I0108 23:17:13.999658 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 08 23:17:13 crc kubenswrapper[4945]: I0108 23:17:13.999640 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 08 23:17:13 crc kubenswrapper[4945]: E0108 23:17:13.999746 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:17:13 crc kubenswrapper[4945]: E0108 23:17:13.999796 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:17:13 crc kubenswrapper[4945]: E0108 23:17:13.999883 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:17:14 crc kubenswrapper[4945]: I0108 23:17:14.999471 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:17:15 crc kubenswrapper[4945]: E0108 23:17:14.999608 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:17:16 crc kubenswrapper[4945]: I0108 23:17:16.000217 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:16 crc kubenswrapper[4945]: I0108 23:17:16.000252 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:17:16 crc kubenswrapper[4945]: I0108 23:17:16.000295 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:17:16 crc kubenswrapper[4945]: E0108 23:17:16.000446 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:17:16 crc kubenswrapper[4945]: E0108 23:17:16.000556 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:17:16 crc kubenswrapper[4945]: E0108 23:17:16.000674 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:17:17 crc kubenswrapper[4945]: I0108 23:17:17.000204 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:17:17 crc kubenswrapper[4945]: E0108 23:17:17.000324 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:17:18 crc kubenswrapper[4945]: I0108 23:17:18.000119 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:17:18 crc kubenswrapper[4945]: I0108 23:17:18.000216 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:18 crc kubenswrapper[4945]: I0108 23:17:18.000359 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:17:18 crc kubenswrapper[4945]: E0108 23:17:18.000351 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:17:18 crc kubenswrapper[4945]: E0108 23:17:18.000454 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:17:18 crc kubenswrapper[4945]: E0108 23:17:18.000532 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:17:19 crc kubenswrapper[4945]: I0108 23:17:19.000353 4945 util.go:30] "No sandbox for pod can be found. 
Jan 08 23:17:19 crc kubenswrapper[4945]: E0108 23:17:19.000934 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95"
Jan 08 23:17:20 crc kubenswrapper[4945]: I0108 23:17:20.000764 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 08 23:17:20 crc kubenswrapper[4945]: I0108 23:17:20.000887 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 08 23:17:20 crc kubenswrapper[4945]: E0108 23:17:20.002107 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 08 23:17:20 crc kubenswrapper[4945]: I0108 23:17:20.002125 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 08 23:17:20 crc kubenswrapper[4945]: E0108 23:17:20.002222 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 08 23:17:20 crc kubenswrapper[4945]: E0108 23:17:20.002327 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 08 23:17:20 crc kubenswrapper[4945]: I0108 23:17:20.999459 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl"
Jan 08 23:17:20 crc kubenswrapper[4945]: E0108 23:17:20.999619 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95"
Jan 08 23:17:21 crc kubenswrapper[4945]: I0108 23:17:21.999598 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 08 23:17:21 crc kubenswrapper[4945]: I0108 23:17:21.999689 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 08 23:17:22 crc kubenswrapper[4945]: I0108 23:17:21.999624 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 08 23:17:22 crc kubenswrapper[4945]: E0108 23:17:21.999838 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 08 23:17:22 crc kubenswrapper[4945]: E0108 23:17:21.999897 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 08 23:17:22 crc kubenswrapper[4945]: E0108 23:17:22.000013 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 08 23:17:22 crc kubenswrapper[4945]: I0108 23:17:22.001548 4945 scope.go:117] "RemoveContainer" containerID="054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d"
Jan 08 23:17:22 crc kubenswrapper[4945]: E0108 23:17:22.001845 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gcbcl_openshift-ovn-kubernetes(e12d0822-44c5-4bf0-a785-cf478c66210f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f"
Jan 08 23:17:23 crc kubenswrapper[4945]: I0108 23:17:23.000244 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl"
Jan 08 23:17:23 crc kubenswrapper[4945]: E0108 23:17:23.001117 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95"
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:17:23 crc kubenswrapper[4945]: I0108 23:17:23.586526 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dsh4d_0fa9b342-4b22-49db-9022-2dd852e7d835/kube-multus/1.log" Jan 08 23:17:23 crc kubenswrapper[4945]: I0108 23:17:23.587206 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dsh4d_0fa9b342-4b22-49db-9022-2dd852e7d835/kube-multus/0.log" Jan 08 23:17:23 crc kubenswrapper[4945]: I0108 23:17:23.587264 4945 generic.go:334] "Generic (PLEG): container finished" podID="0fa9b342-4b22-49db-9022-2dd852e7d835" containerID="614e85201ea3be7b566c1a53f206334251065d8d611ea95071f08d79979fb921" exitCode=1 Jan 08 23:17:23 crc kubenswrapper[4945]: I0108 23:17:23.587304 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dsh4d" event={"ID":"0fa9b342-4b22-49db-9022-2dd852e7d835","Type":"ContainerDied","Data":"614e85201ea3be7b566c1a53f206334251065d8d611ea95071f08d79979fb921"} Jan 08 23:17:23 crc kubenswrapper[4945]: I0108 23:17:23.587350 4945 scope.go:117] "RemoveContainer" containerID="39bfd4237e6a7d443510674556deecaa7a90fc27bccf96f8c8ce08933e91ee11" Jan 08 23:17:23 crc kubenswrapper[4945]: I0108 23:17:23.587877 4945 scope.go:117] "RemoveContainer" containerID="614e85201ea3be7b566c1a53f206334251065d8d611ea95071f08d79979fb921" Jan 08 23:17:23 crc kubenswrapper[4945]: E0108 23:17:23.588131 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-dsh4d_openshift-multus(0fa9b342-4b22-49db-9022-2dd852e7d835)\"" pod="openshift-multus/multus-dsh4d" podUID="0fa9b342-4b22-49db-9022-2dd852e7d835" Jan 08 23:17:23 crc kubenswrapper[4945]: I0108 23:17:23.999551 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:24 crc kubenswrapper[4945]: I0108 23:17:23.999650 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:17:24 crc kubenswrapper[4945]: E0108 23:17:23.999774 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:17:24 crc kubenswrapper[4945]: E0108 23:17:23.999956 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:17:24 crc kubenswrapper[4945]: I0108 23:17:24.000121 4945 util.go:30] "No sandbox for pod can be found. 
Jan 08 23:17:24 crc kubenswrapper[4945]: E0108 23:17:24.000196 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 08 23:17:24 crc kubenswrapper[4945]: I0108 23:17:24.592322 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dsh4d_0fa9b342-4b22-49db-9022-2dd852e7d835/kube-multus/1.log"
Jan 08 23:17:24 crc kubenswrapper[4945]: I0108 23:17:24.999542 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl"
Jan 08 23:17:25 crc kubenswrapper[4945]: E0108 23:17:24.999698 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95"
Jan 08 23:17:26 crc kubenswrapper[4945]: I0108 23:17:26.000148 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 08 23:17:26 crc kubenswrapper[4945]: I0108 23:17:26.000158 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 08 23:17:26 crc kubenswrapper[4945]: E0108 23:17:26.000386 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 08 23:17:26 crc kubenswrapper[4945]: E0108 23:17:26.000491 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 08 23:17:26 crc kubenswrapper[4945]: I0108 23:17:26.001085 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 08 23:17:26 crc kubenswrapper[4945]: E0108 23:17:26.001184 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:17:26 crc kubenswrapper[4945]: I0108 23:17:26.999635 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:17:27 crc kubenswrapper[4945]: E0108 23:17:27.000328 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:17:28 crc kubenswrapper[4945]: I0108 23:17:28.000036 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:17:28 crc kubenswrapper[4945]: I0108 23:17:28.000088 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:28 crc kubenswrapper[4945]: I0108 23:17:28.000132 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:17:28 crc kubenswrapper[4945]: E0108 23:17:28.000249 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:17:28 crc kubenswrapper[4945]: E0108 23:17:28.000353 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:17:28 crc kubenswrapper[4945]: E0108 23:17:28.000466 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:17:28 crc kubenswrapper[4945]: I0108 23:17:28.999717 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:17:28 crc kubenswrapper[4945]: E0108 23:17:28.999858 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:17:29 crc kubenswrapper[4945]: E0108 23:17:29.958654 4945 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 08 23:17:29 crc kubenswrapper[4945]: I0108 23:17:29.999744 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:17:30 crc kubenswrapper[4945]: E0108 23:17:30.001966 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:17:30 crc kubenswrapper[4945]: I0108 23:17:30.002110 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:30 crc kubenswrapper[4945]: I0108 23:17:30.002238 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:17:30 crc kubenswrapper[4945]: E0108 23:17:30.002271 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:17:30 crc kubenswrapper[4945]: E0108 23:17:30.002398 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:17:30 crc kubenswrapper[4945]: E0108 23:17:30.092655 4945 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 08 23:17:31 crc kubenswrapper[4945]: I0108 23:17:31.000114 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:17:31 crc kubenswrapper[4945]: E0108 23:17:31.000254 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:17:31 crc kubenswrapper[4945]: I0108 23:17:31.999547 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:31 crc kubenswrapper[4945]: I0108 23:17:31.999583 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:17:31 crc kubenswrapper[4945]: I0108 23:17:31.999781 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:17:32 crc kubenswrapper[4945]: E0108 23:17:31.999880 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:17:32 crc kubenswrapper[4945]: E0108 23:17:31.999935 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:17:32 crc kubenswrapper[4945]: E0108 23:17:32.000131 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:17:32 crc kubenswrapper[4945]: I0108 23:17:32.999304 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:17:33 crc kubenswrapper[4945]: E0108 23:17:32.999533 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:17:33 crc kubenswrapper[4945]: I0108 23:17:33.999871 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:17:33 crc kubenswrapper[4945]: I0108 23:17:33.999941 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:17:33 crc kubenswrapper[4945]: I0108 23:17:33.999910 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:34 crc kubenswrapper[4945]: E0108 23:17:34.000147 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:17:34 crc kubenswrapper[4945]: E0108 23:17:34.000216 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:17:34 crc kubenswrapper[4945]: E0108 23:17:34.000310 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:17:34 crc kubenswrapper[4945]: I0108 23:17:34.999863 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:17:35 crc kubenswrapper[4945]: E0108 23:17:35.000025 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:17:35 crc kubenswrapper[4945]: E0108 23:17:35.093918 4945 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 08 23:17:35 crc kubenswrapper[4945]: I0108 23:17:35.999896 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:36 crc kubenswrapper[4945]: I0108 23:17:36.000045 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:17:36 crc kubenswrapper[4945]: I0108 23:17:36.000155 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:17:36 crc kubenswrapper[4945]: E0108 23:17:36.000268 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:17:36 crc kubenswrapper[4945]: I0108 23:17:36.000378 4945 scope.go:117] "RemoveContainer" containerID="614e85201ea3be7b566c1a53f206334251065d8d611ea95071f08d79979fb921" Jan 08 23:17:36 crc kubenswrapper[4945]: E0108 23:17:36.000599 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:17:36 crc kubenswrapper[4945]: E0108 23:17:36.000745 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:17:36 crc kubenswrapper[4945]: I0108 23:17:36.001934 4945 scope.go:117] "RemoveContainer" containerID="054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d" Jan 08 23:17:36 crc kubenswrapper[4945]: I0108 23:17:36.642113 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovnkube-controller/3.log" Jan 08 23:17:36 crc kubenswrapper[4945]: I0108 23:17:36.645876 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerStarted","Data":"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e"} Jan 08 23:17:36 crc kubenswrapper[4945]: I0108 23:17:36.646416 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:17:36 crc kubenswrapper[4945]: I0108 23:17:36.648296 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dsh4d_0fa9b342-4b22-49db-9022-2dd852e7d835/kube-multus/1.log" Jan 08 23:17:36 crc kubenswrapper[4945]: I0108 23:17:36.648351 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dsh4d" event={"ID":"0fa9b342-4b22-49db-9022-2dd852e7d835","Type":"ContainerStarted","Data":"df77f4c64c58686ccf143a83433aacc0707b1ac8a2d94b795b5f4e382d2142d8"} Jan 08 23:17:36 crc kubenswrapper[4945]: I0108 23:17:36.679458 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podStartSLOduration=107.679432644 podStartE2EDuration="1m47.679432644s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:36.678309864 +0000 UTC m=+126.989468850" watchObservedRunningTime="2026-01-08 23:17:36.679432644 +0000 UTC m=+126.990591600" Jan 08 23:17:36 crc kubenswrapper[4945]: I0108 23:17:36.815081 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-g8gcl"] Jan 08 23:17:36 crc kubenswrapper[4945]: I0108 23:17:36.815320 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:17:36 crc kubenswrapper[4945]: E0108 23:17:36.815505 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:17:38 crc kubenswrapper[4945]: I0108 23:17:38.098114 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:17:38 crc kubenswrapper[4945]: E0108 23:17:38.098265 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:17:38 crc kubenswrapper[4945]: I0108 23:17:38.098306 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:38 crc kubenswrapper[4945]: E0108 23:17:38.098442 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:17:38 crc kubenswrapper[4945]: I0108 23:17:38.098488 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:17:38 crc kubenswrapper[4945]: E0108 23:17:38.098600 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:17:39 crc kubenswrapper[4945]: I0108 23:17:39.001195 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:17:39 crc kubenswrapper[4945]: E0108 23:17:39.001398 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-g8gcl" podUID="53cbedd0-f69d-4a28-9077-13fed644be95" Jan 08 23:17:40 crc kubenswrapper[4945]: I0108 23:17:39.999714 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:17:40 crc kubenswrapper[4945]: I0108 23:17:40.000081 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:17:40 crc kubenswrapper[4945]: I0108 23:17:40.000704 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:40 crc kubenswrapper[4945]: E0108 23:17:40.002836 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 08 23:17:40 crc kubenswrapper[4945]: E0108 23:17:40.003046 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 08 23:17:40 crc kubenswrapper[4945]: E0108 23:17:40.003222 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:40.999964 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.003495 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.005228 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.165696 4945 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.213372 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.213881 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.216742 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.217186 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.217516 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-76rmp"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.218130 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.219469 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.219814 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gvrb5"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.219894 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.220069 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.220198 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.220236 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.220396 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.221042 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.222736 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.223589 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lmmph"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.224075 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ppfqk"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.224391 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.225018 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jsmc9"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.225607 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jsmc9" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.226037 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.224109 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.227889 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-serving-cert\") pod \"route-controller-manager-6576b87f9c-s87cv\" (UID: \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.227930 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/329c2ef0-0dff-43fd-b3e3-0a65aad24225-service-ca-bundle\") pod \"authentication-operator-69f744f599-76rmp\" (UID: \"329c2ef0-0dff-43fd-b3e3-0a65aad24225\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.227953 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f15a080f-5182-49ab-bcb8-75d85654378a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lmmph\" (UID: \"f15a080f-5182-49ab-bcb8-75d85654378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.227978 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m27kt\" (UniqueName: \"kubernetes.io/projected/f15a080f-5182-49ab-bcb8-75d85654378a-kube-api-access-m27kt\") pod \"machine-api-operator-5694c8668f-lmmph\" (UID: \"f15a080f-5182-49ab-bcb8-75d85654378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228020 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-audit-policies\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228039 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/234da26f-da47-42e2-9041-57f2c7ba819f-machine-approver-tls\") pod \"machine-approver-56656f9798-wq66q\" (UID: \"234da26f-da47-42e2-9041-57f2c7ba819f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228059 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-config\") pod \"controller-manager-879f6c89f-gvrb5\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228080 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f15a080f-5182-49ab-bcb8-75d85654378a-images\") pod \"machine-api-operator-5694c8668f-lmmph\" (UID: 
\"f15a080f-5182-49ab-bcb8-75d85654378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228099 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-etcd-client\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228151 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-encryption-config\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228172 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/329c2ef0-0dff-43fd-b3e3-0a65aad24225-serving-cert\") pod \"authentication-operator-69f744f599-76rmp\" (UID: \"329c2ef0-0dff-43fd-b3e3-0a65aad24225\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228192 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/234da26f-da47-42e2-9041-57f2c7ba819f-auth-proxy-config\") pod \"machine-approver-56656f9798-wq66q\" (UID: \"234da26f-da47-42e2-9041-57f2c7ba819f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228219 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckkk6\" (UniqueName: \"kubernetes.io/projected/234da26f-da47-42e2-9041-57f2c7ba819f-kube-api-access-ckkk6\") pod \"machine-approver-56656f9798-wq66q\" (UID: \"234da26f-da47-42e2-9041-57f2c7ba819f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228248 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/329c2ef0-0dff-43fd-b3e3-0a65aad24225-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-76rmp\" (UID: \"329c2ef0-0dff-43fd-b3e3-0a65aad24225\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228288 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpjzw\" (UniqueName: \"kubernetes.io/projected/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-kube-api-access-vpjzw\") pod \"route-controller-manager-6576b87f9c-s87cv\" (UID: \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228320 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-client-ca\") pod \"route-controller-manager-6576b87f9c-s87cv\" 
(UID: \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228340 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2534d4c-181b-45a2-8fec-118b7f17d296-serving-cert\") pod \"controller-manager-879f6c89f-gvrb5\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228361 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228383 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxdmz\" (UniqueName: \"kubernetes.io/projected/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-kube-api-access-mxdmz\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228405 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gvrb5\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228425 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228445 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-audit-dir\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228464 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/329c2ef0-0dff-43fd-b3e3-0a65aad24225-config\") pod \"authentication-operator-69f744f599-76rmp\" (UID: \"329c2ef0-0dff-43fd-b3e3-0a65aad24225\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228485 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-config\") pod \"route-controller-manager-6576b87f9c-s87cv\" (UID: \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228506 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv9gz\" (UniqueName: \"kubernetes.io/projected/329c2ef0-0dff-43fd-b3e3-0a65aad24225-kube-api-access-fv9gz\") pod \"authentication-operator-69f744f599-76rmp\" (UID: \"329c2ef0-0dff-43fd-b3e3-0a65aad24225\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228531 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15a080f-5182-49ab-bcb8-75d85654378a-config\") pod \"machine-api-operator-5694c8668f-lmmph\" (UID: \"f15a080f-5182-49ab-bcb8-75d85654378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228551 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttkrg\" (UniqueName: \"kubernetes.io/projected/d2534d4c-181b-45a2-8fec-118b7f17d296-kube-api-access-ttkrg\") pod \"controller-manager-879f6c89f-gvrb5\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228586 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-client-ca\") pod \"controller-manager-879f6c89f-gvrb5\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228608 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234da26f-da47-42e2-9041-57f2c7ba819f-config\") pod \"machine-approver-56656f9798-wq66q\" (UID: \"234da26f-da47-42e2-9041-57f2c7ba819f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.228628 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-serving-cert\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.233305 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hj428"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.234330 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hj428" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.235068 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5d6z8"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.235378 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-n2sv5"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.235698 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-n2sv5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.236004 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5d6z8" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.254815 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.255032 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.255142 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.255053 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.254927 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.255504 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.255926 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.257582 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.257829 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.258062 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.258264 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.258546 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.258624 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.258724 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.258855 4945 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.259082 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.260524 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.260664 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.261049 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.261343 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.261649 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.261934 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.262243 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.262425 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.262480 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.266305 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.267124 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.267351 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.267488 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.267594 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.267705 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.267817 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.267918 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.268119 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 08 
23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.268336 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.268473 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.268704 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.270104 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.272209 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.272375 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.272414 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.272535 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.272599 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.272725 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.285346 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.286519 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.289388 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.289566 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.296795 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-62c5v"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.297367 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.298210 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.298640 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.299093 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 08 23:17:41 crc 
kubenswrapper[4945]: I0108 23:17:41.272790 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.300455 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.299472 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.311551 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.312758 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j8vl9"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.313674 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.314051 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-62c5v" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.314081 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.314496 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.315390 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.315870 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.316226 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.320703 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.320961 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fcjl"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.322038 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fcjl" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.328539 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.328938 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329118 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329210 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwk56"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329301 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckkk6\" (UniqueName: \"kubernetes.io/projected/234da26f-da47-42e2-9041-57f2c7ba819f-kube-api-access-ckkk6\") pod \"machine-approver-56656f9798-wq66q\" (UID: \"234da26f-da47-42e2-9041-57f2c7ba819f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329349 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/329c2ef0-0dff-43fd-b3e3-0a65aad24225-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-76rmp\" (UID: \"329c2ef0-0dff-43fd-b3e3-0a65aad24225\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329396 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dea31f18-7796-48e2-af32-a185d3221a4a-profile-collector-cert\") pod \"catalog-operator-68c6474976-h66px\" (UID: \"dea31f18-7796-48e2-af32-a185d3221a4a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329426 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74e7ac98-1603-4ed9-9306-735d271e5142-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329454 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74e7ac98-1603-4ed9-9306-735d271e5142-audit-dir\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329488 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpjzw\" (UniqueName: \"kubernetes.io/projected/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-kube-api-access-vpjzw\") pod \"route-controller-manager-6576b87f9c-s87cv\" (UID: \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329517 4945 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b65nb\" (UniqueName: \"kubernetes.io/projected/e1b585b7-5543-4705-9167-d53bfc8c1f8d-kube-api-access-b65nb\") pod \"package-server-manager-789f6589d5-5fcjl\" (UID: \"e1b585b7-5543-4705-9167-d53bfc8c1f8d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fcjl" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329547 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f4787a9-c74e-4dc3-b90a-0ff81584f890-serving-cert\") pod \"openshift-config-operator-7777fb866f-hj428\" (UID: \"1f4787a9-c74e-4dc3-b90a-0ff81584f890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hj428" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329582 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-client-ca\") pod \"route-controller-manager-6576b87f9c-s87cv\" (UID: \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329610 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2534d4c-181b-45a2-8fec-118b7f17d296-serving-cert\") pod \"controller-manager-879f6c89f-gvrb5\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329640 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lnlw\" (UniqueName: \"kubernetes.io/projected/1f4787a9-c74e-4dc3-b90a-0ff81584f890-kube-api-access-8lnlw\") pod \"openshift-config-operator-7777fb866f-hj428\" (UID: \"1f4787a9-c74e-4dc3-b90a-0ff81584f890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hj428" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329671 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329699 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxdmz\" (UniqueName: \"kubernetes.io/projected/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-kube-api-access-mxdmz\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329725 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gvrb5\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329754 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/74e7ac98-1603-4ed9-9306-735d271e5142-encryption-config\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329783 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329803 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-audit-dir\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329832 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/329c2ef0-0dff-43fd-b3e3-0a65aad24225-config\") pod \"authentication-operator-69f744f599-76rmp\" (UID: \"329c2ef0-0dff-43fd-b3e3-0a65aad24225\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329861 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dea31f18-7796-48e2-af32-a185d3221a4a-srv-cert\") pod \"catalog-operator-68c6474976-h66px\" (UID: \"dea31f18-7796-48e2-af32-a185d3221a4a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329884 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-config\") pod \"route-controller-manager-6576b87f9c-s87cv\" (UID: \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329906 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv9gz\" (UniqueName: \"kubernetes.io/projected/329c2ef0-0dff-43fd-b3e3-0a65aad24225-kube-api-access-fv9gz\") pod \"authentication-operator-69f744f599-76rmp\" (UID: \"329c2ef0-0dff-43fd-b3e3-0a65aad24225\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329936 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15a080f-5182-49ab-bcb8-75d85654378a-config\") pod \"machine-api-operator-5694c8668f-lmmph\" (UID: \"f15a080f-5182-49ab-bcb8-75d85654378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329963 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttkrg\" (UniqueName: \"kubernetes.io/projected/d2534d4c-181b-45a2-8fec-118b7f17d296-kube-api-access-ttkrg\") pod \"controller-manager-879f6c89f-gvrb5\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330027 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-client-ca\") pod \"controller-manager-879f6c89f-gvrb5\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330045 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/74e7ac98-1603-4ed9-9306-735d271e5142-node-pullsecrets\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330063 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74e7ac98-1603-4ed9-9306-735d271e5142-serving-cert\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330068 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwk56" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330080 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234da26f-da47-42e2-9041-57f2c7ba819f-config\") pod \"machine-approver-56656f9798-wq66q\" (UID: \"234da26f-da47-42e2-9041-57f2c7ba819f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330101 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn9x2\" (UniqueName: \"kubernetes.io/projected/1cce3808-cf0e-430b-bf61-b86cee0baf44-kube-api-access-sn9x2\") pod \"router-default-5444994796-62c5v\" (UID: \"1cce3808-cf0e-430b-bf61-b86cee0baf44\") " pod="openshift-ingress/router-default-5444994796-62c5v" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330117 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2xws\" (UniqueName: \"kubernetes.io/projected/5ab5d977-272f-41d9-b787-ca6d00d1c668-kube-api-access-z2xws\") pod \"cluster-samples-operator-665b6dd947-jsmc9\" (UID: \"5ab5d977-272f-41d9-b787-ca6d00d1c668\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jsmc9" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330135 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-serving-cert\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330150 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cce3808-cf0e-430b-bf61-b86cee0baf44-service-ca-bundle\") pod 
\"router-default-5444994796-62c5v\" (UID: \"1cce3808-cf0e-430b-bf61-b86cee0baf44\") " pod="openshift-ingress/router-default-5444994796-62c5v" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330166 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74e7ac98-1603-4ed9-9306-735d271e5142-config\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330193 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1cce3808-cf0e-430b-bf61-b86cee0baf44-stats-auth\") pod \"router-default-5444994796-62c5v\" (UID: \"1cce3808-cf0e-430b-bf61-b86cee0baf44\") " pod="openshift-ingress/router-default-5444994796-62c5v" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330209 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/74e7ac98-1603-4ed9-9306-735d271e5142-etcd-client\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330229 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1b585b7-5543-4705-9167-d53bfc8c1f8d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5fcjl\" (UID: \"e1b585b7-5543-4705-9167-d53bfc8c1f8d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fcjl" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330252 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7jkf\" (UniqueName: \"kubernetes.io/projected/dea31f18-7796-48e2-af32-a185d3221a4a-kube-api-access-g7jkf\") pod \"catalog-operator-68c6474976-h66px\" (UID: \"dea31f18-7796-48e2-af32-a185d3221a4a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330275 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/74e7ac98-1603-4ed9-9306-735d271e5142-audit\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330295 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kktc4\" (UniqueName: \"kubernetes.io/projected/74e7ac98-1603-4ed9-9306-735d271e5142-kube-api-access-kktc4\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330315 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-serving-cert\") pod \"route-controller-manager-6576b87f9c-s87cv\" (UID: \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330335 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/329c2ef0-0dff-43fd-b3e3-0a65aad24225-service-ca-bundle\") pod \"authentication-operator-69f744f599-76rmp\" (UID: \"329c2ef0-0dff-43fd-b3e3-0a65aad24225\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330352 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f15a080f-5182-49ab-bcb8-75d85654378a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lmmph\" (UID: \"f15a080f-5182-49ab-bcb8-75d85654378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330369 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/74e7ac98-1603-4ed9-9306-735d271e5142-image-import-ca\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330387 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1f4787a9-c74e-4dc3-b90a-0ff81584f890-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hj428\" (UID: \"1f4787a9-c74e-4dc3-b90a-0ff81584f890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hj428" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330407 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m27kt\" (UniqueName: \"kubernetes.io/projected/f15a080f-5182-49ab-bcb8-75d85654378a-kube-api-access-m27kt\") pod \"machine-api-operator-5694c8668f-lmmph\" (UID: \"f15a080f-5182-49ab-bcb8-75d85654378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330426 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-audit-policies\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330442 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/234da26f-da47-42e2-9041-57f2c7ba819f-machine-approver-tls\") pod \"machine-approver-56656f9798-wq66q\" (UID: \"234da26f-da47-42e2-9041-57f2c7ba819f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330457 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-config\") pod \"controller-manager-879f6c89f-gvrb5\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:41 crc 
kubenswrapper[4945]: I0108 23:17:41.330473 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f15a080f-5182-49ab-bcb8-75d85654378a-images\") pod \"machine-api-operator-5694c8668f-lmmph\" (UID: \"f15a080f-5182-49ab-bcb8-75d85654378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330489 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1cce3808-cf0e-430b-bf61-b86cee0baf44-default-certificate\") pod \"router-default-5444994796-62c5v\" (UID: \"1cce3808-cf0e-430b-bf61-b86cee0baf44\") " pod="openshift-ingress/router-default-5444994796-62c5v" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330506 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-etcd-client\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330523 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-encryption-config\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330540 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/329c2ef0-0dff-43fd-b3e3-0a65aad24225-serving-cert\") pod \"authentication-operator-69f744f599-76rmp\" (UID: \"329c2ef0-0dff-43fd-b3e3-0a65aad24225\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330556 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/234da26f-da47-42e2-9041-57f2c7ba819f-auth-proxy-config\") pod \"machine-approver-56656f9798-wq66q\" (UID: \"234da26f-da47-42e2-9041-57f2c7ba819f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330573 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1cce3808-cf0e-430b-bf61-b86cee0baf44-metrics-certs\") pod \"router-default-5444994796-62c5v\" (UID: \"1cce3808-cf0e-430b-bf61-b86cee0baf44\") " pod="openshift-ingress/router-default-5444994796-62c5v" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330593 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/74e7ac98-1603-4ed9-9306-735d271e5142-etcd-serving-ca\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330609 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/5ab5d977-272f-41d9-b787-ca6d00d1c668-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jsmc9\" (UID: \"5ab5d977-272f-41d9-b787-ca6d00d1c668\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jsmc9" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330912 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330949 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.330989 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7hvvw"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.336711 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pzbw5"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.337168 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zw5q5"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.337515 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-pmfct"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.332196 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-client-ca\") pod \"route-controller-manager-6576b87f9c-s87cv\" (UID: \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.331420 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.329177 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.332346 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.332397 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.338270 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7hvvw" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.333925 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-config\") pod \"controller-manager-879f6c89f-gvrb5\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.339412 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/329c2ef0-0dff-43fd-b3e3-0a65aad24225-service-ca-bundle\") pod \"authentication-operator-69f744f599-76rmp\" (UID: \"329c2ef0-0dff-43fd-b3e3-0a65aad24225\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.343129 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/234da26f-da47-42e2-9041-57f2c7ba819f-auth-proxy-config\") pod \"machine-approver-56656f9798-wq66q\" (UID: \"234da26f-da47-42e2-9041-57f2c7ba819f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.343648 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234da26f-da47-42e2-9041-57f2c7ba819f-config\") pod \"machine-approver-56656f9798-wq66q\" (UID: \"234da26f-da47-42e2-9041-57f2c7ba819f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.343797 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15a080f-5182-49ab-bcb8-75d85654378a-config\") pod \"machine-api-operator-5694c8668f-lmmph\" (UID: \"f15a080f-5182-49ab-bcb8-75d85654378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.344137 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.344709 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-client-ca\") pod \"controller-manager-879f6c89f-gvrb5\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.345501 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gvrb5\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.345938 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-serving-cert\") pod \"route-controller-manager-6576b87f9c-s87cv\" (UID: \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.346173 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.347125 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.347202 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-audit-dir\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.347875 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/329c2ef0-0dff-43fd-b3e3-0a65aad24225-config\") pod \"authentication-operator-69f744f599-76rmp\" (UID: \"329c2ef0-0dff-43fd-b3e3-0a65aad24225\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.348456 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.348581 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/329c2ef0-0dff-43fd-b3e3-0a65aad24225-serving-cert\") pod \"authentication-operator-69f744f599-76rmp\" (UID: \"329c2ef0-0dff-43fd-b3e3-0a65aad24225\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.348625 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.348704 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.348822 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-config\") pod \"route-controller-manager-6576b87f9c-s87cv\" (UID: \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.348831 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.348910 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-encryption-config\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: 
\"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.348662 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.349429 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.349841 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.349954 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.350973 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f15a080f-5182-49ab-bcb8-75d85654378a-images\") pod \"machine-api-operator-5694c8668f-lmmph\" (UID: \"f15a080f-5182-49ab-bcb8-75d85654378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.356735 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f15a080f-5182-49ab-bcb8-75d85654378a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lmmph\" (UID: \"f15a080f-5182-49ab-bcb8-75d85654378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.359184 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.359328 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-audit-policies\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.361081 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.362554 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/329c2ef0-0dff-43fd-b3e3-0a65aad24225-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-76rmp\" (UID: \"329c2ef0-0dff-43fd-b3e3-0a65aad24225\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.362861 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.368161 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.368705 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zw5q5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.368739 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.369607 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/234da26f-da47-42e2-9041-57f2c7ba819f-machine-approver-tls\") pod \"machine-approver-56656f9798-wq66q\" (UID: \"234da26f-da47-42e2-9041-57f2c7ba819f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.370651 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.373375 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2534d4c-181b-45a2-8fec-118b7f17d296-serving-cert\") pod \"controller-manager-879f6c89f-gvrb5\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.374071 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.393492 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.395437 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-etcd-client\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.396862 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.401099 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.401264 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.401560 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.402545 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vxpmj"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.403613 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.405939 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpjzw\" (UniqueName: \"kubernetes.io/projected/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-kube-api-access-vpjzw\") pod \"route-controller-manager-6576b87f9c-s87cv\" (UID: 
\"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.408358 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.410332 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxdmz\" (UniqueName: \"kubernetes.io/projected/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-kube-api-access-mxdmz\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.413108 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.413405 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.413490 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.414140 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e95b2dd5-21a1-4f5f-9849-2b3ceba0d888-serving-cert\") pod \"apiserver-7bbb656c7d-8tv9n\" (UID: \"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.418818 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttkrg\" (UniqueName: \"kubernetes.io/projected/d2534d4c-181b-45a2-8fec-118b7f17d296-kube-api-access-ttkrg\") pod \"controller-manager-879f6c89f-gvrb5\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.419371 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m27kt\" (UniqueName: \"kubernetes.io/projected/f15a080f-5182-49ab-bcb8-75d85654378a-kube-api-access-m27kt\") pod \"machine-api-operator-5694c8668f-lmmph\" (UID: \"f15a080f-5182-49ab-bcb8-75d85654378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.419519 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckkk6\" (UniqueName: \"kubernetes.io/projected/234da26f-da47-42e2-9041-57f2c7ba819f-kube-api-access-ckkk6\") pod \"machine-approver-56656f9798-wq66q\" (UID: \"234da26f-da47-42e2-9041-57f2c7ba819f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.419767 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv9gz\" (UniqueName: \"kubernetes.io/projected/329c2ef0-0dff-43fd-b3e3-0a65aad24225-kube-api-access-fv9gz\") pod \"authentication-operator-69f744f599-76rmp\" (UID: \"329c2ef0-0dff-43fd-b3e3-0a65aad24225\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.422942 4945 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.424277 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.424603 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.427873 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.428838 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.429232 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.429259 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79m2f"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.429504 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.429782 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qn985"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.429946 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79m2f" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.430224 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.430317 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.431446 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lnlw\" (UniqueName: \"kubernetes.io/projected/1f4787a9-c74e-4dc3-b90a-0ff81584f890-kube-api-access-8lnlw\") pod \"openshift-config-operator-7777fb866f-hj428\" (UID: \"1f4787a9-c74e-4dc3-b90a-0ff81584f890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hj428" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.431493 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/74e7ac98-1603-4ed9-9306-735d271e5142-encryption-config\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.431553 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dea31f18-7796-48e2-af32-a185d3221a4a-srv-cert\") pod \"catalog-operator-68c6474976-h66px\" (UID: \"dea31f18-7796-48e2-af32-a185d3221a4a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.431597 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/74e7ac98-1603-4ed9-9306-735d271e5142-node-pullsecrets\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.431631 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74e7ac98-1603-4ed9-9306-735d271e5142-serving-cert\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.431659 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2xws\" (UniqueName: \"kubernetes.io/projected/5ab5d977-272f-41d9-b787-ca6d00d1c668-kube-api-access-z2xws\") pod \"cluster-samples-operator-665b6dd947-jsmc9\" (UID: \"5ab5d977-272f-41d9-b787-ca6d00d1c668\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jsmc9" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.431688 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn9x2\" (UniqueName: \"kubernetes.io/projected/1cce3808-cf0e-430b-bf61-b86cee0baf44-kube-api-access-sn9x2\") pod \"router-default-5444994796-62c5v\" (UID: \"1cce3808-cf0e-430b-bf61-b86cee0baf44\") " pod="openshift-ingress/router-default-5444994796-62c5v" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.431714 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74e7ac98-1603-4ed9-9306-735d271e5142-config\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.431739 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/1cce3808-cf0e-430b-bf61-b86cee0baf44-service-ca-bundle\") pod \"router-default-5444994796-62c5v\" (UID: \"1cce3808-cf0e-430b-bf61-b86cee0baf44\") " pod="openshift-ingress/router-default-5444994796-62c5v" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.431863 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/74e7ac98-1603-4ed9-9306-735d271e5142-node-pullsecrets\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.431635 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.432215 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1cce3808-cf0e-430b-bf61-b86cee0baf44-stats-auth\") pod \"router-default-5444994796-62c5v\" (UID: \"1cce3808-cf0e-430b-bf61-b86cee0baf44\") " pod="openshift-ingress/router-default-5444994796-62c5v" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.432241 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/74e7ac98-1603-4ed9-9306-735d271e5142-etcd-client\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.432262 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/74e7ac98-1603-4ed9-9306-735d271e5142-audit\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.433258 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r"] Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.432279 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kktc4\" (UniqueName: \"kubernetes.io/projected/74e7ac98-1603-4ed9-9306-735d271e5142-kube-api-access-kktc4\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.433498 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1b585b7-5543-4705-9167-d53bfc8c1f8d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5fcjl\" (UID: \"e1b585b7-5543-4705-9167-d53bfc8c1f8d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fcjl" Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.433522 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7jkf\" (UniqueName: \"kubernetes.io/projected/dea31f18-7796-48e2-af32-a185d3221a4a-kube-api-access-g7jkf\") pod \"catalog-operator-68c6474976-h66px\" (UID: \"dea31f18-7796-48e2-af32-a185d3221a4a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px" Jan 08 23:17:41 crc 
kubenswrapper[4945]: I0108 23:17:41.434145 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/74e7ac98-1603-4ed9-9306-735d271e5142-image-import-ca\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.434173 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1f4787a9-c74e-4dc3-b90a-0ff81584f890-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hj428\" (UID: \"1f4787a9-c74e-4dc3-b90a-0ff81584f890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hj428"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.434198 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1cce3808-cf0e-430b-bf61-b86cee0baf44-default-certificate\") pod \"router-default-5444994796-62c5v\" (UID: \"1cce3808-cf0e-430b-bf61-b86cee0baf44\") " pod="openshift-ingress/router-default-5444994796-62c5v"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.434221 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/74e7ac98-1603-4ed9-9306-735d271e5142-etcd-serving-ca\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.434242 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ab5d977-272f-41d9-b787-ca6d00d1c668-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jsmc9\" (UID: \"5ab5d977-272f-41d9-b787-ca6d00d1c668\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jsmc9"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.434265 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1cce3808-cf0e-430b-bf61-b86cee0baf44-metrics-certs\") pod \"router-default-5444994796-62c5v\" (UID: \"1cce3808-cf0e-430b-bf61-b86cee0baf44\") " pod="openshift-ingress/router-default-5444994796-62c5v"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.434339 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dea31f18-7796-48e2-af32-a185d3221a4a-profile-collector-cert\") pod \"catalog-operator-68c6474976-h66px\" (UID: \"dea31f18-7796-48e2-af32-a185d3221a4a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.434363 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74e7ac98-1603-4ed9-9306-735d271e5142-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.434384 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74e7ac98-1603-4ed9-9306-735d271e5142-audit-dir\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.434408 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b65nb\" (UniqueName: \"kubernetes.io/projected/e1b585b7-5543-4705-9167-d53bfc8c1f8d-kube-api-access-b65nb\") pod \"package-server-manager-789f6589d5-5fcjl\" (UID: \"e1b585b7-5543-4705-9167-d53bfc8c1f8d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fcjl"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.434435 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f4787a9-c74e-4dc3-b90a-0ff81584f890-serving-cert\") pod \"openshift-config-operator-7777fb866f-hj428\" (UID: \"1f4787a9-c74e-4dc3-b90a-0ff81584f890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hj428"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.437286 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/74e7ac98-1603-4ed9-9306-735d271e5142-etcd-serving-ca\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.433666 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74e7ac98-1603-4ed9-9306-735d271e5142-config\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.438119 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/74e7ac98-1603-4ed9-9306-735d271e5142-audit\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.438383 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/74e7ac98-1603-4ed9-9306-735d271e5142-image-import-ca\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.438531 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/dea31f18-7796-48e2-af32-a185d3221a4a-srv-cert\") pod \"catalog-operator-68c6474976-h66px\" (UID: \"dea31f18-7796-48e2-af32-a185d3221a4a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.438762 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1f4787a9-c74e-4dc3-b90a-0ff81584f890-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hj428\" (UID: \"1f4787a9-c74e-4dc3-b90a-0ff81584f890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hj428"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.438854 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74e7ac98-1603-4ed9-9306-735d271e5142-serving-cert\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.439133 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cce3808-cf0e-430b-bf61-b86cee0baf44-service-ca-bundle\") pod \"router-default-5444994796-62c5v\" (UID: \"1cce3808-cf0e-430b-bf61-b86cee0baf44\") " pod="openshift-ingress/router-default-5444994796-62c5v"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.439703 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1b585b7-5543-4705-9167-d53bfc8c1f8d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5fcjl\" (UID: \"e1b585b7-5543-4705-9167-d53bfc8c1f8d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fcjl"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.440043 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74e7ac98-1603-4ed9-9306-735d271e5142-audit-dir\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.440410 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f4787a9-c74e-4dc3-b90a-0ff81584f890-serving-cert\") pod \"openshift-config-operator-7777fb866f-hj428\" (UID: \"1f4787a9-c74e-4dc3-b90a-0ff81584f890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hj428"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.441076 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74e7ac98-1603-4ed9-9306-735d271e5142-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.441459 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gcfhk"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.442937 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cw648"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.445141 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.445929 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/74e7ac98-1603-4ed9-9306-735d271e5142-etcd-client\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.446131 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.446157 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9zgl9"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.446510 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-klv9q"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.446855 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jkvh2"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.448133 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hc8bs"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.448498 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.448705 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hc8bs"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.448839 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.448923 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cw648"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.448970 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.449047 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gcfhk"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.449144 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.449186 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9zgl9"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.449335 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-klv9q"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.449506 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jkvh2"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.449666 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8ldnl"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.449798 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.451584 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kkxxb"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.451751 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8ldnl"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.452070 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1cce3808-cf0e-430b-bf61-b86cee0baf44-default-certificate\") pod \"router-default-5444994796-62c5v\" (UID: \"1cce3808-cf0e-430b-bf61-b86cee0baf44\") " pod="openshift-ingress/router-default-5444994796-62c5v"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.452166 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1cce3808-cf0e-430b-bf61-b86cee0baf44-stats-auth\") pod \"router-default-5444994796-62c5v\" (UID: \"1cce3808-cf0e-430b-bf61-b86cee0baf44\") " pod="openshift-ingress/router-default-5444994796-62c5v"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.452311 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/74e7ac98-1603-4ed9-9306-735d271e5142-encryption-config\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.452476 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.452504 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lmmph"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.452519 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j8vl9"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.452532 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5d6z8"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.452498 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ab5d977-272f-41d9-b787-ca6d00d1c668-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jsmc9\" (UID: \"5ab5d977-272f-41d9-b787-ca6d00d1c668\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jsmc9"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.452617 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkxxb"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.452736 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/dea31f18-7796-48e2-af32-a185d3221a4a-profile-collector-cert\") pod \"catalog-operator-68c6474976-h66px\" (UID: \"dea31f18-7796-48e2-af32-a185d3221a4a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.452805 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-n2sv5"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.453972 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.455188 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-76rmp"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.455937 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jsmc9"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.457845 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.459172 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gvrb5"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.459239 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwk56"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.459816 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5dvr4"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.460732 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5dvr4"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.460983 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7hvvw"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.463948 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.464137 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1cce3808-cf0e-430b-bf61-b86cee0baf44-metrics-certs\") pod \"router-default-5444994796-62c5v\" (UID: \"1cce3808-cf0e-430b-bf61-b86cee0baf44\") " pod="openshift-ingress/router-default-5444994796-62c5v"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.464216 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ppfqk"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.464285 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vxpmj"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.465540 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zw5q5"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.467425 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pzbw5"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.467888 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79m2f"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.468912 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hj428"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.472915 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fcjl"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.474177 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jkvh2"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.475480 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.480588 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.490455 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hc8bs"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.493185 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8ldnl"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.494953 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-79krd"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.497088 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.497199 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-79krd"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.498813 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9zgl9"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.501746 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.502873 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-klv9q"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.503984 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.505291 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gcfhk"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.506365 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.506493 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.507362 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kkxxb"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.508528 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5dvr4"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.509609 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-pmfct"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.510768 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qn985"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.511950 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.513346 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cw648"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.516775 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.517487 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.517943 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-79krd"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.520088 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-btrhh"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.520866 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-btrhh"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.521272 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qg7b7"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.522819 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qg7b7"]
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.523075 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qg7b7"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.536924 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.550365 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.556613 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.564533 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.578503 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.595900 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.598908 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.617143 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.629946 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.638460 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.642627 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.656664 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.676640 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.684364 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" event={"ID":"234da26f-da47-42e2-9041-57f2c7ba819f","Type":"ContainerStarted","Data":"2e4fe50250083b2ae1def457be3df34fa2a5ea8400341f3fed406f31c7759faf"}
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.696519 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.716197 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.751759 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.756516 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.780479 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.796461 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.858251 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.882710 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.897157 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.916708 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.935900 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.955823 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.976905 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.996529 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.999601 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 08 23:17:41 crc kubenswrapper[4945]: I0108 23:17:41.999885 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:41.999940 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.016569 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.036886 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.056610 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.075557 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.096252 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.097808 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-bound-sa-token\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.097866 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/884cb1ac-efad-4ead-b31f-7301081aa310-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.098163 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/884cb1ac-efad-4ead-b31f-7301081aa310-trusted-ca\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.098349 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-registry-tls\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.098377 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/884cb1ac-efad-4ead-b31f-7301081aa310-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.098426 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkfm2\" (UniqueName: \"kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-kube-api-access-tkfm2\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.098447 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/884cb1ac-efad-4ead-b31f-7301081aa310-registry-certificates\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.098521 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: E0108 23:17:42.098974 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:42.598955106 +0000 UTC m=+132.910114052 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.121097 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gvrb5"]
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.122687 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv"]
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.125021 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-76rmp"]
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.125603 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.126292 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lmmph"]
Jan 08 23:17:42 crc kubenswrapper[4945]: W0108 23:17:42.127129 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf15a080f_5182_49ab_bcb8_75d85654378a.slice/crio-5c16101b0fed5c97758094e40f1b66eae31b1005a3f680e202c266a2e75bda6d WatchSource:0}: Error finding container 5c16101b0fed5c97758094e40f1b66eae31b1005a3f680e202c266a2e75bda6d: Status 404 returned error can't find the container with id 5c16101b0fed5c97758094e40f1b66eae31b1005a3f680e202c266a2e75bda6d
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.128304 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n"]
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.136471 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.157466 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.176360 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.199052 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:42 crc kubenswrapper[4945]: E0108 23:17:42.199376 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:42.699175368 +0000 UTC m=+133.010334314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.199407 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-bound-sa-token\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.199745 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/884cb1ac-efad-4ead-b31f-7301081aa310-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.200145 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9ttz\" (UniqueName: \"kubernetes.io/projected/e41e28f5-d0c1-4a39-ae5e-b85d00c488a4-kube-api-access-t9ttz\") pod \"openshift-apiserver-operator-796bbdcf4f-5d6z8\" (UID: \"e41e28f5-d0c1-4a39-ae5e-b85d00c488a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5d6z8"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.200176 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/884cb1ac-efad-4ead-b31f-7301081aa310-trusted-ca\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.200282 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e41e28f5-d0c1-4a39-ae5e-b85d00c488a4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5d6z8\" (UID: \"e41e28f5-d0c1-4a39-ae5e-b85d00c488a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5d6z8"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.200372 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jbcf\" (UniqueName: \"kubernetes.io/projected/297cbd4b-37f7-4ab3-82ea-da1872a05ef1-kube-api-access-9jbcf\") pod \"downloads-7954f5f757-n2sv5\" (UID: \"297cbd4b-37f7-4ab3-82ea-da1872a05ef1\") " pod="openshift-console/downloads-7954f5f757-n2sv5"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.200440 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-registry-tls\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.200497 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/884cb1ac-efad-4ead-b31f-7301081aa310-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.200628 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkfm2\" (UniqueName: \"kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-kube-api-access-tkfm2\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.200648 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/884cb1ac-efad-4ead-b31f-7301081aa310-registry-certificates\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.200772 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41e28f5-d0c1-4a39-ae5e-b85d00c488a4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5d6z8\" (UID: \"e41e28f5-d0c1-4a39-ae5e-b85d00c488a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5d6z8"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.200817 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.200776 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/884cb1ac-efad-4ead-b31f-7301081aa310-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.202434 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/884cb1ac-efad-4ead-b31f-7301081aa310-trusted-ca\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: E0108 23:17:42.205115 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:42.705083793 +0000 UTC m=+133.016242739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.209497 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/884cb1ac-efad-4ead-b31f-7301081aa310-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.212440 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/884cb1ac-efad-4ead-b31f-7301081aa310-registry-certificates\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.213387 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-registry-tls\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.218186 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.224384 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.236860 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.256293 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.276024 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.297126 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.303924 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:42 crc kubenswrapper[4945]: E0108 23:17:42.304104 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:42.804081763 +0000 UTC m=+133.115240709 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304238 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93e10fcb-3cb5-454a-bcd1-1eae918e0601-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vxpmj\" (UID: \"93e10fcb-3cb5-454a-bcd1-1eae918e0601\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304297 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/099b4860-bccd-462a-8a0e-f28604353408-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9zgl9\" (UID: \"099b4860-bccd-462a-8a0e-f28604353408\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9zgl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304347 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304464 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6391ef36-ef23-42f8-ae03-7610ede1f819-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hc8bs\" (UID: \"6391ef36-ef23-42f8-ae03-7610ede1f819\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hc8bs"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304513 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/490f6a3a-21e2-4264-8a92-75202ba3db64-console-serving-cert\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304534 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-etcd-ca\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304559 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b92f03d-b5af-485d-b1a6-6e9548b7c8ba-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-klv9q\" (UID: \"7b92f03d-b5af-485d-b1a6-6e9548b7c8ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-klv9q"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304577 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e486d4f3-46ec-4d4f-9c66-adc95459c76a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dgtql\" (UID: \"e486d4f3-46ec-4d4f-9c66-adc95459c76a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304617 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9ttz\" (UniqueName: \"kubernetes.io/projected/e41e28f5-d0c1-4a39-ae5e-b85d00c488a4-kube-api-access-t9ttz\") pod \"openshift-apiserver-operator-796bbdcf4f-5d6z8\" (UID: \"e41e28f5-d0c1-4a39-ae5e-b85d00c488a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5d6z8"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304637 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9c69b35b-9333-43fd-8af9-1a046fa97995-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8ldnl\" (UID: \"9c69b35b-9333-43fd-8af9-1a046fa97995\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8ldnl"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304656 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099b4860-bccd-462a-8a0e-f28604353408-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9zgl9\" (UID: \"099b4860-bccd-462a-8a0e-f28604353408\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9zgl9"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304674 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc9d9307-1c9b-4162-9ffb-2493af6c4b54-cert\") pod \"ingress-canary-5dvr4\" (UID: \"fc9d9307-1c9b-4162-9ffb-2493af6c4b54\") " pod="openshift-ingress-canary/ingress-canary-5dvr4"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304720 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93e10fcb-3cb5-454a-bcd1-1eae918e0601-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vxpmj\" (UID: \"93e10fcb-3cb5-454a-bcd1-1eae918e0601\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304744 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-mountpoint-dir\") pod \"csi-hostpathplugin-qg7b7\" (UID: \"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304784 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1489254c-ff30-49b3-b896-b823f1ae8559-serving-cert\") pod \"console-operator-58897d9998-zw5q5\" (UID: \"1489254c-ff30-49b3-b896-b823f1ae8559\") " pod="openshift-console-operator/console-operator-58897d9998-zw5q5"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304816 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304841 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b92f03d-b5af-485d-b1a6-6e9548b7c8ba-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-klv9q\" (UID: \"7b92f03d-b5af-485d-b1a6-6e9548b7c8ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-klv9q"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304859 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb9903e-6625-40a8-8e02-16b8175ff9bf-config\") pod \"service-ca-operator-777779d784-gcfhk\" (UID: \"2cb9903e-6625-40a8-8e02-16b8175ff9bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gcfhk"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304890 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d-metrics-tls\") pod \"dns-default-79krd\" (UID: \"de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d\") " pod="openshift-dns/dns-default-79krd"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304908 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kvsx\" (UniqueName: \"kubernetes.io/projected/fc9d9307-1c9b-4162-9ffb-2493af6c4b54-kube-api-access-9kvsx\") pod \"ingress-canary-5dvr4\" (UID: \"fc9d9307-1c9b-4162-9ffb-2493af6c4b54\") " pod="openshift-ingress-canary/ingress-canary-5dvr4"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304925 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f910f213-8fbe-44fe-888a-aea783dcd0ec-signing-cabundle\") pod \"service-ca-9c57cc56f-7hvvw\" (UID: \"f910f213-8fbe-44fe-888a-aea783dcd0ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-7hvvw"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304942 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/010d6ddb-7351-4f8b-9d3c-f74436a4bb0d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwk56\" (UID: \"010d6ddb-7351-4f8b-9d3c-f74436a4bb0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwk56"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.304961 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-audit-policies\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.305011 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jbcf\" (UniqueName: \"kubernetes.io/projected/297cbd4b-37f7-4ab3-82ea-da1872a05ef1-kube-api-access-9jbcf\") pod \"downloads-7954f5f757-n2sv5\" (UID: \"297cbd4b-37f7-4ab3-82ea-da1872a05ef1\") " pod="openshift-console/downloads-7954f5f757-n2sv5"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.305028 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1489254c-ff30-49b3-b896-b823f1ae8559-config\") pod \"console-operator-58897d9998-zw5q5\" (UID: \"1489254c-ff30-49b3-b896-b823f1ae8559\") " pod="openshift-console-operator/console-operator-58897d9998-zw5q5"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.305057 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt68q\" (UniqueName: \"kubernetes.io/projected/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-kube-api-access-qt68q\") pod \"collect-profiles-29465235-sg2kt\" (UID: \"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.305073 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfefef62-2fe7-47c6-9a57-dadd3fd6705d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-cw648\" (UID: \"dfefef62-2fe7-47c6-9a57-dadd3fd6705d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cw648"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.305089 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-plugins-dir\") pod \"csi-hostpathplugin-qg7b7\" (UID: \"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.305122 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/662bb234-dff9-4a44-9432-c2f864195ce0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-79m2f\" (UID: \"662bb234-dff9-4a44-9432-c2f864195ce0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79m2f"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.305138 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fjdtg\" (UID: \"6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.305158 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p7hf\" (UniqueName: \"kubernetes.io/projected/f910f213-8fbe-44fe-888a-aea783dcd0ec-kube-api-access-7p7hf\") pod \"service-ca-9c57cc56f-7hvvw\" (UID: \"f910f213-8fbe-44fe-888a-aea783dcd0ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-7hvvw"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.305174 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e486d4f3-46ec-4d4f-9c66-adc95459c76a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dgtql\" (UID: \"e486d4f3-46ec-4d4f-9c66-adc95459c76a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.305188 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdgb9\" (UniqueName: \"kubernetes.io/projected/93e10fcb-3cb5-454a-bcd1-1eae918e0601-kube-api-access-hdgb9\") pod \"marketplace-operator-79b997595-vxpmj\" (UID: \"93e10fcb-3cb5-454a-bcd1-1eae918e0601\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.305204 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.306656 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-etcd-client\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.306731 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb8wn\" (UniqueName: \"kubernetes.io/projected/def1ce3f-02ba-4056-80f8-b0ba00fa64b2-kube-api-access-cb8wn\") pod \"ingress-operator-5b745b69d9-rlddw\" (UID: \"def1ce3f-02ba-4056-80f8-b0ba00fa64b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.306763 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsdw8\" (UniqueName: \"kubernetes.io/projected/490f6a3a-21e2-4264-8a92-75202ba3db64-kube-api-access-zsdw8\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.306795 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb9903e-6625-40a8-8e02-16b8175ff9bf-serving-cert\") pod \"service-ca-operator-777779d784-gcfhk\" (UID: \"2cb9903e-6625-40a8-8e02-16b8175ff9bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gcfhk"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.306812 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvdcm\" (UniqueName: \"kubernetes.io/projected/010d6ddb-7351-4f8b-9d3c-f74436a4bb0d-kube-api-access-cvdcm\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwk56\" (UID: \"010d6ddb-7351-4f8b-9d3c-f74436a4bb0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwk56"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.306831 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv8w5\" (UniqueName: \"kubernetes.io/projected/2a8d3a02-1b4e-4258-91d0-1d1b4caeb852-kube-api-access-qv8w5\") pod \"packageserver-d55dfcdfc-glvqg\" (UID: \"2a8d3a02-1b4e-4258-91d0-1d1b4caeb852\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.306849 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e486d4f3-46ec-4d4f-9c66-adc95459c76a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dgtql\" (UID: \"e486d4f3-46ec-4d4f-9c66-adc95459c76a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.306865 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prk48\" (UniqueName: \"kubernetes.io/projected/e486d4f3-46ec-4d4f-9c66-adc95459c76a-kube-api-access-prk48\") pod \"cluster-image-registry-operator-dc59b4c8b-dgtql\" (UID: \"e486d4f3-46ec-4d4f-9c66-adc95459c76a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.306880 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-config-volume\") pod \"collect-profiles-29465235-sg2kt\" (UID: \"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.306894 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-serving-cert\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.306918 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmn2g\" (UniqueName: \"kubernetes.io/projected/b0c68805-16fc-47d5-8ca8-7ebd2e59fcb9-kube-api-access-cmn2g\") pod \"dns-operator-744455d44c-jkvh2\" (UID: \"b0c68805-16fc-47d5-8ca8-7ebd2e59fcb9\") " pod="openshift-dns-operator/dns-operator-744455d44c-jkvh2"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.306942 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2blvl\" (UniqueName: \"kubernetes.io/projected/a05cb0ce-c6e4-4ef9-b32e-8910443fc316-kube-api-access-2blvl\") pod \"olm-operator-6b444d44fb-kd4xr\" (UID: \"a05cb0ce-c6e4-4ef9-b32e-8910443fc316\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.306965 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6391ef36-ef23-42f8-ae03-7610ede1f819-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hc8bs\" (UID: \"6391ef36-ef23-42f8-ae03-7610ede1f819\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hc8bs"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.306980 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wrwg\" (UniqueName: \"kubernetes.io/projected/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-kube-api-access-8wrwg\") pod \"csi-hostpathplugin-qg7b7\" (UID: \"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.307244 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d-config-volume\") pod \"dns-default-79krd\" (UID: \"de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d\") " pod="openshift-dns/dns-default-79krd"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.307267 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-278wr\" (UniqueName: \"kubernetes.io/projected/de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d-kube-api-access-278wr\") pod \"dns-default-79krd\" (UID: \"de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d\") " pod="openshift-dns/dns-default-79krd"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.307285 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g65pf\" (UniqueName: \"kubernetes.io/projected/662bb234-dff9-4a44-9432-c2f864195ce0-kube-api-access-g65pf\") pod \"control-plane-machine-set-operator-78cbb6b69f-79m2f\" (UID: \"662bb234-dff9-4a44-9432-c2f864195ce0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79m2f"
Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.307307 4945 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6391ef36-ef23-42f8-ae03-7610ede1f819-config\") pod \"kube-apiserver-operator-766d6c64bb-hc8bs\" (UID: \"6391ef36-ef23-42f8-ae03-7610ede1f819\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hc8bs" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.307327 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-oauth-serving-cert\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.307342 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-console-config\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.307367 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.307568 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/010d6ddb-7351-4f8b-9d3c-f74436a4bb0d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwk56\" (UID: \"010d6ddb-7351-4f8b-9d3c-f74436a4bb0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwk56" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.307596 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.307762 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.307783 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0677ae19-e425-485b-9206-98c9ad11aea8-audit-dir\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.308564 4945 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a05cb0ce-c6e4-4ef9-b32e-8910443fc316-profile-collector-cert\") pod \"olm-operator-6b444d44fb-kd4xr\" (UID: \"a05cb0ce-c6e4-4ef9-b32e-8910443fc316\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.308932 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f910f213-8fbe-44fe-888a-aea783dcd0ec-signing-key\") pod \"service-ca-9c57cc56f-7hvvw\" (UID: \"f910f213-8fbe-44fe-888a-aea783dcd0ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-7hvvw" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.308956 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-csi-data-dir\") pod \"csi-hostpathplugin-qg7b7\" (UID: \"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309044 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-config\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309063 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e41e28f5-d0c1-4a39-ae5e-b85d00c488a4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5d6z8\" (UID: \"e41e28f5-d0c1-4a39-ae5e-b85d00c488a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5d6z8" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309092 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/def1ce3f-02ba-4056-80f8-b0ba00fa64b2-metrics-tls\") pod \"ingress-operator-5b745b69d9-rlddw\" (UID: \"def1ce3f-02ba-4056-80f8-b0ba00fa64b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309107 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvht8\" (UniqueName: \"kubernetes.io/projected/2cb9903e-6625-40a8-8e02-16b8175ff9bf-kube-api-access-zvht8\") pod \"service-ca-operator-777779d784-gcfhk\" (UID: \"2cb9903e-6625-40a8-8e02-16b8175ff9bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gcfhk" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309127 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hfxk\" (UniqueName: \"kubernetes.io/projected/9ea7f797-df27-4912-83e4-efe654ea3a2a-kube-api-access-7hfxk\") pod \"machine-config-server-btrhh\" (UID: \"9ea7f797-df27-4912-83e4-efe654ea3a2a\") " pod="openshift-machine-config-operator/machine-config-server-btrhh" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309143 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-registration-dir\") pod \"csi-hostpathplugin-qg7b7\" (UID: \"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309186 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/def1ce3f-02ba-4056-80f8-b0ba00fa64b2-trusted-ca\") pod \"ingress-operator-5b745b69d9-rlddw\" (UID: \"def1ce3f-02ba-4056-80f8-b0ba00fa64b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309202 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2a8d3a02-1b4e-4258-91d0-1d1b4caeb852-apiservice-cert\") pod \"packageserver-d55dfcdfc-glvqg\" (UID: \"2a8d3a02-1b4e-4258-91d0-1d1b4caeb852\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309365 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9ea7f797-df27-4912-83e4-efe654ea3a2a-node-bootstrap-token\") pod \"machine-config-server-btrhh\" (UID: \"9ea7f797-df27-4912-83e4-efe654ea3a2a\") " pod="openshift-machine-config-operator/machine-config-server-btrhh" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309398 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309426 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309487 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309713 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/099b4860-bccd-462a-8a0e-f28604353408-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9zgl9\" (UID: \"099b4860-bccd-462a-8a0e-f28604353408\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9zgl9" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309754 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-service-ca\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309804 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9ea7f797-df27-4912-83e4-efe654ea3a2a-certs\") pod \"machine-config-server-btrhh\" (UID: \"9ea7f797-df27-4912-83e4-efe654ea3a2a\") " pod="openshift-machine-config-operator/machine-config-server-btrhh" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309826 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a8d494a-4ed4-4707-9354-98912697989d-proxy-tls\") pod \"machine-config-controller-84d6567774-bwk8r\" (UID: \"8a8d494a-4ed4-4707-9354-98912697989d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309854 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a8d494a-4ed4-4707-9354-98912697989d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bwk8r\" (UID: \"8a8d494a-4ed4-4707-9354-98912697989d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.309882 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-trusted-ca-bundle\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.310051 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw2k2\" (UniqueName: \"kubernetes.io/projected/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-kube-api-access-qw2k2\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.310077 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/490f6a3a-21e2-4264-8a92-75202ba3db64-console-oauth-config\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.310096 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5gqv\" (UniqueName: \"kubernetes.io/projected/1489254c-ff30-49b3-b896-b823f1ae8559-kube-api-access-l5gqv\") pod \"console-operator-58897d9998-zw5q5\" (UID: \"1489254c-ff30-49b3-b896-b823f1ae8559\") " pod="openshift-console-operator/console-operator-58897d9998-zw5q5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.310252 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b92f03d-b5af-485d-b1a6-6e9548b7c8ba-config\") pod 
\"kube-controller-manager-operator-78b949d7b-klv9q\" (UID: \"7b92f03d-b5af-485d-b1a6-6e9548b7c8ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-klv9q" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.310281 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b0c68805-16fc-47d5-8ca8-7ebd2e59fcb9-metrics-tls\") pod \"dns-operator-744455d44c-jkvh2\" (UID: \"b0c68805-16fc-47d5-8ca8-7ebd2e59fcb9\") " pod="openshift-dns-operator/dns-operator-744455d44c-jkvh2" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.310298 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.310434 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-secret-volume\") pod \"collect-profiles-29465235-sg2kt\" (UID: \"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.310479 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.310518 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-socket-dir\") pod \"csi-hostpathplugin-qg7b7\" (UID: \"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.310551 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-etcd-service-ca\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.310613 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxskg\" (UniqueName: \"kubernetes.io/projected/e6710298-7872-4178-b6b8-730c3eddb965-kube-api-access-lxskg\") pod \"migrator-59844c95c7-kkxxb\" (UID: \"e6710298-7872-4178-b6b8-730c3eddb965\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkxxb" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.310629 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/def1ce3f-02ba-4056-80f8-b0ba00fa64b2-bound-sa-token\") pod 
\"ingress-operator-5b745b69d9-rlddw\" (UID: \"def1ce3f-02ba-4056-80f8-b0ba00fa64b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.310666 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41e28f5-d0c1-4a39-ae5e-b85d00c488a4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5d6z8\" (UID: \"e41e28f5-d0c1-4a39-ae5e-b85d00c488a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5d6z8" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.310876 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.310958 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1489254c-ff30-49b3-b896-b823f1ae8559-trusted-ca\") pod \"console-operator-58897d9998-zw5q5\" (UID: \"1489254c-ff30-49b3-b896-b823f1ae8559\") " pod="openshift-console-operator/console-operator-58897d9998-zw5q5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.310977 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfefef62-2fe7-47c6-9a57-dadd3fd6705d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-cw648\" (UID: \"dfefef62-2fe7-47c6-9a57-dadd3fd6705d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cw648" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.311018 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2wsm\" (UniqueName: \"kubernetes.io/projected/dfefef62-2fe7-47c6-9a57-dadd3fd6705d-kube-api-access-q2wsm\") pod \"kube-storage-version-migrator-operator-b67b599dd-cw648\" (UID: \"dfefef62-2fe7-47c6-9a57-dadd3fd6705d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cw648" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.311034 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a05cb0ce-c6e4-4ef9-b32e-8910443fc316-srv-cert\") pod \"olm-operator-6b444d44fb-kd4xr\" (UID: \"a05cb0ce-c6e4-4ef9-b32e-8910443fc316\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.311051 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btzbj\" (UniqueName: \"kubernetes.io/projected/0677ae19-e425-485b-9206-98c9ad11aea8-kube-api-access-btzbj\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.311074 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ptf4\" (UniqueName: 
\"kubernetes.io/projected/6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d-kube-api-access-7ptf4\") pod \"machine-config-operator-74547568cd-fjdtg\" (UID: \"6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.311102 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2a8d3a02-1b4e-4258-91d0-1d1b4caeb852-webhook-cert\") pod \"packageserver-d55dfcdfc-glvqg\" (UID: \"2a8d3a02-1b4e-4258-91d0-1d1b4caeb852\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.311132 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw7ph\" (UniqueName: \"kubernetes.io/projected/8a8d494a-4ed4-4707-9354-98912697989d-kube-api-access-tw7ph\") pod \"machine-config-controller-84d6567774-bwk8r\" (UID: \"8a8d494a-4ed4-4707-9354-98912697989d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.311147 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d-proxy-tls\") pod \"machine-config-operator-74547568cd-fjdtg\" (UID: \"6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.311164 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgk7h\" (UniqueName: \"kubernetes.io/projected/9c69b35b-9333-43fd-8af9-1a046fa97995-kube-api-access-kgk7h\") pod \"multus-admission-controller-857f4d67dd-8ldnl\" (UID: \"9c69b35b-9333-43fd-8af9-1a046fa97995\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8ldnl" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.311180 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d-images\") pod \"machine-config-operator-74547568cd-fjdtg\" (UID: \"6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.311200 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2a8d3a02-1b4e-4258-91d0-1d1b4caeb852-tmpfs\") pod \"packageserver-d55dfcdfc-glvqg\" (UID: \"2a8d3a02-1b4e-4258-91d0-1d1b4caeb852\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.311377 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41e28f5-d0c1-4a39-ae5e-b85d00c488a4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5d6z8\" (UID: \"e41e28f5-d0c1-4a39-ae5e-b85d00c488a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5d6z8" Jan 08 23:17:42 crc kubenswrapper[4945]: E0108 23:17:42.311573 4945 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:42.811562439 +0000 UTC m=+133.122721385 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.312385 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e41e28f5-d0c1-4a39-ae5e-b85d00c488a4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5d6z8\" (UID: \"e41e28f5-d0c1-4a39-ae5e-b85d00c488a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5d6z8" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.316180 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.342888 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.356977 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.375421 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.396197 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.411884 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.411975 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb9903e-6625-40a8-8e02-16b8175ff9bf-serving-cert\") pod \"service-ca-operator-777779d784-gcfhk\" (UID: \"2cb9903e-6625-40a8-8e02-16b8175ff9bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gcfhk" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412021 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvdcm\" (UniqueName: \"kubernetes.io/projected/010d6ddb-7351-4f8b-9d3c-f74436a4bb0d-kube-api-access-cvdcm\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwk56\" (UID: \"010d6ddb-7351-4f8b-9d3c-f74436a4bb0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwk56" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412039 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv8w5\" (UniqueName: 
\"kubernetes.io/projected/2a8d3a02-1b4e-4258-91d0-1d1b4caeb852-kube-api-access-qv8w5\") pod \"packageserver-d55dfcdfc-glvqg\" (UID: \"2a8d3a02-1b4e-4258-91d0-1d1b4caeb852\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412055 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e486d4f3-46ec-4d4f-9c66-adc95459c76a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dgtql\" (UID: \"e486d4f3-46ec-4d4f-9c66-adc95459c76a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412069 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prk48\" (UniqueName: \"kubernetes.io/projected/e486d4f3-46ec-4d4f-9c66-adc95459c76a-kube-api-access-prk48\") pod \"cluster-image-registry-operator-dc59b4c8b-dgtql\" (UID: \"e486d4f3-46ec-4d4f-9c66-adc95459c76a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412085 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-config-volume\") pod \"collect-profiles-29465235-sg2kt\" (UID: \"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412108 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-serving-cert\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412124 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmn2g\" (UniqueName: \"kubernetes.io/projected/b0c68805-16fc-47d5-8ca8-7ebd2e59fcb9-kube-api-access-cmn2g\") pod \"dns-operator-744455d44c-jkvh2\" (UID: \"b0c68805-16fc-47d5-8ca8-7ebd2e59fcb9\") " pod="openshift-dns-operator/dns-operator-744455d44c-jkvh2" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412151 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2blvl\" (UniqueName: \"kubernetes.io/projected/a05cb0ce-c6e4-4ef9-b32e-8910443fc316-kube-api-access-2blvl\") pod \"olm-operator-6b444d44fb-kd4xr\" (UID: \"a05cb0ce-c6e4-4ef9-b32e-8910443fc316\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412167 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6391ef36-ef23-42f8-ae03-7610ede1f819-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hc8bs\" (UID: \"6391ef36-ef23-42f8-ae03-7610ede1f819\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hc8bs" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412185 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wrwg\" (UniqueName: \"kubernetes.io/projected/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-kube-api-access-8wrwg\") pod \"csi-hostpathplugin-qg7b7\" (UID: 
\"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412208 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d-config-volume\") pod \"dns-default-79krd\" (UID: \"de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d\") " pod="openshift-dns/dns-default-79krd" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412222 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-278wr\" (UniqueName: \"kubernetes.io/projected/de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d-kube-api-access-278wr\") pod \"dns-default-79krd\" (UID: \"de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d\") " pod="openshift-dns/dns-default-79krd" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412237 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g65pf\" (UniqueName: \"kubernetes.io/projected/662bb234-dff9-4a44-9432-c2f864195ce0-kube-api-access-g65pf\") pod \"control-plane-machine-set-operator-78cbb6b69f-79m2f\" (UID: \"662bb234-dff9-4a44-9432-c2f864195ce0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79m2f" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412254 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6391ef36-ef23-42f8-ae03-7610ede1f819-config\") pod \"kube-apiserver-operator-766d6c64bb-hc8bs\" (UID: \"6391ef36-ef23-42f8-ae03-7610ede1f819\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hc8bs" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412270 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-oauth-serving-cert\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412286 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-console-config\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412302 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412317 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/010d6ddb-7351-4f8b-9d3c-f74436a4bb0d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwk56\" (UID: \"010d6ddb-7351-4f8b-9d3c-f74436a4bb0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwk56" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412332 4945 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412345 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: E0108 23:17:42.412376 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:42.912355975 +0000 UTC m=+133.223514921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412421 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0677ae19-e425-485b-9206-98c9ad11aea8-audit-dir\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412454 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a05cb0ce-c6e4-4ef9-b32e-8910443fc316-profile-collector-cert\") pod \"olm-operator-6b444d44fb-kd4xr\" (UID: \"a05cb0ce-c6e4-4ef9-b32e-8910443fc316\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412479 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f910f213-8fbe-44fe-888a-aea783dcd0ec-signing-key\") pod \"service-ca-9c57cc56f-7hvvw\" (UID: \"f910f213-8fbe-44fe-888a-aea783dcd0ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-7hvvw" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412495 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-csi-data-dir\") pod \"csi-hostpathplugin-qg7b7\" (UID: \"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412515 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-config\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412544 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/def1ce3f-02ba-4056-80f8-b0ba00fa64b2-metrics-tls\") pod \"ingress-operator-5b745b69d9-rlddw\" (UID: \"def1ce3f-02ba-4056-80f8-b0ba00fa64b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412564 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvht8\" (UniqueName: \"kubernetes.io/projected/2cb9903e-6625-40a8-8e02-16b8175ff9bf-kube-api-access-zvht8\") pod \"service-ca-operator-777779d784-gcfhk\" (UID: \"2cb9903e-6625-40a8-8e02-16b8175ff9bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gcfhk" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412582 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hfxk\" (UniqueName: \"kubernetes.io/projected/9ea7f797-df27-4912-83e4-efe654ea3a2a-kube-api-access-7hfxk\") pod \"machine-config-server-btrhh\" (UID: \"9ea7f797-df27-4912-83e4-efe654ea3a2a\") " pod="openshift-machine-config-operator/machine-config-server-btrhh" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412599 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-registration-dir\") pod \"csi-hostpathplugin-qg7b7\" (UID: \"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412619 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/def1ce3f-02ba-4056-80f8-b0ba00fa64b2-trusted-ca\") pod \"ingress-operator-5b745b69d9-rlddw\" (UID: \"def1ce3f-02ba-4056-80f8-b0ba00fa64b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412636 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2a8d3a02-1b4e-4258-91d0-1d1b4caeb852-apiservice-cert\") pod \"packageserver-d55dfcdfc-glvqg\" (UID: \"2a8d3a02-1b4e-4258-91d0-1d1b4caeb852\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412654 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9ea7f797-df27-4912-83e4-efe654ea3a2a-node-bootstrap-token\") pod \"machine-config-server-btrhh\" (UID: \"9ea7f797-df27-4912-83e4-efe654ea3a2a\") " pod="openshift-machine-config-operator/machine-config-server-btrhh" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412669 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412684 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412705 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412726 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/099b4860-bccd-462a-8a0e-f28604353408-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9zgl9\" (UID: \"099b4860-bccd-462a-8a0e-f28604353408\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9zgl9" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412744 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-service-ca\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412760 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9ea7f797-df27-4912-83e4-efe654ea3a2a-certs\") pod \"machine-config-server-btrhh\" (UID: \"9ea7f797-df27-4912-83e4-efe654ea3a2a\") " pod="openshift-machine-config-operator/machine-config-server-btrhh" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412778 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a8d494a-4ed4-4707-9354-98912697989d-proxy-tls\") pod \"machine-config-controller-84d6567774-bwk8r\" (UID: \"8a8d494a-4ed4-4707-9354-98912697989d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412795 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a8d494a-4ed4-4707-9354-98912697989d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bwk8r\" (UID: \"8a8d494a-4ed4-4707-9354-98912697989d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412812 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/490f6a3a-21e2-4264-8a92-75202ba3db64-console-oauth-config\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412828 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-trusted-ca-bundle\") pod \"console-f9d7485db-pmfct\" (UID: 
\"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412845 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw2k2\" (UniqueName: \"kubernetes.io/projected/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-kube-api-access-qw2k2\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412898 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5gqv\" (UniqueName: \"kubernetes.io/projected/1489254c-ff30-49b3-b896-b823f1ae8559-kube-api-access-l5gqv\") pod \"console-operator-58897d9998-zw5q5\" (UID: \"1489254c-ff30-49b3-b896-b823f1ae8559\") " pod="openshift-console-operator/console-operator-58897d9998-zw5q5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412904 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0677ae19-e425-485b-9206-98c9ad11aea8-audit-dir\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412916 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b92f03d-b5af-485d-b1a6-6e9548b7c8ba-config\") pod \"kube-controller-manager-operator-78b949d7b-klv9q\" (UID: \"7b92f03d-b5af-485d-b1a6-6e9548b7c8ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-klv9q" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412941 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b0c68805-16fc-47d5-8ca8-7ebd2e59fcb9-metrics-tls\") pod \"dns-operator-744455d44c-jkvh2\" (UID: \"b0c68805-16fc-47d5-8ca8-7ebd2e59fcb9\") " pod="openshift-dns-operator/dns-operator-744455d44c-jkvh2" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.412975 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413002 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-secret-volume\") pod \"collect-profiles-29465235-sg2kt\" (UID: \"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413018 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413036 
4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-socket-dir\") pod \"csi-hostpathplugin-qg7b7\" (UID: \"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413050 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-etcd-service-ca\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413101 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxskg\" (UniqueName: \"kubernetes.io/projected/e6710298-7872-4178-b6b8-730c3eddb965-kube-api-access-lxskg\") pod \"migrator-59844c95c7-kkxxb\" (UID: \"e6710298-7872-4178-b6b8-730c3eddb965\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkxxb" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413117 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/def1ce3f-02ba-4056-80f8-b0ba00fa64b2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rlddw\" (UID: \"def1ce3f-02ba-4056-80f8-b0ba00fa64b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413140 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413155 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1489254c-ff30-49b3-b896-b823f1ae8559-trusted-ca\") pod \"console-operator-58897d9998-zw5q5\" (UID: \"1489254c-ff30-49b3-b896-b823f1ae8559\") " pod="openshift-console-operator/console-operator-58897d9998-zw5q5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413170 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfefef62-2fe7-47c6-9a57-dadd3fd6705d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-cw648\" (UID: \"dfefef62-2fe7-47c6-9a57-dadd3fd6705d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cw648" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413186 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2wsm\" (UniqueName: \"kubernetes.io/projected/dfefef62-2fe7-47c6-9a57-dadd3fd6705d-kube-api-access-q2wsm\") pod \"kube-storage-version-migrator-operator-b67b599dd-cw648\" (UID: \"dfefef62-2fe7-47c6-9a57-dadd3fd6705d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cw648" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413200 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/a05cb0ce-c6e4-4ef9-b32e-8910443fc316-srv-cert\") pod \"olm-operator-6b444d44fb-kd4xr\" (UID: \"a05cb0ce-c6e4-4ef9-b32e-8910443fc316\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413216 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btzbj\" (UniqueName: \"kubernetes.io/projected/0677ae19-e425-485b-9206-98c9ad11aea8-kube-api-access-btzbj\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413232 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ptf4\" (UniqueName: \"kubernetes.io/projected/6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d-kube-api-access-7ptf4\") pod \"machine-config-operator-74547568cd-fjdtg\" (UID: \"6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413248 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2a8d3a02-1b4e-4258-91d0-1d1b4caeb852-webhook-cert\") pod \"packageserver-d55dfcdfc-glvqg\" (UID: \"2a8d3a02-1b4e-4258-91d0-1d1b4caeb852\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413264 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw7ph\" (UniqueName: \"kubernetes.io/projected/8a8d494a-4ed4-4707-9354-98912697989d-kube-api-access-tw7ph\") pod \"machine-config-controller-84d6567774-bwk8r\" (UID: \"8a8d494a-4ed4-4707-9354-98912697989d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413284 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d-proxy-tls\") pod \"machine-config-operator-74547568cd-fjdtg\" (UID: \"6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413311 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgk7h\" (UniqueName: \"kubernetes.io/projected/9c69b35b-9333-43fd-8af9-1a046fa97995-kube-api-access-kgk7h\") pod \"multus-admission-controller-857f4d67dd-8ldnl\" (UID: \"9c69b35b-9333-43fd-8af9-1a046fa97995\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8ldnl" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413327 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d-images\") pod \"machine-config-operator-74547568cd-fjdtg\" (UID: \"6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413343 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2a8d3a02-1b4e-4258-91d0-1d1b4caeb852-tmpfs\") pod \"packageserver-d55dfcdfc-glvqg\" 
(UID: \"2a8d3a02-1b4e-4258-91d0-1d1b4caeb852\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413367 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93e10fcb-3cb5-454a-bcd1-1eae918e0601-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vxpmj\" (UID: \"93e10fcb-3cb5-454a-bcd1-1eae918e0601\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413383 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/099b4860-bccd-462a-8a0e-f28604353408-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9zgl9\" (UID: \"099b4860-bccd-462a-8a0e-f28604353408\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9zgl9" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413406 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413435 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6391ef36-ef23-42f8-ae03-7610ede1f819-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hc8bs\" (UID: \"6391ef36-ef23-42f8-ae03-7610ede1f819\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hc8bs" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413449 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/490f6a3a-21e2-4264-8a92-75202ba3db64-console-serving-cert\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413464 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-etcd-ca\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413482 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b92f03d-b5af-485d-b1a6-6e9548b7c8ba-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-klv9q\" (UID: \"7b92f03d-b5af-485d-b1a6-6e9548b7c8ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-klv9q" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413498 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e486d4f3-46ec-4d4f-9c66-adc95459c76a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dgtql\" (UID: \"e486d4f3-46ec-4d4f-9c66-adc95459c76a\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413527 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9c69b35b-9333-43fd-8af9-1a046fa97995-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8ldnl\" (UID: \"9c69b35b-9333-43fd-8af9-1a046fa97995\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8ldnl" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413541 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099b4860-bccd-462a-8a0e-f28604353408-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9zgl9\" (UID: \"099b4860-bccd-462a-8a0e-f28604353408\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9zgl9" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413557 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc9d9307-1c9b-4162-9ffb-2493af6c4b54-cert\") pod \"ingress-canary-5dvr4\" (UID: \"fc9d9307-1c9b-4162-9ffb-2493af6c4b54\") " pod="openshift-ingress-canary/ingress-canary-5dvr4" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413573 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93e10fcb-3cb5-454a-bcd1-1eae918e0601-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vxpmj\" (UID: \"93e10fcb-3cb5-454a-bcd1-1eae918e0601\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413587 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-mountpoint-dir\") pod \"csi-hostpathplugin-qg7b7\" (UID: \"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413602 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1489254c-ff30-49b3-b896-b823f1ae8559-serving-cert\") pod \"console-operator-58897d9998-zw5q5\" (UID: \"1489254c-ff30-49b3-b896-b823f1ae8559\") " pod="openshift-console-operator/console-operator-58897d9998-zw5q5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413620 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413638 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b92f03d-b5af-485d-b1a6-6e9548b7c8ba-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-klv9q\" (UID: \"7b92f03d-b5af-485d-b1a6-6e9548b7c8ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-klv9q" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413654 4945 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb9903e-6625-40a8-8e02-16b8175ff9bf-config\") pod \"service-ca-operator-777779d784-gcfhk\" (UID: \"2cb9903e-6625-40a8-8e02-16b8175ff9bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gcfhk" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413669 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d-metrics-tls\") pod \"dns-default-79krd\" (UID: \"de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d\") " pod="openshift-dns/dns-default-79krd" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413684 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kvsx\" (UniqueName: \"kubernetes.io/projected/fc9d9307-1c9b-4162-9ffb-2493af6c4b54-kube-api-access-9kvsx\") pod \"ingress-canary-5dvr4\" (UID: \"fc9d9307-1c9b-4162-9ffb-2493af6c4b54\") " pod="openshift-ingress-canary/ingress-canary-5dvr4" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413701 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f910f213-8fbe-44fe-888a-aea783dcd0ec-signing-cabundle\") pod \"service-ca-9c57cc56f-7hvvw\" (UID: \"f910f213-8fbe-44fe-888a-aea783dcd0ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-7hvvw" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413718 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/010d6ddb-7351-4f8b-9d3c-f74436a4bb0d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwk56\" (UID: \"010d6ddb-7351-4f8b-9d3c-f74436a4bb0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwk56" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413734 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-audit-policies\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413755 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1489254c-ff30-49b3-b896-b823f1ae8559-config\") pod \"console-operator-58897d9998-zw5q5\" (UID: \"1489254c-ff30-49b3-b896-b823f1ae8559\") " pod="openshift-console-operator/console-operator-58897d9998-zw5q5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413771 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt68q\" (UniqueName: \"kubernetes.io/projected/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-kube-api-access-qt68q\") pod \"collect-profiles-29465235-sg2kt\" (UID: \"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413786 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfefef62-2fe7-47c6-9a57-dadd3fd6705d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-cw648\" (UID: \"dfefef62-2fe7-47c6-9a57-dadd3fd6705d\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cw648" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413800 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-plugins-dir\") pod \"csi-hostpathplugin-qg7b7\" (UID: \"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413817 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/662bb234-dff9-4a44-9432-c2f864195ce0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-79m2f\" (UID: \"662bb234-dff9-4a44-9432-c2f864195ce0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79m2f" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413833 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fjdtg\" (UID: \"6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413850 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p7hf\" (UniqueName: \"kubernetes.io/projected/f910f213-8fbe-44fe-888a-aea783dcd0ec-kube-api-access-7p7hf\") pod \"service-ca-9c57cc56f-7hvvw\" (UID: \"f910f213-8fbe-44fe-888a-aea783dcd0ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-7hvvw" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413866 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e486d4f3-46ec-4d4f-9c66-adc95459c76a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dgtql\" (UID: \"e486d4f3-46ec-4d4f-9c66-adc95459c76a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413880 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdgb9\" (UniqueName: \"kubernetes.io/projected/93e10fcb-3cb5-454a-bcd1-1eae918e0601-kube-api-access-hdgb9\") pod \"marketplace-operator-79b997595-vxpmj\" (UID: \"93e10fcb-3cb5-454a-bcd1-1eae918e0601\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413896 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413911 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-etcd-client\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413933 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb8wn\" (UniqueName: \"kubernetes.io/projected/def1ce3f-02ba-4056-80f8-b0ba00fa64b2-kube-api-access-cb8wn\") pod \"ingress-operator-5b745b69d9-rlddw\" (UID: \"def1ce3f-02ba-4056-80f8-b0ba00fa64b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413950 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsdw8\" (UniqueName: \"kubernetes.io/projected/490f6a3a-21e2-4264-8a92-75202ba3db64-kube-api-access-zsdw8\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.415197 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-service-ca\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.416147 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a8d494a-4ed4-4707-9354-98912697989d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bwk8r\" (UID: \"8a8d494a-4ed4-4707-9354-98912697989d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.416401 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.416924 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-oauth-serving-cert\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.417013 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-console-config\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.417829 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.417855 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.418328 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-trusted-ca-bundle\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.418518 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-csi-data-dir\") pod \"csi-hostpathplugin-qg7b7\" (UID: \"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.419558 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-config\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.419668 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93e10fcb-3cb5-454a-bcd1-1eae918e0601-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vxpmj\" (UID: \"93e10fcb-3cb5-454a-bcd1-1eae918e0601\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.420027 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-mountpoint-dir\") pod \"csi-hostpathplugin-qg7b7\" (UID: \"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.420508 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fjdtg\" (UID: \"6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.421591 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/def1ce3f-02ba-4056-80f8-b0ba00fa64b2-trusted-ca\") pod \"ingress-operator-5b745b69d9-rlddw\" (UID: \"def1ce3f-02ba-4056-80f8-b0ba00fa64b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.422134 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/490f6a3a-21e2-4264-8a92-75202ba3db64-console-oauth-config\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.422307 4945 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1489254c-ff30-49b3-b896-b823f1ae8559-config\") pod \"console-operator-58897d9998-zw5q5\" (UID: \"1489254c-ff30-49b3-b896-b823f1ae8559\") " pod="openshift-console-operator/console-operator-58897d9998-zw5q5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.422388 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e486d4f3-46ec-4d4f-9c66-adc95459c76a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dgtql\" (UID: \"e486d4f3-46ec-4d4f-9c66-adc95459c76a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.422754 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-etcd-ca\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.424550 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-serving-cert\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.424594 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2a8d3a02-1b4e-4258-91d0-1d1b4caeb852-apiservice-cert\") pod \"packageserver-d55dfcdfc-glvqg\" (UID: \"2a8d3a02-1b4e-4258-91d0-1d1b4caeb852\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.424620 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/010d6ddb-7351-4f8b-9d3c-f74436a4bb0d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwk56\" (UID: \"010d6ddb-7351-4f8b-9d3c-f74436a4bb0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwk56" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.424722 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/490f6a3a-21e2-4264-8a92-75202ba3db64-console-serving-cert\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.424726 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a05cb0ce-c6e4-4ef9-b32e-8910443fc316-profile-collector-cert\") pod \"olm-operator-6b444d44fb-kd4xr\" (UID: \"a05cb0ce-c6e4-4ef9-b32e-8910443fc316\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.424973 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-socket-dir\") pod \"csi-hostpathplugin-qg7b7\" (UID: \"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" Jan 08 23:17:42 crc 
kubenswrapper[4945]: I0108 23:17:42.425133 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1489254c-ff30-49b3-b896-b823f1ae8559-serving-cert\") pod \"console-operator-58897d9998-zw5q5\" (UID: \"1489254c-ff30-49b3-b896-b823f1ae8559\") " pod="openshift-console-operator/console-operator-58897d9998-zw5q5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.425594 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/662bb234-dff9-4a44-9432-c2f864195ce0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-79m2f\" (UID: \"662bb234-dff9-4a44-9432-c2f864195ce0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79m2f" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.425677 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.425813 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f910f213-8fbe-44fe-888a-aea783dcd0ec-signing-key\") pod \"service-ca-9c57cc56f-7hvvw\" (UID: \"f910f213-8fbe-44fe-888a-aea783dcd0ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-7hvvw" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.413293 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-registration-dir\") pod \"csi-hostpathplugin-qg7b7\" (UID: \"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.426481 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2a8d3a02-1b4e-4258-91d0-1d1b4caeb852-tmpfs\") pod \"packageserver-d55dfcdfc-glvqg\" (UID: \"2a8d3a02-1b4e-4258-91d0-1d1b4caeb852\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.427371 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-plugins-dir\") pod \"csi-hostpathplugin-qg7b7\" (UID: \"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.427558 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-audit-policies\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.428433 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1489254c-ff30-49b3-b896-b823f1ae8559-trusted-ca\") pod \"console-operator-58897d9998-zw5q5\" (UID: 
\"1489254c-ff30-49b3-b896-b823f1ae8559\") " pod="openshift-console-operator/console-operator-58897d9998-zw5q5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.428502 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f910f213-8fbe-44fe-888a-aea783dcd0ec-signing-cabundle\") pod \"service-ca-9c57cc56f-7hvvw\" (UID: \"f910f213-8fbe-44fe-888a-aea783dcd0ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-7hvvw" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.428707 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/010d6ddb-7351-4f8b-9d3c-f74436a4bb0d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwk56\" (UID: \"010d6ddb-7351-4f8b-9d3c-f74436a4bb0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwk56" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.428762 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-etcd-service-ca\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.428801 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.428852 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.428945 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-etcd-client\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.429020 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2a8d3a02-1b4e-4258-91d0-1d1b4caeb852-webhook-cert\") pod \"packageserver-d55dfcdfc-glvqg\" (UID: \"2a8d3a02-1b4e-4258-91d0-1d1b4caeb852\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg" Jan 08 23:17:42 crc kubenswrapper[4945]: E0108 23:17:42.429235 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:42.929220647 +0000 UTC m=+133.240379593 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.429878 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.430350 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.433489 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.434375 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.434481 4945 request.go:700] Waited for 1.002379884s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/serviceaccounts/router/token Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.434588 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.438509 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-secret-volume\") pod \"collect-profiles-29465235-sg2kt\" (UID: \"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.438514 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
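The nestedpendingoperations.go:348 records show how the kubelet serializes and retries failed volume operations: the failed MountDevice is parked and no retry is permitted until the deadline computed from durationBeforeRetry. A sketch of that retry shape using apimachinery's generic backoff helper; the 500ms initial delay mirrors the log line, while the factor and step count are assumptions, not the kubelet's internal parameters:

```go
// Retrying a failing volume operation with exponential backoff, in the
// spirit of "No retries permitted until ... (durationBeforeRetry 500ms)".
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

var errDriverNotRegistered = errors.New(
	"driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")

// mountDevice stands in for the CSI MountDevice call; here it pretends the
// driver finishes registering after a few attempts.
func mountDevice(attempt int) error {
	if attempt < 3 {
		return errDriverNotRegistered
	}
	return nil
}

func main() {
	attempt := 0
	backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 5}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		if err := mountDevice(attempt); err != nil {
			fmt.Println("retrying after error:", err)
			return false, nil // not done; wait out the next backoff interval
		}
		return true, nil
	})
	fmt.Println("final result:", err)
}
```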
\"kubernetes.io/secret/e486d4f3-46ec-4d4f-9c66-adc95459c76a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dgtql\" (UID: \"e486d4f3-46ec-4d4f-9c66-adc95459c76a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.438514 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93e10fcb-3cb5-454a-bcd1-1eae918e0601-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vxpmj\" (UID: \"93e10fcb-3cb5-454a-bcd1-1eae918e0601\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.438576 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/def1ce3f-02ba-4056-80f8-b0ba00fa64b2-metrics-tls\") pod \"ingress-operator-5b745b69d9-rlddw\" (UID: \"def1ce3f-02ba-4056-80f8-b0ba00fa64b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.448510 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lnlw\" (UniqueName: \"kubernetes.io/projected/1f4787a9-c74e-4dc3-b90a-0ff81584f890-kube-api-access-8lnlw\") pod \"openshift-config-operator-7777fb866f-hj428\" (UID: \"1f4787a9-c74e-4dc3-b90a-0ff81584f890\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hj428" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.457544 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn9x2\" (UniqueName: \"kubernetes.io/projected/1cce3808-cf0e-430b-bf61-b86cee0baf44-kube-api-access-sn9x2\") pod \"router-default-5444994796-62c5v\" (UID: \"1cce3808-cf0e-430b-bf61-b86cee0baf44\") " pod="openshift-ingress/router-default-5444994796-62c5v" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.469841 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2xws\" (UniqueName: \"kubernetes.io/projected/5ab5d977-272f-41d9-b787-ca6d00d1c668-kube-api-access-z2xws\") pod \"cluster-samples-operator-665b6dd947-jsmc9\" (UID: \"5ab5d977-272f-41d9-b787-ca6d00d1c668\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jsmc9" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.491243 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7jkf\" (UniqueName: \"kubernetes.io/projected/dea31f18-7796-48e2-af32-a185d3221a4a-kube-api-access-g7jkf\") pod \"catalog-operator-68c6474976-h66px\" (UID: \"dea31f18-7796-48e2-af32-a185d3221a4a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.511701 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kktc4\" (UniqueName: \"kubernetes.io/projected/74e7ac98-1603-4ed9-9306-735d271e5142-kube-api-access-kktc4\") pod \"apiserver-76f77b778f-ppfqk\" (UID: \"74e7ac98-1603-4ed9-9306-735d271e5142\") " pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.514656 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:42 crc kubenswrapper[4945]: E0108 23:17:42.515393 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:43.015374871 +0000 UTC m=+133.326533817 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.516115 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.536863 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.556115 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.576409 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.583735 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6391ef36-ef23-42f8-ae03-7610ede1f819-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hc8bs\" (UID: \"6391ef36-ef23-42f8-ae03-7610ede1f819\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hc8bs" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.595709 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.607879 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jsmc9" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.616286 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:42 crc kubenswrapper[4945]: E0108 23:17:42.616858 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:43.116831125 +0000 UTC m=+133.427990061 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.617259 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.628551 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.629357 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a8d494a-4ed4-4707-9354-98912697989d-proxy-tls\") pod \"machine-config-controller-84d6567774-bwk8r\" (UID: \"8a8d494a-4ed4-4707-9354-98912697989d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.637383 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.641726 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6391ef36-ef23-42f8-ae03-7610ede1f819-config\") pod \"kube-apiserver-operator-766d6c64bb-hc8bs\" (UID: \"6391ef36-ef23-42f8-ae03-7610ede1f819\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hc8bs" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.654175 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hj428" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.656020 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.672463 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb9903e-6625-40a8-8e02-16b8175ff9bf-serving-cert\") pod \"service-ca-operator-777779d784-gcfhk\" (UID: \"2cb9903e-6625-40a8-8e02-16b8175ff9bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gcfhk" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.676624 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.681066 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a05cb0ce-c6e4-4ef9-b32e-8910443fc316-srv-cert\") pod \"olm-operator-6b444d44fb-kd4xr\" (UID: \"a05cb0ce-c6e4-4ef9-b32e-8910443fc316\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.703964 4945 util.go:30] "No sandbox for pod can be found. 
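Every failed mount and unmount of pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 above has the same root cause: kubevirt.io.hostpath-provisioner is not yet in the kubelet's list of registered CSI drivers, so the 500ms retries keep failing until the hostpath plugin registers over the kubelet plugin registration socket (the csi-hostpathplugin-qg7b7 pod being mounted in this same log is what provides it). The registered list is mirrored in the node's CSINode object; a sketch that reads it, assuming a kubeconfig-based client and using the node name crc from the log (roughly equivalent to `oc get csinode crc -o yaml`):

```go
// List the CSI drivers the node has actually registered, i.e. the list the
// "not found in the list of registered CSI drivers" error refers to.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	csiNode, err := client.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered CSI driver:", d.Name)
	}
}
```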
Need to start a new one" pod="openshift-ingress/router-default-5444994796-62c5v" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.704736 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" event={"ID":"d2534d4c-181b-45a2-8fec-118b7f17d296","Type":"ContainerStarted","Data":"8462206c011a7e9b68d92a88f253129996851fd3f4e310f5feb241a57284aa27"} Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.704765 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" event={"ID":"d2534d4c-181b-45a2-8fec-118b7f17d296","Type":"ContainerStarted","Data":"9682c12e0282266ab15b35f67a57576e762a341879b45217d9ab60f15b5a64e3"} Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.705109 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.706520 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.706772 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.708161 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" event={"ID":"cb8ba07b-804c-4712-9215-6c3ea4f0d96c","Type":"ContainerStarted","Data":"0221f941b02ab0b413610f25457de4876b70f88a740026465e92699eb83fdac5"} Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.708181 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" event={"ID":"cb8ba07b-804c-4712-9215-6c3ea4f0d96c","Type":"ContainerStarted","Data":"b4528f1c877f27a86c70be611cd16c2a7e45ed80d08762adf3759e7ebc4fa572"} Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.708743 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.710686 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb9903e-6625-40a8-8e02-16b8175ff9bf-config\") pod \"service-ca-operator-777779d784-gcfhk\" (UID: \"2cb9903e-6625-40a8-8e02-16b8175ff9bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gcfhk" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.717924 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.720126 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:42 crc kubenswrapper[4945]: E0108 23:17:42.720479 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
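"No sandbox for pod can be found. Need to start a new one" (util.go:30) means the kubelet is about to ask the container runtime over the CRI for a fresh pod sandbox before any of the pod's containers can start. A sketch that lists existing sandboxes through the same API; the CRI-O socket path is an assumption for a host like this one:

```go
// Listing pod sandboxes over the CRI, the API the kubelet uses when it
// decides it needs to start a new sandbox for a pod.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListPodSandbox(context.TODO(), &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		panic(err)
	}
	for _, s := range resp.Items {
		if s.Metadata == nil {
			continue
		}
		fmt.Printf("sandbox %s/%s state=%s\n", s.Metadata.Namespace, s.Metadata.Name, s.State)
	}
}
```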
nodeName:}" failed. No retries permitted until 2026-01-08 23:17:43.220448416 +0000 UTC m=+133.531607362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.721190 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:42 crc kubenswrapper[4945]: E0108 23:17:42.721752 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:43.221736659 +0000 UTC m=+133.532895685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.723768 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" event={"ID":"329c2ef0-0dff-43fd-b3e3-0a65aad24225","Type":"ContainerStarted","Data":"a76f8ecbcfb3f1733ed8ef7736b5fc0d90f66fbe1179af98aa807b0dcf6d6620"} Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.723820 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" event={"ID":"329c2ef0-0dff-43fd-b3e3-0a65aad24225","Type":"ContainerStarted","Data":"43e82934034a0f3510c64381a945bb43a4cd9d2285527ad5aa7d78b554034b0e"} Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.729762 4945 generic.go:334] "Generic (PLEG): container finished" podID="e95b2dd5-21a1-4f5f-9849-2b3ceba0d888" containerID="c057cb78465977c623b2c8bb61b60fbafaf311b6264745d64fd31c715fe9b604" exitCode=0 Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.729887 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" event={"ID":"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888","Type":"ContainerDied","Data":"c057cb78465977c623b2c8bb61b60fbafaf311b6264745d64fd31c715fe9b604"} Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.729917 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" event={"ID":"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888","Type":"ContainerStarted","Data":"107b28764046862894f1443f645f4e1077c5c6f5cf605569235ab41e12f4b322"} Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.733603 4945 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.735100 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" event={"ID":"234da26f-da47-42e2-9041-57f2c7ba819f","Type":"ContainerStarted","Data":"7d12546758a86654cfcf66991c5d462a347ae86010e3d9425f03be719609d789"} Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.735134 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" event={"ID":"234da26f-da47-42e2-9041-57f2c7ba819f","Type":"ContainerStarted","Data":"d3c6e008a5b2cdde5fb8e4b2e025dd41454826f1ea399fff721bbbb65abfd78d"} Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.735892 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.738022 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph" event={"ID":"f15a080f-5182-49ab-bcb8-75d85654378a","Type":"ContainerStarted","Data":"e4bf86407920a541ad6d4799ffbf84f613bc76c743807f71919720ae829c1672"} Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.738060 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph" event={"ID":"f15a080f-5182-49ab-bcb8-75d85654378a","Type":"ContainerStarted","Data":"04f5a9e1ac44cff2e6154def8e33d7f169a99f310c39d5d790750dae21979541"} Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.738093 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph" event={"ID":"f15a080f-5182-49ab-bcb8-75d85654378a","Type":"ContainerStarted","Data":"5c16101b0fed5c97758094e40f1b66eae31b1005a3f680e202c266a2e75bda6d"} Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.751672 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.758287 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.777864 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.782520 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfefef62-2fe7-47c6-9a57-dadd3fd6705d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-cw648\" (UID: \"dfefef62-2fe7-47c6-9a57-dadd3fd6705d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cw648" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.801179 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.822965 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:42 crc kubenswrapper[4945]: E0108 23:17:42.828219 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:43.328186131 +0000 UTC m=+133.639345077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.836135 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.859728 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.862203 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.870903 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfefef62-2fe7-47c6-9a57-dadd3fd6705d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-cw648\" (UID: \"dfefef62-2fe7-47c6-9a57-dadd3fd6705d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cw648" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.874377 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b92f03d-b5af-485d-b1a6-6e9548b7c8ba-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-klv9q\" (UID: \"7b92f03d-b5af-485d-b1a6-6e9548b7c8ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-klv9q" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.876094 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.897624 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.906235 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b92f03d-b5af-485d-b1a6-6e9548b7c8ba-config\") pod \"kube-controller-manager-operator-78b949d7b-klv9q\" (UID: \"7b92f03d-b5af-485d-b1a6-6e9548b7c8ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-klv9q" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.928199 4945 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.928746 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:42 crc kubenswrapper[4945]: E0108 23:17:42.929310 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:43.429297581 +0000 UTC m=+133.740456527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.941073 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ppfqk"] Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.960465 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.971016 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jsmc9"] Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.972868 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b65nb\" (UniqueName: \"kubernetes.io/projected/e1b585b7-5543-4705-9167-d53bfc8c1f8d-kube-api-access-b65nb\") pod \"package-server-manager-789f6589d5-5fcjl\" (UID: \"e1b585b7-5543-4705-9167-d53bfc8c1f8d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fcjl" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.976325 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.979249 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d-proxy-tls\") pod \"machine-config-operator-74547568cd-fjdtg\" (UID: \"6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg" Jan 08 23:17:42 crc kubenswrapper[4945]: I0108 23:17:42.996897 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.011133 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b0c68805-16fc-47d5-8ca8-7ebd2e59fcb9-metrics-tls\") pod \"dns-operator-744455d44c-jkvh2\" (UID: \"b0c68805-16fc-47d5-8ca8-7ebd2e59fcb9\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-jkvh2" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.015449 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fcjl" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.016497 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.017311 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d-images\") pod \"machine-config-operator-74547568cd-fjdtg\" (UID: \"6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.029261 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hj428"] Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.029968 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:43 crc kubenswrapper[4945]: E0108 23:17:43.031567 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:43.531540302 +0000 UTC m=+133.842699248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.032002 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:43 crc kubenswrapper[4945]: E0108 23:17:43.032375 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:43.532368824 +0000 UTC m=+133.843527770 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.043389 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.055790 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.077281 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.096105 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.104745 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/099b4860-bccd-462a-8a0e-f28604353408-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9zgl9\" (UID: \"099b4860-bccd-462a-8a0e-f28604353408\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9zgl9" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.115450 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.130953 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px"] Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.132659 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:43 crc kubenswrapper[4945]: E0108 23:17:43.133326 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:43.633304629 +0000 UTC m=+133.944463575 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.140542 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.141550 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/099b4860-bccd-462a-8a0e-f28604353408-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9zgl9\" (UID: \"099b4860-bccd-462a-8a0e-f28604353408\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9zgl9" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.158339 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.182282 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.195974 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.218337 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.230350 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9c69b35b-9333-43fd-8af9-1a046fa97995-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8ldnl\" (UID: \"9c69b35b-9333-43fd-8af9-1a046fa97995\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8ldnl" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.234444 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:43 crc kubenswrapper[4945]: E0108 23:17:43.234951 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:43.734938413 +0000 UTC m=+134.046097359 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.263168 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.263350 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.285209 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.299279 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.307801 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-config-volume\") pod \"collect-profiles-29465235-sg2kt\" (UID: \"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.339517 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.339563 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 08 23:17:43 crc kubenswrapper[4945]: E0108 23:17:43.340049 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:43.84003347 +0000 UTC m=+134.151192416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.359107 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.362457 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.371841 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc9d9307-1c9b-4162-9ffb-2493af6c4b54-cert\") pod \"ingress-canary-5dvr4\" (UID: \"fc9d9307-1c9b-4162-9ffb-2493af6c4b54\") " pod="openshift-ingress-canary/ingress-canary-5dvr4" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.376224 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.397289 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.398101 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d-config-volume\") pod \"dns-default-79krd\" (UID: \"de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d\") " pod="openshift-dns/dns-default-79krd" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.402467 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fcjl"] Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.416783 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 08 23:17:43 crc kubenswrapper[4945]: E0108 23:17:43.419123 4945 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 08 23:17:43 crc kubenswrapper[4945]: E0108 23:17:43.419207 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ea7f797-df27-4912-83e4-efe654ea3a2a-certs podName:9ea7f797-df27-4912-83e4-efe654ea3a2a nodeName:}" failed. No retries permitted until 2026-01-08 23:17:43.919189802 +0000 UTC m=+134.230348748 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/9ea7f797-df27-4912-83e4-efe654ea3a2a-certs") pod "machine-config-server-btrhh" (UID: "9ea7f797-df27-4912-83e4-efe654ea3a2a") : failed to sync secret cache: timed out waiting for the condition Jan 08 23:17:43 crc kubenswrapper[4945]: E0108 23:17:43.419393 4945 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 08 23:17:43 crc kubenswrapper[4945]: E0108 23:17:43.419501 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ea7f797-df27-4912-83e4-efe654ea3a2a-node-bootstrap-token podName:9ea7f797-df27-4912-83e4-efe654ea3a2a nodeName:}" failed. No retries permitted until 2026-01-08 23:17:43.919463299 +0000 UTC m=+134.230622245 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/9ea7f797-df27-4912-83e4-efe654ea3a2a-node-bootstrap-token") pod "machine-config-server-btrhh" (UID: "9ea7f797-df27-4912-83e4-efe654ea3a2a") : failed to sync secret cache: timed out waiting for the condition Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.423789 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d-metrics-tls\") pod \"dns-default-79krd\" (UID: \"de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d\") " pod="openshift-dns/dns-default-79krd" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.435123 4945 request.go:700] Waited for 1.937393832s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&limit=500&resourceVersion=0 Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.436469 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.441449 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:43 crc kubenswrapper[4945]: E0108 23:17:43.442094 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:43.942073165 +0000 UTC m=+134.253232121 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.456067 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.479606 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.495855 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.518575 4945 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.539675 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.542344 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:43 crc kubenswrapper[4945]: E0108 23:17:43.542959 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:44.042933748 +0000 UTC m=+134.354092694 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.556757 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.598186 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.615982 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.636133 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.643705 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:43 crc kubenswrapper[4945]: E0108 23:17:43.644170 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:44.144156851 +0000 UTC m=+134.455315797 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.656018 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.692951 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-bound-sa-token\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.713592 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkfm2\" (UniqueName: \"kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-kube-api-access-tkfm2\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.739068 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9ttz\" (UniqueName: \"kubernetes.io/projected/e41e28f5-d0c1-4a39-ae5e-b85d00c488a4-kube-api-access-t9ttz\") pod \"openshift-apiserver-operator-796bbdcf4f-5d6z8\" (UID: \"e41e28f5-d0c1-4a39-ae5e-b85d00c488a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5d6z8" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.747694 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:43 crc kubenswrapper[4945]: E0108 23:17:43.749057 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:44.249034772 +0000 UTC m=+134.560193718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.757580 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jbcf\" (UniqueName: \"kubernetes.io/projected/297cbd4b-37f7-4ab3-82ea-da1872a05ef1-kube-api-access-9jbcf\") pod \"downloads-7954f5f757-n2sv5\" (UID: \"297cbd4b-37f7-4ab3-82ea-da1872a05ef1\") " pod="openshift-console/downloads-7954f5f757-n2sv5" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.758211 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px" event={"ID":"dea31f18-7796-48e2-af32-a185d3221a4a","Type":"ContainerStarted","Data":"595d289a97a23530dcf50821b6b6391919c7aa70895356a12ec25d4b4ad02ac9"} Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.758327 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px" event={"ID":"dea31f18-7796-48e2-af32-a185d3221a4a","Type":"ContainerStarted","Data":"ca4a42df0784e466b2ddd208b5af18060cc005b81feff8403e1ac7f1c45a8a43"} Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.758908 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.760897 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fcjl" event={"ID":"e1b585b7-5543-4705-9167-d53bfc8c1f8d","Type":"ContainerStarted","Data":"69d66317882cca8c3d5fe846c6b61553b23af1dc52ed3b9f58c396bf2da498b9"} Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.761586 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fcjl" event={"ID":"e1b585b7-5543-4705-9167-d53bfc8c1f8d","Type":"ContainerStarted","Data":"6b9001a45b942eab7ab0f66412524c57c22d716c350ad155308d9eeeb4fa211c"} Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.761676 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fcjl" event={"ID":"e1b585b7-5543-4705-9167-d53bfc8c1f8d","Type":"ContainerStarted","Data":"a0d055f75c202b750d95762b7f282853f2cbdcc84e16917c1dbed2d17f76e588"} Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.761860 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fcjl" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.763288 4945 generic.go:334] "Generic (PLEG): container finished" podID="1f4787a9-c74e-4dc3-b90a-0ff81584f890" containerID="29567e092cc8b14677e2ac8a004de0de87b251a0f3db1205ed42e2ae2e7b9420" exitCode=0 Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.763413 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hj428" 
event={"ID":"1f4787a9-c74e-4dc3-b90a-0ff81584f890","Type":"ContainerDied","Data":"29567e092cc8b14677e2ac8a004de0de87b251a0f3db1205ed42e2ae2e7b9420"} Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.763456 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hj428" event={"ID":"1f4787a9-c74e-4dc3-b90a-0ff81584f890","Type":"ContainerStarted","Data":"db50f45ca7f13ccc21a803d3fc5eb996a433059c895d3660a9ba176f56075c1c"} Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.767511 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.767692 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-62c5v" event={"ID":"1cce3808-cf0e-430b-bf61-b86cee0baf44","Type":"ContainerStarted","Data":"809c22f6e47296b5f5856a31695dcb3a286c661adbef8a9b54ffb66356281d05"} Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.767737 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-62c5v" event={"ID":"1cce3808-cf0e-430b-bf61-b86cee0baf44","Type":"ContainerStarted","Data":"fe4c399035bc0f6e0f035b63f31267a2bd09c73cbd3482623adacca788bacc31"} Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.769620 4945 generic.go:334] "Generic (PLEG): container finished" podID="74e7ac98-1603-4ed9-9306-735d271e5142" containerID="032456f02cdb556830a1ac40549db419685e618c2382a22470fc06f529237cb0" exitCode=0 Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.769794 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" event={"ID":"74e7ac98-1603-4ed9-9306-735d271e5142","Type":"ContainerDied","Data":"032456f02cdb556830a1ac40549db419685e618c2382a22470fc06f529237cb0"} Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.769896 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" event={"ID":"74e7ac98-1603-4ed9-9306-735d271e5142","Type":"ContainerStarted","Data":"3f7b363feee5bf5dd95082686c6d24f66e9548fcaf1ac30eabe689c296171434"} Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.772339 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvdcm\" (UniqueName: \"kubernetes.io/projected/010d6ddb-7351-4f8b-9d3c-f74436a4bb0d-kube-api-access-cvdcm\") pod \"openshift-controller-manager-operator-756b6f6bc6-fwk56\" (UID: \"010d6ddb-7351-4f8b-9d3c-f74436a4bb0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwk56" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.773728 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jsmc9" event={"ID":"5ab5d977-272f-41d9-b787-ca6d00d1c668","Type":"ContainerStarted","Data":"80518791581c3fb73a2e906b1929ed9ef14344c2cdb62f81f57d8768cebe2dad"} Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.773943 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jsmc9" event={"ID":"5ab5d977-272f-41d9-b787-ca6d00d1c668","Type":"ContainerStarted","Data":"5b49d24abc7bd440f96bcd793202e3c8bf9589eb24cb24d461235091bebd2327"} Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.774257 4945 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jsmc9" event={"ID":"5ab5d977-272f-41d9-b787-ca6d00d1c668","Type":"ContainerStarted","Data":"09899bea0c83a0f028f61036c789b54a1e7924518b398e3c3dca143f084a656d"} Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.777195 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" event={"ID":"e95b2dd5-21a1-4f5f-9849-2b3ceba0d888","Type":"ContainerStarted","Data":"24ee218a0fc310d63d11b9727d14af2c05372e101ff0725c1b7b828f624c541c"} Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.792625 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvht8\" (UniqueName: \"kubernetes.io/projected/2cb9903e-6625-40a8-8e02-16b8175ff9bf-kube-api-access-zvht8\") pod \"service-ca-operator-777779d784-gcfhk\" (UID: \"2cb9903e-6625-40a8-8e02-16b8175ff9bf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gcfhk" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.815370 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hfxk\" (UniqueName: \"kubernetes.io/projected/9ea7f797-df27-4912-83e4-efe654ea3a2a-kube-api-access-7hfxk\") pod \"machine-config-server-btrhh\" (UID: \"9ea7f797-df27-4912-83e4-efe654ea3a2a\") " pod="openshift-machine-config-operator/machine-config-server-btrhh" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.836911 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsdw8\" (UniqueName: \"kubernetes.io/projected/490f6a3a-21e2-4264-8a92-75202ba3db64-kube-api-access-zsdw8\") pod \"console-f9d7485db-pmfct\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.850697 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:43 crc kubenswrapper[4945]: E0108 23:17:43.851134 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:44.351114968 +0000 UTC m=+134.662274014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.862524 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6391ef36-ef23-42f8-ae03-7610ede1f819-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hc8bs\" (UID: \"6391ef36-ef23-42f8-ae03-7610ede1f819\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hc8bs" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.870320 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-n2sv5" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.877138 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5d6z8" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.882294 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wrwg\" (UniqueName: \"kubernetes.io/projected/ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5-kube-api-access-8wrwg\") pod \"csi-hostpathplugin-qg7b7\" (UID: \"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5\") " pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.895758 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv8w5\" (UniqueName: \"kubernetes.io/projected/2a8d3a02-1b4e-4258-91d0-1d1b4caeb852-kube-api-access-qv8w5\") pod \"packageserver-d55dfcdfc-glvqg\" (UID: \"2a8d3a02-1b4e-4258-91d0-1d1b4caeb852\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.911354 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e486d4f3-46ec-4d4f-9c66-adc95459c76a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dgtql\" (UID: \"e486d4f3-46ec-4d4f-9c66-adc95459c76a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.924235 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwk56" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.930970 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prk48\" (UniqueName: \"kubernetes.io/projected/e486d4f3-46ec-4d4f-9c66-adc95459c76a-kube-api-access-prk48\") pod \"cluster-image-registry-operator-dc59b4c8b-dgtql\" (UID: \"e486d4f3-46ec-4d4f-9c66-adc95459c76a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.951740 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.954342 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.954557 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9ea7f797-df27-4912-83e4-efe654ea3a2a-node-bootstrap-token\") pod \"machine-config-server-btrhh\" (UID: \"9ea7f797-df27-4912-83e4-efe654ea3a2a\") " pod="openshift-machine-config-operator/machine-config-server-btrhh" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.954597 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9ea7f797-df27-4912-83e4-efe654ea3a2a-certs\") pod \"machine-config-server-btrhh\" (UID: \"9ea7f797-df27-4912-83e4-efe654ea3a2a\") " pod="openshift-machine-config-operator/machine-config-server-btrhh" Jan 08 23:17:43 crc kubenswrapper[4945]: E0108 23:17:43.954621 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:44.454602802 +0000 UTC m=+134.765761748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.954694 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:43 crc kubenswrapper[4945]: E0108 23:17:43.955203 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:44.455191158 +0000 UTC m=+134.766350104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.957982 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.958447 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9ea7f797-df27-4912-83e4-efe654ea3a2a-node-bootstrap-token\") pod \"machine-config-server-btrhh\" (UID: \"9ea7f797-df27-4912-83e4-efe654ea3a2a\") " pod="openshift-machine-config-operator/machine-config-server-btrhh" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.959555 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw2k2\" (UniqueName: \"kubernetes.io/projected/b54d9295-2900-4e23-b8f0-a815fc8e9b7d-kube-api-access-qw2k2\") pod \"etcd-operator-b45778765-pzbw5\" (UID: \"b54d9295-2900-4e23-b8f0-a815fc8e9b7d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.960546 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9ea7f797-df27-4912-83e4-efe654ea3a2a-certs\") pod \"machine-config-server-btrhh\" (UID: \"9ea7f797-df27-4912-83e4-efe654ea3a2a\") " pod="openshift-machine-config-operator/machine-config-server-btrhh" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.964228 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.972810 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5gqv\" (UniqueName: \"kubernetes.io/projected/1489254c-ff30-49b3-b896-b823f1ae8559-kube-api-access-l5gqv\") pod \"console-operator-58897d9998-zw5q5\" (UID: \"1489254c-ff30-49b3-b896-b823f1ae8559\") " pod="openshift-console-operator/console-operator-58897d9998-zw5q5" Jan 08 23:17:43 crc kubenswrapper[4945]: I0108 23:17:43.982325 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.027435 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hc8bs" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.055610 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:44 crc kubenswrapper[4945]: E0108 23:17:44.055948 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:44.555933728 +0000 UTC m=+134.867092674 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.072050 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2blvl\" (UniqueName: \"kubernetes.io/projected/a05cb0ce-c6e4-4ef9-b32e-8910443fc316-kube-api-access-2blvl\") pod \"olm-operator-6b444d44fb-kd4xr\" (UID: \"a05cb0ce-c6e4-4ef9-b32e-8910443fc316\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.072846 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kvsx\" (UniqueName: \"kubernetes.io/projected/fc9d9307-1c9b-4162-9ffb-2493af6c4b54-kube-api-access-9kvsx\") pod \"ingress-canary-5dvr4\" (UID: \"fc9d9307-1c9b-4162-9ffb-2493af6c4b54\") " pod="openshift-ingress-canary/ingress-canary-5dvr4" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.078391 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdgb9\" (UniqueName: \"kubernetes.io/projected/93e10fcb-3cb5-454a-bcd1-1eae918e0601-kube-api-access-hdgb9\") pod \"marketplace-operator-79b997595-vxpmj\" (UID: \"93e10fcb-3cb5-454a-bcd1-1eae918e0601\") " pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.082571 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmn2g\" (UniqueName: \"kubernetes.io/projected/b0c68805-16fc-47d5-8ca8-7ebd2e59fcb9-kube-api-access-cmn2g\") pod \"dns-operator-744455d44c-jkvh2\" (UID: \"b0c68805-16fc-47d5-8ca8-7ebd2e59fcb9\") " pod="openshift-dns-operator/dns-operator-744455d44c-jkvh2" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.092308 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gcfhk" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.099197 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p7hf\" (UniqueName: \"kubernetes.io/projected/f910f213-8fbe-44fe-888a-aea783dcd0ec-kube-api-access-7p7hf\") pod \"service-ca-9c57cc56f-7hvvw\" (UID: \"f910f213-8fbe-44fe-888a-aea783dcd0ec\") " pod="openshift-service-ca/service-ca-9c57cc56f-7hvvw" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.105364 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.114439 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-278wr\" (UniqueName: \"kubernetes.io/projected/de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d-kube-api-access-278wr\") pod \"dns-default-79krd\" (UID: \"de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d\") " pod="openshift-dns/dns-default-79krd" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.135877 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g65pf\" (UniqueName: \"kubernetes.io/projected/662bb234-dff9-4a44-9432-c2f864195ce0-kube-api-access-g65pf\") pod \"control-plane-machine-set-operator-78cbb6b69f-79m2f\" (UID: \"662bb234-dff9-4a44-9432-c2f864195ce0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79m2f" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.148161 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b92f03d-b5af-485d-b1a6-6e9548b7c8ba-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-klv9q\" (UID: \"7b92f03d-b5af-485d-b1a6-6e9548b7c8ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-klv9q" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.159712 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.159923 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jkvh2" Jan 08 23:17:44 crc kubenswrapper[4945]: E0108 23:17:44.160326 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:44.660311295 +0000 UTC m=+134.971470241 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.188797 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2wsm\" (UniqueName: \"kubernetes.io/projected/dfefef62-2fe7-47c6-9a57-dadd3fd6705d-kube-api-access-q2wsm\") pod \"kube-storage-version-migrator-operator-b67b599dd-cw648\" (UID: \"dfefef62-2fe7-47c6-9a57-dadd3fd6705d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cw648" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.197051 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgk7h\" (UniqueName: \"kubernetes.io/projected/9c69b35b-9333-43fd-8af9-1a046fa97995-kube-api-access-kgk7h\") pod \"multus-admission-controller-857f4d67dd-8ldnl\" (UID: \"9c69b35b-9333-43fd-8af9-1a046fa97995\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8ldnl" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.218099 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw7ph\" (UniqueName: \"kubernetes.io/projected/8a8d494a-4ed4-4707-9354-98912697989d-kube-api-access-tw7ph\") pod \"machine-config-controller-84d6567774-bwk8r\" (UID: \"8a8d494a-4ed4-4707-9354-98912697989d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.218308 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-79krd" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.220840 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5dvr4" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.221820 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btzbj\" (UniqueName: \"kubernetes.io/projected/0677ae19-e425-485b-9206-98c9ad11aea8-kube-api-access-btzbj\") pod \"oauth-openshift-558db77b4-qn985\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.229519 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-btrhh" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.239335 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7hvvw" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.240416 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zw5q5" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.253346 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.254258 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb8wn\" (UniqueName: \"kubernetes.io/projected/def1ce3f-02ba-4056-80f8-b0ba00fa64b2-kube-api-access-cb8wn\") pod \"ingress-operator-5b745b69d9-rlddw\" (UID: \"def1ce3f-02ba-4056-80f8-b0ba00fa64b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.271561 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:44 crc kubenswrapper[4945]: E0108 23:17:44.272207 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:44.772192134 +0000 UTC m=+135.083351080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.275494 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-n2sv5"] Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.283210 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ptf4\" (UniqueName: \"kubernetes.io/projected/6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d-kube-api-access-7ptf4\") pod \"machine-config-operator-74547568cd-fjdtg\" (UID: \"6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.292643 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt68q\" (UniqueName: \"kubernetes.io/projected/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-kube-api-access-qt68q\") pod \"collect-profiles-29465235-sg2kt\" (UID: \"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.292868 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79m2f" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.304429 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.307803 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxskg\" (UniqueName: \"kubernetes.io/projected/e6710298-7872-4178-b6b8-730c3eddb965-kube-api-access-lxskg\") pod \"migrator-59844c95c7-kkxxb\" (UID: \"e6710298-7872-4178-b6b8-730c3eddb965\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkxxb" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.310695 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.348174 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.349219 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwk56"] Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.356720 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/099b4860-bccd-462a-8a0e-f28604353408-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9zgl9\" (UID: \"099b4860-bccd-462a-8a0e-f28604353408\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9zgl9" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.357427 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/def1ce3f-02ba-4056-80f8-b0ba00fa64b2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rlddw\" (UID: \"def1ce3f-02ba-4056-80f8-b0ba00fa64b2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.373776 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:44 crc kubenswrapper[4945]: E0108 23:17:44.374234 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:44.874221039 +0000 UTC m=+135.185379985 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.379735 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cw648" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.411389 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-klv9q" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.425266 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.474429 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9zgl9" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.480977 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:44 crc kubenswrapper[4945]: E0108 23:17:44.481506 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:44.981486063 +0000 UTC m=+135.292645009 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.485370 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.498242 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8ldnl" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.502604 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkxxb" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.572613 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.596805 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:44 crc kubenswrapper[4945]: E0108 23:17:44.597167 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:45.097156074 +0000 UTC m=+135.408315020 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:44 crc kubenswrapper[4945]: W0108 23:17:44.611810 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod297cbd4b_37f7_4ab3_82ea_da1872a05ef1.slice/crio-1d1ff68f22f9b8f5e770457103721590dbcf493972905b82bb6ff100cc2ab40e WatchSource:0}: Error finding container 1d1ff68f22f9b8f5e770457103721590dbcf493972905b82bb6ff100cc2ab40e: Status 404 returned error can't find the container with id 1d1ff68f22f9b8f5e770457103721590dbcf493972905b82bb6ff100cc2ab40e Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.697500 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:44 crc kubenswrapper[4945]: E0108 23:17:44.697680 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:45.197665027 +0000 UTC m=+135.508823973 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.698014 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:44 crc kubenswrapper[4945]: E0108 23:17:44.698441 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:45.198421577 +0000 UTC m=+135.509580583 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.705592 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-62c5v" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.718904 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 08 23:17:44 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld Jan 08 23:17:44 crc kubenswrapper[4945]: [+]process-running ok Jan 08 23:17:44 crc kubenswrapper[4945]: healthz check failed Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.718955 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.810689 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-btrhh" event={"ID":"9ea7f797-df27-4912-83e4-efe654ea3a2a","Type":"ContainerStarted","Data":"14a24c3e3ff03b20a68f35427b81aec5c309873f0c39125422c38ca9e728d770"} Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.811655 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:44 crc kubenswrapper[4945]: E0108 23:17:44.812092 4945 
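
[editor's note] The router startup probe above fails with "HTTP probe failed with statuscode: 500", and the prober reports the start of the response body, here the aggregated healthz output. Kubelet's HTTP prober counts only statuses in [200, 400) as success. A stand-alone sketch of that rule; the URL below is a placeholder, since the router's probe target is not printed in this log:

package main

import (
	"fmt"
	"io"
	"net/http"
)

// probeOnce mirrors kubelet's HTTP probe success rule: any status in
// [200, 400) passes; anything else fails and the body's start is reported.
func probeOnce(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as for the downloads pod later in this log
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("HTTP probe failed with statuscode: %d, start-of-body=%s", resp.StatusCode, body)
}

func main() {
	// Placeholder endpoint, not taken from the log.
	fmt.Println(probeOnce("http://127.0.0.1:1936/healthz/ready"))
}
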
Jan 08 23:17:44 crc kubenswrapper[4945]: E0108 23:17:44.812092 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:45.312075913 +0000 UTC m=+135.623234859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.827132 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-n2sv5" event={"ID":"297cbd4b-37f7-4ab3-82ea-da1872a05ef1","Type":"ContainerStarted","Data":"1d1ff68f22f9b8f5e770457103721590dbcf493972905b82bb6ff100cc2ab40e"}
Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.838299 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwk56" event={"ID":"010d6ddb-7351-4f8b-9d3c-f74436a4bb0d","Type":"ContainerStarted","Data":"e25e3d81e092323f271919720e5bec962a3b03c4fe8a57cb03a369499e35eb01"}
Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.854194 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hj428" event={"ID":"1f4787a9-c74e-4dc3-b90a-0ff81584f890","Type":"ContainerStarted","Data":"ced57b2059ec4096d8119933bc88a4a9c2e7d5fc46c0c093ded16848be040d9d"}
Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.854883 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hj428"
Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.876365 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" event={"ID":"74e7ac98-1603-4ed9-9306-735d271e5142","Type":"ContainerStarted","Data":"048ff6d153d6f8bf787dd825e6667df8132428ea009a84b2abf6ccc09e533a22"}
Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.913481 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:44 crc kubenswrapper[4945]: E0108 23:17:44.917090 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:45.417069367 +0000 UTC m=+135.728228313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.969687 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-76rmp" podStartSLOduration=115.969669337 podStartE2EDuration="1m55.969669337s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:44.968808944 +0000 UTC m=+135.279967890" watchObservedRunningTime="2026-01-08 23:17:44.969669337 +0000 UTC m=+135.280828283"
Jan 08 23:17:44 crc kubenswrapper[4945]: I0108 23:17:44.969929 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" podStartSLOduration=115.969924494 podStartE2EDuration="1m55.969924494s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:44.93472169 +0000 UTC m=+135.245880636" watchObservedRunningTime="2026-01-08 23:17:44.969924494 +0000 UTC m=+135.281083440"
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.013362 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w8ptz"]
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.014260 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w8ptz"
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.014651 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
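
[editor's note] The pod_startup_latency_tracker entries report podStartSLOduration as the gap between podCreationTimestamp and the moment the pod was observed running; the "0001-01-01 00:00:00 +0000 UTC" pulling times are sentinels meaning no image pull contributed. The arithmetic can be checked directly against the authentication-operator entry above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the authentication-operator entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-08 23:15:49 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-08 23:17:44.968808944 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 1m55.968808944s, matching podStartSLOduration=115.969669337 up
	// to the slightly later instant at which the tracker sampled its clock.
	fmt.Println(observed.Sub(created))
}
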
Jan 08 23:17:45 crc kubenswrapper[4945]: E0108 23:17:45.015517 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:45.515497345 +0000 UTC m=+135.826656291 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.017694 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.020356 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w8ptz"]
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.022440 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jsmc9" podStartSLOduration=116.022423971 podStartE2EDuration="1m56.022423971s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:45.022115152 +0000 UTC m=+135.333274098" watchObservedRunningTime="2026-01-08 23:17:45.022423971 +0000 UTC m=+135.333582907"
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.079703 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-h66px" podStartSLOduration=116.079690266 podStartE2EDuration="1m56.079690266s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:45.078795782 +0000 UTC m=+135.389954728" watchObservedRunningTime="2026-01-08 23:17:45.079690266 +0000 UTC m=+135.390849212"
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.112452 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fcjl" podStartSLOduration=116.112433943 podStartE2EDuration="1m56.112433943s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:45.111368775 +0000 UTC m=+135.422527721" watchObservedRunningTime="2026-01-08 23:17:45.112433943 +0000 UTC m=+135.423592889"
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.120175 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.120283 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4befd52c-d042-4259-8998-533f8a61dddd-catalog-content\") pod \"community-operators-w8ptz\" (UID: \"4befd52c-d042-4259-8998-533f8a61dddd\") " pod="openshift-marketplace/community-operators-w8ptz"
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.120319 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxsl6\" (UniqueName: \"kubernetes.io/projected/4befd52c-d042-4259-8998-533f8a61dddd-kube-api-access-qxsl6\") pod \"community-operators-w8ptz\" (UID: \"4befd52c-d042-4259-8998-533f8a61dddd\") " pod="openshift-marketplace/community-operators-w8ptz"
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.120363 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4befd52c-d042-4259-8998-533f8a61dddd-utilities\") pod \"community-operators-w8ptz\" (UID: \"4befd52c-d042-4259-8998-533f8a61dddd\") " pod="openshift-marketplace/community-operators-w8ptz"
Jan 08 23:17:45 crc kubenswrapper[4945]: E0108 23:17:45.120702 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:45.620687014 +0000 UTC m=+135.931845970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.188057 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-62c5v" podStartSLOduration=116.18803696 podStartE2EDuration="1m56.18803696s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:45.152448886 +0000 UTC m=+135.463607842" watchObservedRunningTime="2026-01-08 23:17:45.18803696 +0000 UTC m=+135.499195906"
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.205798 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vgr9g"]
Need to start a new one" pod="openshift-marketplace/certified-operators-vgr9g" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.216227 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vgr9g"] Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.221627 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.221842 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce1a9c0-149a-4062-a166-84829a9dc2ec-utilities\") pod \"certified-operators-vgr9g\" (UID: \"dce1a9c0-149a-4062-a166-84829a9dc2ec\") " pod="openshift-marketplace/certified-operators-vgr9g" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.221871 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce1a9c0-149a-4062-a166-84829a9dc2ec-catalog-content\") pod \"certified-operators-vgr9g\" (UID: \"dce1a9c0-149a-4062-a166-84829a9dc2ec\") " pod="openshift-marketplace/certified-operators-vgr9g" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.221901 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4befd52c-d042-4259-8998-533f8a61dddd-catalog-content\") pod \"community-operators-w8ptz\" (UID: \"4befd52c-d042-4259-8998-533f8a61dddd\") " pod="openshift-marketplace/community-operators-w8ptz" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.221921 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxsl6\" (UniqueName: \"kubernetes.io/projected/4befd52c-d042-4259-8998-533f8a61dddd-kube-api-access-qxsl6\") pod \"community-operators-w8ptz\" (UID: \"4befd52c-d042-4259-8998-533f8a61dddd\") " pod="openshift-marketplace/community-operators-w8ptz" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.221939 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9l78\" (UniqueName: \"kubernetes.io/projected/dce1a9c0-149a-4062-a166-84829a9dc2ec-kube-api-access-w9l78\") pod \"certified-operators-vgr9g\" (UID: \"dce1a9c0-149a-4062-a166-84829a9dc2ec\") " pod="openshift-marketplace/certified-operators-vgr9g" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.221972 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4befd52c-d042-4259-8998-533f8a61dddd-utilities\") pod \"community-operators-w8ptz\" (UID: \"4befd52c-d042-4259-8998-533f8a61dddd\") " pod="openshift-marketplace/community-operators-w8ptz" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.222384 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4befd52c-d042-4259-8998-533f8a61dddd-utilities\") pod \"community-operators-w8ptz\" (UID: \"4befd52c-d042-4259-8998-533f8a61dddd\") " pod="openshift-marketplace/community-operators-w8ptz" Jan 08 23:17:45 crc kubenswrapper[4945]: E0108 23:17:45.222447 4945 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:45.722433071 +0000 UTC m=+136.033592017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.222645 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4befd52c-d042-4259-8998-533f8a61dddd-catalog-content\") pod \"community-operators-w8ptz\" (UID: \"4befd52c-d042-4259-8998-533f8a61dddd\") " pod="openshift-marketplace/community-operators-w8ptz" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.239715 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.283228 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxsl6\" (UniqueName: \"kubernetes.io/projected/4befd52c-d042-4259-8998-533f8a61dddd-kube-api-access-qxsl6\") pod \"community-operators-w8ptz\" (UID: \"4befd52c-d042-4259-8998-533f8a61dddd\") " pod="openshift-marketplace/community-operators-w8ptz" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.322385 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9l78\" (UniqueName: \"kubernetes.io/projected/dce1a9c0-149a-4062-a166-84829a9dc2ec-kube-api-access-w9l78\") pod \"certified-operators-vgr9g\" (UID: \"dce1a9c0-149a-4062-a166-84829a9dc2ec\") " pod="openshift-marketplace/certified-operators-vgr9g" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.322460 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.322512 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce1a9c0-149a-4062-a166-84829a9dc2ec-utilities\") pod \"certified-operators-vgr9g\" (UID: \"dce1a9c0-149a-4062-a166-84829a9dc2ec\") " pod="openshift-marketplace/certified-operators-vgr9g" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.322527 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce1a9c0-149a-4062-a166-84829a9dc2ec-catalog-content\") pod \"certified-operators-vgr9g\" (UID: \"dce1a9c0-149a-4062-a166-84829a9dc2ec\") " pod="openshift-marketplace/certified-operators-vgr9g" Jan 08 23:17:45 crc kubenswrapper[4945]: E0108 23:17:45.322909 4945 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:45.822892764 +0000 UTC m=+136.134051710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.322935 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce1a9c0-149a-4062-a166-84829a9dc2ec-utilities\") pod \"certified-operators-vgr9g\" (UID: \"dce1a9c0-149a-4062-a166-84829a9dc2ec\") " pod="openshift-marketplace/certified-operators-vgr9g" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.323557 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce1a9c0-149a-4062-a166-84829a9dc2ec-catalog-content\") pod \"certified-operators-vgr9g\" (UID: \"dce1a9c0-149a-4062-a166-84829a9dc2ec\") " pod="openshift-marketplace/certified-operators-vgr9g" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.392496 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9l78\" (UniqueName: \"kubernetes.io/projected/dce1a9c0-149a-4062-a166-84829a9dc2ec-kube-api-access-w9l78\") pod \"certified-operators-vgr9g\" (UID: \"dce1a9c0-149a-4062-a166-84829a9dc2ec\") " pod="openshift-marketplace/certified-operators-vgr9g" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.402563 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w72wg"] Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.403489 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w72wg" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.420785 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w72wg"] Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.423446 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.423751 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca058c9-9d1b-41b4-b057-c90327ca3628-catalog-content\") pod \"community-operators-w72wg\" (UID: \"fca058c9-9d1b-41b4-b057-c90327ca3628\") " pod="openshift-marketplace/community-operators-w72wg" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.423811 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68jkz\" (UniqueName: \"kubernetes.io/projected/fca058c9-9d1b-41b4-b057-c90327ca3628-kube-api-access-68jkz\") pod \"community-operators-w72wg\" (UID: \"fca058c9-9d1b-41b4-b057-c90327ca3628\") " pod="openshift-marketplace/community-operators-w72wg" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.423835 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca058c9-9d1b-41b4-b057-c90327ca3628-utilities\") pod \"community-operators-w72wg\" (UID: \"fca058c9-9d1b-41b4-b057-c90327ca3628\") " pod="openshift-marketplace/community-operators-w72wg" Jan 08 23:17:45 crc kubenswrapper[4945]: E0108 23:17:45.423980 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:45.923961213 +0000 UTC m=+136.235120159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.435275 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w8ptz" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.455132 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" podStartSLOduration=116.455114968 podStartE2EDuration="1m56.455114968s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:45.454344807 +0000 UTC m=+135.765503743" watchObservedRunningTime="2026-01-08 23:17:45.455114968 +0000 UTC m=+135.766273914" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.541729 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68jkz\" (UniqueName: \"kubernetes.io/projected/fca058c9-9d1b-41b4-b057-c90327ca3628-kube-api-access-68jkz\") pod \"community-operators-w72wg\" (UID: \"fca058c9-9d1b-41b4-b057-c90327ca3628\") " pod="openshift-marketplace/community-operators-w72wg" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.541776 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca058c9-9d1b-41b4-b057-c90327ca3628-utilities\") pod \"community-operators-w72wg\" (UID: \"fca058c9-9d1b-41b4-b057-c90327ca3628\") " pod="openshift-marketplace/community-operators-w72wg" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.541808 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.541900 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca058c9-9d1b-41b4-b057-c90327ca3628-catalog-content\") pod \"community-operators-w72wg\" (UID: \"fca058c9-9d1b-41b4-b057-c90327ca3628\") " pod="openshift-marketplace/community-operators-w72wg" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.542298 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca058c9-9d1b-41b4-b057-c90327ca3628-catalog-content\") pod \"community-operators-w72wg\" (UID: \"fca058c9-9d1b-41b4-b057-c90327ca3628\") " pod="openshift-marketplace/community-operators-w72wg" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.542760 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca058c9-9d1b-41b4-b057-c90327ca3628-utilities\") pod \"community-operators-w72wg\" (UID: \"fca058c9-9d1b-41b4-b057-c90327ca3628\") " pod="openshift-marketplace/community-operators-w72wg" Jan 08 23:17:45 crc kubenswrapper[4945]: E0108 23:17:45.543014 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:46.042982893 +0000 UTC m=+136.354141849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.571265 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68jkz\" (UniqueName: \"kubernetes.io/projected/fca058c9-9d1b-41b4-b057-c90327ca3628-kube-api-access-68jkz\") pod \"community-operators-w72wg\" (UID: \"fca058c9-9d1b-41b4-b057-c90327ca3628\") " pod="openshift-marketplace/community-operators-w72wg" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.571518 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vgr9g" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.629416 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dlpgj"] Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.630559 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dlpgj" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.643829 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:45 crc kubenswrapper[4945]: E0108 23:17:45.644212 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:46.144197516 +0000 UTC m=+136.455356462 (durationBeforeRetry 500ms). 
Jan 08 23:17:45 crc kubenswrapper[4945]: E0108 23:17:45.644212 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:46.144197516 +0000 UTC m=+136.455356462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.644602 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gcfhk"]
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.646421 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dlpgj"]
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.662911 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg"]
Jan 08 23:17:45 crc kubenswrapper[4945]: W0108 23:17:45.680066 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cb9903e_6625_40a8_8e02_16b8175ff9bf.slice/crio-49ae994e73287a320633fd0002c6285defc40ed2e601dcd1ee87cbdc26fcadce WatchSource:0}: Error finding container 49ae994e73287a320633fd0002c6285defc40ed2e601dcd1ee87cbdc26fcadce: Status 404 returned error can't find the container with id 49ae994e73287a320633fd0002c6285defc40ed2e601dcd1ee87cbdc26fcadce
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.687768 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-pmfct"]
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.707118 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5d6z8"]
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.715075 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 08 23:17:45 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld
Jan 08 23:17:45 crc kubenswrapper[4945]: [+]process-running ok
Jan 08 23:17:45 crc kubenswrapper[4945]: healthz check failed
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.715367 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 08 23:17:45 crc kubenswrapper[4945]: W0108 23:17:45.722579 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode41e28f5_d0c1_4a39_ae5e_b85d00c488a4.slice/crio-5b2e823ee750618f2b2c14ace72f9763689f7ce25ab036ea05f514979a8d459e WatchSource:0}: Error finding container 5b2e823ee750618f2b2c14ace72f9763689f7ce25ab036ea05f514979a8d459e: Status 404 returned error can't find the container with id 5b2e823ee750618f2b2c14ace72f9763689f7ce25ab036ea05f514979a8d459e
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.724020 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jkvh2"]
Jan 08 23:17:45 crc kubenswrapper[4945]: W0108 23:17:45.734955 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a8d3a02_1b4e_4258_91d0_1d1b4caeb852.slice/crio-05adf2052c5d07cfd9af93f25e5ca0ff1d66858ca264b422eaa4fde445e66e4f WatchSource:0}: Error finding container 05adf2052c5d07cfd9af93f25e5ca0ff1d66858ca264b422eaa4fde445e66e4f: Status 404 returned error can't find the container with id 05adf2052c5d07cfd9af93f25e5ca0ff1d66858ca264b422eaa4fde445e66e4f
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.749535 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.749596 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069f874e-b727-46fb-85e6-b36d4921412f-catalog-content\") pod \"certified-operators-dlpgj\" (UID: \"069f874e-b727-46fb-85e6-b36d4921412f\") " pod="openshift-marketplace/certified-operators-dlpgj"
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.749692 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n95dc\" (UniqueName: \"kubernetes.io/projected/069f874e-b727-46fb-85e6-b36d4921412f-kube-api-access-n95dc\") pod \"certified-operators-dlpgj\" (UID: \"069f874e-b727-46fb-85e6-b36d4921412f\") " pod="openshift-marketplace/certified-operators-dlpgj"
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.749719 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069f874e-b727-46fb-85e6-b36d4921412f-utilities\") pod \"certified-operators-dlpgj\" (UID: \"069f874e-b727-46fb-85e6-b36d4921412f\") " pod="openshift-marketplace/certified-operators-dlpgj"
Jan 08 23:17:45 crc kubenswrapper[4945]: E0108 23:17:45.750031 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:46.250016172 +0000 UTC m=+136.561175128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:45 crc kubenswrapper[4945]: W0108 23:17:45.754757 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0c68805_16fc_47d5_8ca8_7ebd2e59fcb9.slice/crio-9a5db0bbbd940f01d85ee9466115c8c43e1de600f030d92272323d752f34c3a0 WatchSource:0}: Error finding container 9a5db0bbbd940f01d85ee9466115c8c43e1de600f030d92272323d752f34c3a0: Status 404 returned error can't find the container with id 9a5db0bbbd940f01d85ee9466115c8c43e1de600f030d92272323d752f34c3a0
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.775592 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr"]
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.813204 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w72wg"
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.813903 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qg7b7"]
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.837035 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw"]
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.854649 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.854958 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n95dc\" (UniqueName: \"kubernetes.io/projected/069f874e-b727-46fb-85e6-b36d4921412f-kube-api-access-n95dc\") pod \"certified-operators-dlpgj\" (UID: \"069f874e-b727-46fb-85e6-b36d4921412f\") " pod="openshift-marketplace/certified-operators-dlpgj"
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.855059 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069f874e-b727-46fb-85e6-b36d4921412f-utilities\") pod \"certified-operators-dlpgj\" (UID: \"069f874e-b727-46fb-85e6-b36d4921412f\") " pod="openshift-marketplace/certified-operators-dlpgj"
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.855188 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069f874e-b727-46fb-85e6-b36d4921412f-catalog-content\") pod \"certified-operators-dlpgj\" (UID: \"069f874e-b727-46fb-85e6-b36d4921412f\") " pod="openshift-marketplace/certified-operators-dlpgj"
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:46.355395276 +0000 UTC m=+136.666554222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.855953 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069f874e-b727-46fb-85e6-b36d4921412f-utilities\") pod \"certified-operators-dlpgj\" (UID: \"069f874e-b727-46fb-85e6-b36d4921412f\") " pod="openshift-marketplace/certified-operators-dlpgj" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.856278 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069f874e-b727-46fb-85e6-b36d4921412f-catalog-content\") pod \"certified-operators-dlpgj\" (UID: \"069f874e-b727-46fb-85e6-b36d4921412f\") " pod="openshift-marketplace/certified-operators-dlpgj" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.891481 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-lmmph" podStartSLOduration=116.891464433 podStartE2EDuration="1m56.891464433s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:45.889744457 +0000 UTC m=+136.200903403" watchObservedRunningTime="2026-01-08 23:17:45.891464433 +0000 UTC m=+136.202623379" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.905801 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n95dc\" (UniqueName: \"kubernetes.io/projected/069f874e-b727-46fb-85e6-b36d4921412f-kube-api-access-n95dc\") pod \"certified-operators-dlpgj\" (UID: \"069f874e-b727-46fb-85e6-b36d4921412f\") " pod="openshift-marketplace/certified-operators-dlpgj" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.905847 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-79krd"] Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.906980 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql"] Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.910732 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jkvh2" event={"ID":"b0c68805-16fc-47d5-8ca8-7ebd2e59fcb9","Type":"ContainerStarted","Data":"9a5db0bbbd940f01d85ee9466115c8c43e1de600f030d92272323d752f34c3a0"} Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.932597 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr" event={"ID":"a05cb0ce-c6e4-4ef9-b32e-8910443fc316","Type":"ContainerStarted","Data":"6276b97ad2f1017911b2577b53e6ac389631e0a34958bd7100850972a47c776e"} Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.934024 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wq66q" podStartSLOduration=116.934014203 podStartE2EDuration="1m56.934014203s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:45.933311724 +0000 UTC m=+136.244470670" watchObservedRunningTime="2026-01-08 23:17:45.934014203 +0000 UTC m=+136.245173149" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.942735 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-btrhh" event={"ID":"9ea7f797-df27-4912-83e4-efe654ea3a2a","Type":"ContainerStarted","Data":"3b09dd95446024e260d6d2b7951607bc76db4f7a90a5922b16512e7d424a19c0"} Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.945973 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-n2sv5" event={"ID":"297cbd4b-37f7-4ab3-82ea-da1872a05ef1","Type":"ContainerStarted","Data":"393a37d9d097d48ca480e21c5ee49be99d9a19d7880f39d10abbfdbb74dff9a8"} Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.946612 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-n2sv5" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.952117 4945 patch_prober.go:28] interesting pod/downloads-7954f5f757-n2sv5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.952169 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-n2sv5" podUID="297cbd4b-37f7-4ab3-82ea-da1872a05ef1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.952549 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5d6z8" event={"ID":"e41e28f5-d0c1-4a39-ae5e-b85d00c488a4","Type":"ContainerStarted","Data":"5b2e823ee750618f2b2c14ace72f9763689f7ce25ab036ea05f514979a8d459e"} Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.955242 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg" event={"ID":"2a8d3a02-1b4e-4258-91d0-1d1b4caeb852","Type":"ContainerStarted","Data":"05adf2052c5d07cfd9af93f25e5ca0ff1d66858ca264b422eaa4fde445e66e4f"} Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.956142 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:45 crc kubenswrapper[4945]: E0108 23:17:45.956801 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:46.456789704 +0000 UTC m=+136.767948650 (durationBeforeRetry 500ms). 
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.966327 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwk56" event={"ID":"010d6ddb-7351-4f8b-9d3c-f74436a4bb0d","Type":"ContainerStarted","Data":"cefa127dde4a2d26ab2cdec9ec8f42561d0d52a61cbc7777ea20f4bb10a28047"}
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.983307 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dlpgj"
Jan 08 23:17:45 crc kubenswrapper[4945]: I0108 23:17:45.991581 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pmfct" event={"ID":"490f6a3a-21e2-4264-8a92-75202ba3db64","Type":"ContainerStarted","Data":"46fe5819e68033243c5f61ea9954be646947bd6f99c5f158ddbdd1cfaef47667"}
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.060934 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:46 crc kubenswrapper[4945]: E0108 23:17:46.061942 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:46.561927032 +0000 UTC m=+136.873085978 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.067078 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" podStartSLOduration=117.067061179 podStartE2EDuration="1m57.067061179s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:46.064545022 +0000 UTC m=+136.375703968" watchObservedRunningTime="2026-01-08 23:17:46.067061179 +0000 UTC m=+136.378220125"
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.067888 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gcfhk" event={"ID":"2cb9903e-6625-40a8-8e02-16b8175ff9bf","Type":"ContainerStarted","Data":"49ae994e73287a320633fd0002c6285defc40ed2e601dcd1ee87cbdc26fcadce"}
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.111499 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" event={"ID":"74e7ac98-1603-4ed9-9306-735d271e5142","Type":"ContainerStarted","Data":"95c4879ea1f0b4bde51d21bcdd7ec645e9da51345543bd840d52f7f43e4d084d"}
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.131424 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kkxxb"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.162661 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:46 crc kubenswrapper[4945]: E0108 23:17:46.164039 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:46.664009188 +0000 UTC m=+136.975168134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.170007 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-klv9q"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.170295 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hj428"
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.173503 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pzbw5"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.212644 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9zgl9"]
Jan 08 23:17:46 crc kubenswrapper[4945]: W0108 23:17:46.229056 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b92f03d_b5af_485d_b1a6_6e9548b7c8ba.slice/crio-ee23ab63e001458379c5a585104708ca2f9af6d0106d45dcfd3f8bfda37eae4c WatchSource:0}: Error finding container ee23ab63e001458379c5a585104708ca2f9af6d0106d45dcfd3f8bfda37eae4c: Status 404 returned error can't find the container with id ee23ab63e001458379c5a585104708ca2f9af6d0106d45dcfd3f8bfda37eae4c
Jan 08 23:17:46 crc kubenswrapper[4945]: W0108 23:17:46.268289 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod099b4860_bccd_462a_8a0e_f28604353408.slice/crio-2f20ba3b50a22fc922e8a34a51600f7b704aa50555e2f97937041dbb30538904 WatchSource:0}: Error finding container 2f20ba3b50a22fc922e8a34a51600f7b704aa50555e2f97937041dbb30538904: Status 404 returned error can't find the container with id 2f20ba3b50a22fc922e8a34a51600f7b704aa50555e2f97937041dbb30538904
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.268766 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:46 crc kubenswrapper[4945]: E0108 23:17:46.269578 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:46.769565437 +0000 UTC m=+137.080724383 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.366584 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qn985"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.370775 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:46 crc kubenswrapper[4945]: E0108 23:17:46.371215 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:46.871203821 +0000 UTC m=+137.182362767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.387341 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7hvvw"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.426523 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.454371 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hc8bs"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.472319 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:46 crc kubenswrapper[4945]: E0108 23:17:46.473151 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:46.973126573 +0000 UTC m=+137.284285519 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
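Each failed volume operation above is parked by nestedpendingoperations with a durationBeforeRetry (500ms here, the initial backoff), and any reconciler pass that comes back before the deadline logs "No retries permitted until ..."; that is why the same UnmountVolume/MountVolume pair for this PVC reappears roughly twice per second. A rough sketch of that gate, simplified from the pattern the log shows (the real code also serializes operations per volume; the growth and cap values below are assumptions):

```go
package main

import (
	"fmt"
	"time"
)

// retryGate parks a named operation until lastFailure+backoff has passed,
// loosely mirroring the durationBeforeRetry behavior in the log above.
type retryGate struct {
	deadline map[string]time.Time
	backoff  map[string]time.Duration
}

func newRetryGate() *retryGate {
	return &retryGate{deadline: map[string]time.Time{}, backoff: map[string]time.Duration{}}
}

// Fail records a failure and sets the next permitted retry time.
func (g *retryGate) Fail(key string) {
	b := g.backoff[key]
	if b == 0 {
		b = 500 * time.Millisecond // initial backoff, as seen in the log
	} else if b < 2*time.Minute {
		b *= 2 // exponential growth with a cap (assumed values)
	}
	g.backoff[key] = b
	g.deadline[key] = time.Now().Add(b)
}

// TryStart reports whether a retry is permitted yet.
func (g *retryGate) TryStart(key string) error {
	if until := g.deadline[key]; time.Now().Before(until) {
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
			until.Format(time.RFC3339Nano), g.backoff[key])
	}
	return nil
}

func main() {
	g := newRetryGate()
	const vol = "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db"
	g.Fail(vol)
	fmt.Println(g.TryStart(vol)) // too early: parked for ~500ms
	time.Sleep(600 * time.Millisecond)
	fmt.Println(g.TryStart(vol)) // nil: retry allowed now
}
```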
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.484535 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8ldnl"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.489619 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vxpmj"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.504697 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cw648"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.536532 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w72wg"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.536571 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79m2f"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.545535 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vgr9g"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.549641 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w8ptz"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.550478 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.566183 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n"
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.567792 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n"
Jan 08 23:17:46 crc kubenswrapper[4945]: W0108 23:17:46.574121 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c69b35b_9333_43fd_8af9_1a046fa97995.slice/crio-67ba0694c343e5090f7cd20b2db7f1e238366df8932c85e24d3050907c471a46 WatchSource:0}: Error finding container 67ba0694c343e5090f7cd20b2db7f1e238366df8932c85e24d3050907c471a46: Status 404 returned error can't find the container with id 67ba0694c343e5090f7cd20b2db7f1e238366df8932c85e24d3050907c471a46
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.577732 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.579153 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zw5q5"]
Jan 08 23:17:46 crc kubenswrapper[4945]: E0108 23:17:46.585592 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:47.085576457 +0000 UTC m=+137.396735403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.587045 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.588128 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5dvr4"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.590078 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hj428" podStartSLOduration=117.590067137 podStartE2EDuration="1m57.590067137s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:46.551353739 +0000 UTC m=+136.862512685" watchObservedRunningTime="2026-01-08 23:17:46.590067137 +0000 UTC m=+136.901226083"
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.620723 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n"
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.678918 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:46 crc kubenswrapper[4945]: E0108 23:17:46.679436 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:47.179420272 +0000 UTC m=+137.490579218 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.707796 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-fwk56" podStartSLOduration=117.707777052 podStartE2EDuration="1m57.707777052s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:46.663496215 +0000 UTC m=+136.974655151" watchObservedRunningTime="2026-01-08 23:17:46.707777052 +0000 UTC m=+137.018935998"
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.708294 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" podStartSLOduration=117.708289506 podStartE2EDuration="1m57.708289506s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:46.706116587 +0000 UTC m=+137.017275533" watchObservedRunningTime="2026-01-08 23:17:46.708289506 +0000 UTC m=+137.019448452"
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.713611 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 08 23:17:46 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld
Jan 08 23:17:46 crc kubenswrapper[4945]: [+]process-running ok
Jan 08 23:17:46 crc kubenswrapper[4945]: healthz check failed
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.713663 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 08 23:17:46 crc kubenswrapper[4945]: W0108 23:17:46.715625 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27ab75bd_1a05_42c5_b6ee_0917bdc88c6b.slice/crio-497d4432da00195f3e5df993e63ee06deacd00fba591bf491fe86e751d4075de WatchSource:0}: Error finding container 497d4432da00195f3e5df993e63ee06deacd00fba591bf491fe86e751d4075de: Status 404 returned error can't find the container with id 497d4432da00195f3e5df993e63ee06deacd00fba591bf491fe86e751d4075de
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.740118 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-btrhh" podStartSLOduration=5.740099628 podStartE2EDuration="5.740099628s" podCreationTimestamp="2026-01-08 23:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:46.726797362 +0000 UTC m=+137.037956308" watchObservedRunningTime="2026-01-08 23:17:46.740099628 +0000 UTC m=+137.051258574"
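The router's startup probe output above is the usual aggregated healthz report: one [-] or [+] line per sub-check, a trailing "healthz check failed", and an HTTP 500 whenever any check is down, which is exactly what the prober records as "HTTP probe failed with statuscode: 500". A hedged sketch of a handler producing that shape (check names taken from the log; the actual router wiring differs, and map iteration makes the line order vary):

```go
package main

import (
	"fmt"
	"net/http"
)

// healthz aggregates named sub-checks into the [-]/[+] report format seen
// in the probe output above; any failing check yields HTTP 500.
func healthz(checks map[string]func() error) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for name, check := range checks {
			if err := check(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // statuscode: 500, as probed
			fmt.Fprint(w, body+"healthz check failed\n")
			return
		}
		fmt.Fprint(w, body+"ok\n")
	}
}

func main() {
	http.HandleFunc("/healthz", healthz(map[string]func() error{
		"backend-http":    func() error { return fmt.Errorf("not ready") }, // failing, as in the log
		"has-synced":      func() error { return fmt.Errorf("not ready") }, // failing, as in the log
		"process-running": func() error { return nil },                     // the one passing check
	}))
	http.ListenAndServe(":8080", nil)
}
```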
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.763723 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-n2sv5" podStartSLOduration=117.763706621 podStartE2EDuration="1m57.763706621s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:46.763452584 +0000 UTC m=+137.074611530" watchObservedRunningTime="2026-01-08 23:17:46.763706621 +0000 UTC m=+137.074865567"
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.779914 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:46 crc kubenswrapper[4945]: E0108 23:17:46.780213 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:47.280202083 +0000 UTC m=+137.591361029 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.824507 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dlpgj"]
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.882816 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:46 crc kubenswrapper[4945]: E0108 23:17:46.883008 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:47.382974708 +0000 UTC m=+137.694133654 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.883438 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:46 crc kubenswrapper[4945]: E0108 23:17:46.883978 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:47.383949904 +0000 UTC m=+137.695108840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:46 crc kubenswrapper[4945]: W0108 23:17:46.949436 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod069f874e_b727_46fb_85e6_b36d4921412f.slice/crio-eca186ba8a1959c55d70110931ca514ef6bf349b835da23fcb0113af3452c58c WatchSource:0}: Error finding container eca186ba8a1959c55d70110931ca514ef6bf349b835da23fcb0113af3452c58c: Status 404 returned error can't find the container with id eca186ba8a1959c55d70110931ca514ef6bf349b835da23fcb0113af3452c58c
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.984379 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:46 crc kubenswrapper[4945]: E0108 23:17:46.984531 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:47.484515419 +0000 UTC m=+137.795674365 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:46 crc kubenswrapper[4945]: I0108 23:17:46.984608 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:46 crc kubenswrapper[4945]: E0108 23:17:46.985371 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:47.485356022 +0000 UTC m=+137.796514968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.086244 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:47 crc kubenswrapper[4945]: E0108 23:17:47.086436 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:47.58641392 +0000 UTC m=+137.897572866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.116200 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-klv9q" event={"ID":"7b92f03d-b5af-485d-b1a6-6e9548b7c8ba","Type":"ContainerStarted","Data":"ee23ab63e001458379c5a585104708ca2f9af6d0106d45dcfd3f8bfda37eae4c"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.118172 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg" event={"ID":"2a8d3a02-1b4e-4258-91d0-1d1b4caeb852","Type":"ContainerStarted","Data":"d803600e8b6797f6fb94fd556fd90101f9d17c7c8e1cca8c3432f225c20014fb"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.118316 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg"
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.122954 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql" event={"ID":"e486d4f3-46ec-4d4f-9c66-adc95459c76a","Type":"ContainerStarted","Data":"c647fbd760b484f4b5b934ede1e5e24db9cd0b1e63da6fef638e0485b2188848"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.123011 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql" event={"ID":"e486d4f3-46ec-4d4f-9c66-adc95459c76a","Type":"ContainerStarted","Data":"9b451f899e526e18b27287b6a6121c77d6f6586f99474e20e0194bf345ed49ed"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.128577 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkxxb" event={"ID":"e6710298-7872-4178-b6b8-730c3eddb965","Type":"ContainerStarted","Data":"ac34809250add2a92c4ae4dcf7203e1aab9bdc620d03d0b8e4f87d9396ebd1df"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.128643 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkxxb" event={"ID":"e6710298-7872-4178-b6b8-730c3eddb965","Type":"ContainerStarted","Data":"9b34fa23b6540a0ff99896e73a0762227ca0bc3c4a0a51c73f31f767dd6df089"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.137843 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg" podStartSLOduration=118.137828648 podStartE2EDuration="1m58.137828648s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:47.137263623 +0000 UTC m=+137.448422579" watchObservedRunningTime="2026-01-08 23:17:47.137828648 +0000 UTC m=+137.448987614"
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.140385 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ptz" event={"ID":"4befd52c-d042-4259-8998-533f8a61dddd","Type":"ContainerStarted","Data":"7049bc4e75ed41544af77257d8f2b99ceddd65a320832fe99134e4a49ce8caae"}
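The "SyncLoop (PLEG): event for pod ... ContainerStarted" entries above come from the pod lifecycle event generator, which periodically relists container runtime state and turns observed transitions into events the kubelet sync loop consumes; each Data field is the container or sandbox ID that changed. A toy sketch of that diffing step, using its own simplified types rather than the kubelet's:

```go
package main

import "fmt"

// LifecycleEvent is a simplified stand-in for a PLEG event: a pod UID,
// an event type, and the container ID that changed.
type LifecycleEvent struct {
	PodUID, Type, Data string
}

// diff compares two relist snapshots (pod UID -> set of running container
// IDs) and emits ContainerStarted events for anything newly running.
func diff(prev, cur map[string]map[string]bool) []LifecycleEvent {
	var events []LifecycleEvent
	for uid, containers := range cur {
		for id := range containers {
			if !prev[uid][id] { // safe even when prev[uid] is nil
				events = append(events, LifecycleEvent{uid, "ContainerStarted", id})
			}
		}
	}
	return events
}

func main() {
	prev := map[string]map[string]bool{"7b92f03d": {}}
	cur := map[string]map[string]bool{"7b92f03d": {"ee23ab63e001": true}}
	for _, e := range diff(prev, cur) {
		// The kubelet logs the analogous transition as:
		// "SyncLoop (PLEG): event for pod" ... Type:"ContainerStarted"
		fmt.Printf("event for pod %s: %s %s\n", e.PodUID, e.Type, e.Data)
	}
}
```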
event={"ID":"4befd52c-d042-4259-8998-533f8a61dddd","Type":"ContainerStarted","Data":"7049bc4e75ed41544af77257d8f2b99ceddd65a320832fe99134e4a49ce8caae"} Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.164507 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dgtql" podStartSLOduration=118.164483963 podStartE2EDuration="1m58.164483963s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:47.151910036 +0000 UTC m=+137.463069002" watchObservedRunningTime="2026-01-08 23:17:47.164483963 +0000 UTC m=+137.475642909" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.164660 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt" event={"ID":"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b","Type":"ContainerStarted","Data":"497d4432da00195f3e5df993e63ee06deacd00fba591bf491fe86e751d4075de"} Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.166673 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qn985" event={"ID":"0677ae19-e425-485b-9206-98c9ad11aea8","Type":"ContainerStarted","Data":"c4656416702bce2312408e8e3b83d826a46f6fdf86fcb5cc906b0a62a157ca50"} Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.170815 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8ldnl" event={"ID":"9c69b35b-9333-43fd-8af9-1a046fa97995","Type":"ContainerStarted","Data":"67ba0694c343e5090f7cd20b2db7f1e238366df8932c85e24d3050907c471a46"} Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.172340 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gcfhk" event={"ID":"2cb9903e-6625-40a8-8e02-16b8175ff9bf","Type":"ContainerStarted","Data":"a694343836141048a81bed56ad646b8817c48559389c3be8cafc111fa767879c"} Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.174383 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9zgl9" event={"ID":"099b4860-bccd-462a-8a0e-f28604353408","Type":"ContainerStarted","Data":"2f20ba3b50a22fc922e8a34a51600f7b704aa50555e2f97937041dbb30538904"} Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.175377 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zw5q5" event={"ID":"1489254c-ff30-49b3-b896-b823f1ae8559","Type":"ContainerStarted","Data":"ed2bbe4fbd6827217f5e66e75da4bfa4d3603556cd70746f142005519b708069"} Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.176845 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr" event={"ID":"a05cb0ce-c6e4-4ef9-b32e-8910443fc316","Type":"ContainerStarted","Data":"4860cf5a4c9a45f2b584d3bc91245d533f50c15e7b8503646494f034906db115"} Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.177055 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.181301 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cw648" event={"ID":"dfefef62-2fe7-47c6-9a57-dadd3fd6705d","Type":"ContainerStarted","Data":"552662ffd9d4c8906d8aa191b8c08060d0befa7863310d07479892dd5b325307"} Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.181846 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.186816 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gcfhk" podStartSLOduration=118.186797861 podStartE2EDuration="1m58.186797861s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:47.186429271 +0000 UTC m=+137.497588217" watchObservedRunningTime="2026-01-08 23:17:47.186797861 +0000 UTC m=+137.497956807" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.190741 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.193379 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7hvvw" event={"ID":"f910f213-8fbe-44fe-888a-aea783dcd0ec","Type":"ContainerStarted","Data":"7aa8defce7d1445e27c96d5cd6170858c0d85967e66e5529c08ac29d23c2e72d"} Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.202414 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r2hvk"] Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.203488 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2hvk" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.204494 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw" event={"ID":"def1ce3f-02ba-4056-80f8-b0ba00fa64b2","Type":"ContainerStarted","Data":"1021d432266675677f2419ce526eae272f9af345bb721e55834c3bed2f59f3e3"} Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.204541 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw" event={"ID":"def1ce3f-02ba-4056-80f8-b0ba00fa64b2","Type":"ContainerStarted","Data":"97728b8c203e342dc120b7cd3699615e7feb62571786ea103a443c88000b312e"} Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.207327 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 08 23:17:47 crc kubenswrapper[4945]: E0108 23:17:47.210282 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:47.710256849 +0000 UTC m=+138.021415795 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.224513 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg" event={"ID":"6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d","Type":"ContainerStarted","Data":"18958784edb6184d69085868b79452cf816a8e027f67af2677ff8696bcf948b1"} Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.227277 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-kd4xr" podStartSLOduration=118.227209804 podStartE2EDuration="1m58.227209804s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:47.212099709 +0000 UTC m=+137.523258655" watchObservedRunningTime="2026-01-08 23:17:47.227209804 +0000 UTC m=+137.538368750" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.228552 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2hvk"] Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.237532 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pmfct" event={"ID":"490f6a3a-21e2-4264-8a92-75202ba3db64","Type":"ContainerStarted","Data":"c58b8617832b1b3bf09789f40bf2a968daf251c0ae96b79d3f162e946481a8e5"} Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.275878 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-glvqg" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.285203 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-pmfct" podStartSLOduration=118.285184028 podStartE2EDuration="1m58.285184028s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:47.272194819 +0000 UTC m=+137.583353765" watchObservedRunningTime="2026-01-08 23:17:47.285184028 +0000 UTC m=+137.596342974" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.292521 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.293092 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3238595-843e-4c3a-9e67-538483ac4b20-catalog-content\") pod \"redhat-marketplace-r2hvk\" (UID: \"b3238595-843e-4c3a-9e67-538483ac4b20\") " pod="openshift-marketplace/redhat-marketplace-r2hvk" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 
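The "Observed pod startup duration" entries are the kubelet's startup-SLO bookkeeping: podStartSLOduration measures pod creation to first observed running state, discounting time spent pulling images, while podStartE2EDuration is the raw wall-clock span. In this boot log firstStartedPulling/lastFinishedPulling are zero timestamps, so the two figures match (~118s, dominated by the node coming up). A small sketch of that arithmetic, with semantics inferred from the fields logged above:

```go
package main

import (
	"fmt"
	"time"
)

// startupDurations reproduces the two figures in the log: end-to-end time
// from pod creation to observed running, and an SLO view that excludes
// image-pull time when pull timestamps are recorded (assumed semantics).
func startupDurations(created, observedRunning, firstPull, lastPull time.Time) (slo, e2e time.Duration) {
	e2e = observedRunning.Sub(created)
	slo = e2e
	if !firstPull.IsZero() && !lastPull.IsZero() {
		slo -= lastPull.Sub(firstPull) // discount time spent pulling images
	}
	return slo, e2e
}

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-01-08T23:15:49Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-08T23:17:47.137263623Z")
	// Zero pull timestamps, as in the entries above: SLO equals E2E.
	slo, e2e := startupDurations(created, running, time.Time{}, time.Time{})
	fmt.Printf("podStartSLOduration=%v podStartE2EDuration=%q\n", slo.Seconds(), e2e.String())
}
```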
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.293260 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3238595-843e-4c3a-9e67-538483ac4b20-utilities\") pod \"redhat-marketplace-r2hvk\" (UID: \"b3238595-843e-4c3a-9e67-538483ac4b20\") " pod="openshift-marketplace/redhat-marketplace-r2hvk"
Jan 08 23:17:47 crc kubenswrapper[4945]: E0108 23:17:47.294091 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:47.794073556 +0000 UTC m=+138.105232502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.301653 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-79krd" event={"ID":"de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d","Type":"ContainerStarted","Data":"6c8983eae08917b77567aaa837f9d7b04e796e1097519e951aa857178da427d0"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.301686 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-79krd" event={"ID":"de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d","Type":"ContainerStarted","Data":"0980748bf8a348d3d2523b1ec0eaf4f3e596fafda62c2042734036be400b5ffd"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.331884 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" event={"ID":"b54d9295-2900-4e23-b8f0-a815fc8e9b7d","Type":"ContainerStarted","Data":"14f7dbf300eeb1551124e2914218d5d28bcc774441f93abe5f2515b50dec95c6"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.344457 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" event={"ID":"93e10fcb-3cb5-454a-bcd1-1eae918e0601","Type":"ContainerStarted","Data":"2a52f99dacebd2b68164816ad6dfd5f886d4ab29ff4fa94ddbeb729a759c9832"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.369707 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" event={"ID":"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5","Type":"ContainerStarted","Data":"98499acf0c2e7106aad158a7fcc6959be50a33fede5deb7919cb16e1533543ab"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.393921 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3238595-843e-4c3a-9e67-538483ac4b20-catalog-content\") pod \"redhat-marketplace-r2hvk\" (UID: \"b3238595-843e-4c3a-9e67-538483ac4b20\") " pod="openshift-marketplace/redhat-marketplace-r2hvk"
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.393968 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t2pl\" (UniqueName: \"kubernetes.io/projected/b3238595-843e-4c3a-9e67-538483ac4b20-kube-api-access-4t2pl\") pod \"redhat-marketplace-r2hvk\" (UID: \"b3238595-843e-4c3a-9e67-538483ac4b20\") " pod="openshift-marketplace/redhat-marketplace-r2hvk"
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.394070 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3238595-843e-4c3a-9e67-538483ac4b20-utilities\") pod \"redhat-marketplace-r2hvk\" (UID: \"b3238595-843e-4c3a-9e67-538483ac4b20\") " pod="openshift-marketplace/redhat-marketplace-r2hvk"
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.394102 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:47 crc kubenswrapper[4945]: E0108 23:17:47.394350 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:47.894339353 +0000 UTC m=+138.205498299 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.394661 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3238595-843e-4c3a-9e67-538483ac4b20-catalog-content\") pod \"redhat-marketplace-r2hvk\" (UID: \"b3238595-843e-4c3a-9e67-538483ac4b20\") " pod="openshift-marketplace/redhat-marketplace-r2hvk"
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.395074 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3238595-843e-4c3a-9e67-538483ac4b20-utilities\") pod \"redhat-marketplace-r2hvk\" (UID: \"b3238595-843e-4c3a-9e67-538483ac4b20\") " pod="openshift-marketplace/redhat-marketplace-r2hvk"
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.412606 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w72wg" event={"ID":"fca058c9-9d1b-41b4-b057-c90327ca3628","Type":"ContainerStarted","Data":"75a3c03c86f26d6cc3a3b44ecaaf717d508fc50a8e41d97969e438f3cfa9dc75"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.425615 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jkvh2" event={"ID":"b0c68805-16fc-47d5-8ca8-7ebd2e59fcb9","Type":"ContainerStarted","Data":"232ec2a929fe0ba40e80e1ec2f6187e2f0208cbcd8a6d5864eba1fc87faed42b"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.429741 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5dvr4" event={"ID":"fc9d9307-1c9b-4162-9ffb-2493af6c4b54","Type":"ContainerStarted","Data":"4b36ac9eefe29df801ca6c4651e5e1819535faea0514fb5aa9cc333e9a0ea9cb"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.451743 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t2pl\" (UniqueName: \"kubernetes.io/projected/b3238595-843e-4c3a-9e67-538483ac4b20-kube-api-access-4t2pl\") pod \"redhat-marketplace-r2hvk\" (UID: \"b3238595-843e-4c3a-9e67-538483ac4b20\") " pod="openshift-marketplace/redhat-marketplace-r2hvk"
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.453795 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dlpgj" event={"ID":"069f874e-b727-46fb-85e6-b36d4921412f","Type":"ContainerStarted","Data":"eca186ba8a1959c55d70110931ca514ef6bf349b835da23fcb0113af3452c58c"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.466248 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-jkvh2" podStartSLOduration=118.46623055 podStartE2EDuration="1m58.46623055s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:47.464945206 +0000 UTC m=+137.776104162" watchObservedRunningTime="2026-01-08 23:17:47.46623055 +0000 UTC m=+137.777389496"
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.508780 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.508941 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5d6z8" event={"ID":"e41e28f5-d0c1-4a39-ae5e-b85d00c488a4","Type":"ContainerStarted","Data":"63714b9ef1774feb13a49f505eace21e2bd9500f932d435e261b5abcda0c5833"}
Jan 08 23:17:47 crc kubenswrapper[4945]: E0108 23:17:47.509754 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:48.009740386 +0000 UTC m=+138.320899332 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.540090 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hc8bs" event={"ID":"6391ef36-ef23-42f8-ae03-7610ede1f819","Type":"ContainerStarted","Data":"3c24feed018c518b5b144715d1424d6768d1880e3a94f88a2d03a5eb6507c1ae"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.559855 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r" event={"ID":"8a8d494a-4ed4-4707-9354-98912697989d","Type":"ContainerStarted","Data":"439da5b3a662a7f819147f7df777df63201cad95a6613a5f093832d20dd292a4"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.586037 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgr9g" event={"ID":"dce1a9c0-149a-4062-a166-84829a9dc2ec","Type":"ContainerStarted","Data":"28a2300619c10a41f003430aa291438e87f10c0db0c3c0372f8345e0911b4b59"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.593526 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.597458 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2hvk"
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.599630 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5d6z8" podStartSLOduration=118.599615095 podStartE2EDuration="1m58.599615095s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:47.539014821 +0000 UTC m=+137.850173767" watchObservedRunningTime="2026-01-08 23:17:47.599615095 +0000 UTC m=+137.910774041"
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.609297 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79m2f" event={"ID":"662bb234-dff9-4a44-9432-c2f864195ce0","Type":"ContainerStarted","Data":"bbc7755fd973dc1bec0453d7a46987737287254ef64c63e899ef1b316a78929f"}
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.609580 4945 patch_prober.go:28] interesting pod/downloads-7954f5f757-n2sv5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.609622 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-n2sv5" podUID="297cbd4b-37f7-4ab3-82ea-da1872a05ef1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.610665
4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:47 crc kubenswrapper[4945]: E0108 23:17:47.610980 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:48.110968029 +0000 UTC m=+138.422126975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.615733 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bfqfk"] Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.617605 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bfqfk" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.628986 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.629369 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.629582 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfqfk"] Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.633401 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8tv9n" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.650554 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79m2f" podStartSLOduration=118.65053624 podStartE2EDuration="1m58.65053624s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:47.641367624 +0000 UTC m=+137.952526570" watchObservedRunningTime="2026-01-08 23:17:47.65053624 +0000 UTC m=+137.961695186" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.662280 4945 patch_prober.go:28] interesting pod/apiserver-76f77b778f-ppfqk container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 08 23:17:47 crc kubenswrapper[4945]: [+]log ok Jan 08 23:17:47 crc kubenswrapper[4945]: [+]etcd ok Jan 08 23:17:47 crc kubenswrapper[4945]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 08 23:17:47 crc kubenswrapper[4945]: [+]poststarthook/generic-apiserver-start-informers ok Jan 08 23:17:47 crc kubenswrapper[4945]: 
[+]poststarthook/max-in-flight-filter ok Jan 08 23:17:47 crc kubenswrapper[4945]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 08 23:17:47 crc kubenswrapper[4945]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 08 23:17:47 crc kubenswrapper[4945]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 08 23:17:47 crc kubenswrapper[4945]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 08 23:17:47 crc kubenswrapper[4945]: [+]poststarthook/project.openshift.io-projectcache ok Jan 08 23:17:47 crc kubenswrapper[4945]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 08 23:17:47 crc kubenswrapper[4945]: [+]poststarthook/openshift.io-startinformers ok Jan 08 23:17:47 crc kubenswrapper[4945]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 08 23:17:47 crc kubenswrapper[4945]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 08 23:17:47 crc kubenswrapper[4945]: livez check failed Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.662355 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" podUID="74e7ac98-1603-4ed9-9306-735d271e5142" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.709409 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 08 23:17:47 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld Jan 08 23:17:47 crc kubenswrapper[4945]: [+]process-running ok Jan 08 23:17:47 crc kubenswrapper[4945]: healthz check failed Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.709475 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.712051 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.712527 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f56ef6-cc93-457b-8f55-e8e74123159e-utilities\") pod \"redhat-marketplace-bfqfk\" (UID: \"77f56ef6-cc93-457b-8f55-e8e74123159e\") " pod="openshift-marketplace/redhat-marketplace-bfqfk" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.712569 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdbgq\" (UniqueName: \"kubernetes.io/projected/77f56ef6-cc93-457b-8f55-e8e74123159e-kube-api-access-pdbgq\") pod \"redhat-marketplace-bfqfk\" (UID: \"77f56ef6-cc93-457b-8f55-e8e74123159e\") " pod="openshift-marketplace/redhat-marketplace-bfqfk" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.712602 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f56ef6-cc93-457b-8f55-e8e74123159e-catalog-content\") pod \"redhat-marketplace-bfqfk\" (UID: \"77f56ef6-cc93-457b-8f55-e8e74123159e\") " pod="openshift-marketplace/redhat-marketplace-bfqfk" Jan 08 23:17:47 crc kubenswrapper[4945]: E0108 23:17:47.713142 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:48.213127857 +0000 UTC m=+138.524286803 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.813495 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f56ef6-cc93-457b-8f55-e8e74123159e-utilities\") pod \"redhat-marketplace-bfqfk\" (UID: \"77f56ef6-cc93-457b-8f55-e8e74123159e\") " pod="openshift-marketplace/redhat-marketplace-bfqfk" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.813850 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdbgq\" (UniqueName: \"kubernetes.io/projected/77f56ef6-cc93-457b-8f55-e8e74123159e-kube-api-access-pdbgq\") pod \"redhat-marketplace-bfqfk\" (UID: \"77f56ef6-cc93-457b-8f55-e8e74123159e\") " pod="openshift-marketplace/redhat-marketplace-bfqfk" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.813959 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.814022 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f56ef6-cc93-457b-8f55-e8e74123159e-catalog-content\") pod \"redhat-marketplace-bfqfk\" (UID: \"77f56ef6-cc93-457b-8f55-e8e74123159e\") " pod="openshift-marketplace/redhat-marketplace-bfqfk" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.814116 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f56ef6-cc93-457b-8f55-e8e74123159e-utilities\") pod \"redhat-marketplace-bfqfk\" (UID: \"77f56ef6-cc93-457b-8f55-e8e74123159e\") " pod="openshift-marketplace/redhat-marketplace-bfqfk" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.814510 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f56ef6-cc93-457b-8f55-e8e74123159e-catalog-content\") pod \"redhat-marketplace-bfqfk\" (UID: \"77f56ef6-cc93-457b-8f55-e8e74123159e\") " pod="openshift-marketplace/redhat-marketplace-bfqfk" Jan 08 23:17:47 crc kubenswrapper[4945]: E0108 
23:17:47.814592 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:48.314510225 +0000 UTC m=+138.625669171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.900176 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdbgq\" (UniqueName: \"kubernetes.io/projected/77f56ef6-cc93-457b-8f55-e8e74123159e-kube-api-access-pdbgq\") pod \"redhat-marketplace-bfqfk\" (UID: \"77f56ef6-cc93-457b-8f55-e8e74123159e\") " pod="openshift-marketplace/redhat-marketplace-bfqfk" Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.915906 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:47 crc kubenswrapper[4945]: E0108 23:17:47.916056 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:48.416029595 +0000 UTC m=+138.727188531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:47 crc kubenswrapper[4945]: I0108 23:17:47.916344 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:47 crc kubenswrapper[4945]: E0108 23:17:47.917369 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:48.417355931 +0000 UTC m=+138.728514877 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.017978 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:48 crc kubenswrapper[4945]: E0108 23:17:48.018428 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:48.51841379 +0000 UTC m=+138.829572736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.066881 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bfqfk" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.121545 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:48 crc kubenswrapper[4945]: E0108 23:17:48.122422 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:48.622404167 +0000 UTC m=+138.933563113 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.148713 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2hvk"] Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.222038 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dvcsg"] Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.223204 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dvcsg" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.223392 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:48 crc kubenswrapper[4945]: E0108 23:17:48.223722 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:48.723707251 +0000 UTC m=+139.034866197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.226442 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.236678 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dvcsg"] Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.332805 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.332921 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjlsb\" (UniqueName: \"kubernetes.io/projected/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-kube-api-access-zjlsb\") pod \"redhat-operators-dvcsg\" (UID: \"08cb759a-5b37-46e6-9b1f-5e84fabc66cd\") " pod="openshift-marketplace/redhat-operators-dvcsg" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.332966 4945 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-utilities\") pod \"redhat-operators-dvcsg\" (UID: \"08cb759a-5b37-46e6-9b1f-5e84fabc66cd\") " pod="openshift-marketplace/redhat-operators-dvcsg" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.333022 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-catalog-content\") pod \"redhat-operators-dvcsg\" (UID: \"08cb759a-5b37-46e6-9b1f-5e84fabc66cd\") " pod="openshift-marketplace/redhat-operators-dvcsg" Jan 08 23:17:48 crc kubenswrapper[4945]: E0108 23:17:48.333339 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:48.833327609 +0000 UTC m=+139.144486555 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.433937 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:48 crc kubenswrapper[4945]: E0108 23:17:48.433986 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:48.933971036 +0000 UTC m=+139.245129982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.434679 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.434897 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjlsb\" (UniqueName: \"kubernetes.io/projected/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-kube-api-access-zjlsb\") pod \"redhat-operators-dvcsg\" (UID: \"08cb759a-5b37-46e6-9b1f-5e84fabc66cd\") " pod="openshift-marketplace/redhat-operators-dvcsg" Jan 08 23:17:48 crc kubenswrapper[4945]: E0108 23:17:48.434967 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:48.934959123 +0000 UTC m=+139.246118069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.434936 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-utilities\") pod \"redhat-operators-dvcsg\" (UID: \"08cb759a-5b37-46e6-9b1f-5e84fabc66cd\") " pod="openshift-marketplace/redhat-operators-dvcsg" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.435402 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-catalog-content\") pod \"redhat-operators-dvcsg\" (UID: \"08cb759a-5b37-46e6-9b1f-5e84fabc66cd\") " pod="openshift-marketplace/redhat-operators-dvcsg" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.435507 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-utilities\") pod \"redhat-operators-dvcsg\" (UID: \"08cb759a-5b37-46e6-9b1f-5e84fabc66cd\") " pod="openshift-marketplace/redhat-operators-dvcsg" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.435873 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-catalog-content\") pod \"redhat-operators-dvcsg\" (UID: 
\"08cb759a-5b37-46e6-9b1f-5e84fabc66cd\") " pod="openshift-marketplace/redhat-operators-dvcsg" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.481030 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjlsb\" (UniqueName: \"kubernetes.io/projected/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-kube-api-access-zjlsb\") pod \"redhat-operators-dvcsg\" (UID: \"08cb759a-5b37-46e6-9b1f-5e84fabc66cd\") " pod="openshift-marketplace/redhat-operators-dvcsg" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.537577 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:48 crc kubenswrapper[4945]: E0108 23:17:48.537938 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:49.037923663 +0000 UTC m=+139.349082609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.565708 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dvcsg" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.613385 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mntf4"] Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.614940 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mntf4" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.638302 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mntf4"] Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.647251 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt7cv\" (UniqueName: \"kubernetes.io/projected/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-kube-api-access-zt7cv\") pod \"redhat-operators-mntf4\" (UID: \"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db\") " pod="openshift-marketplace/redhat-operators-mntf4" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.647289 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.647350 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-catalog-content\") pod \"redhat-operators-mntf4\" (UID: \"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db\") " pod="openshift-marketplace/redhat-operators-mntf4" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.647368 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-utilities\") pod \"redhat-operators-mntf4\" (UID: \"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db\") " pod="openshift-marketplace/redhat-operators-mntf4" Jan 08 23:17:48 crc kubenswrapper[4945]: E0108 23:17:48.647711 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:49.147697045 +0000 UTC m=+139.458856001 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.682194 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-klv9q" event={"ID":"7b92f03d-b5af-485d-b1a6-6e9548b7c8ba","Type":"ContainerStarted","Data":"ad35db603e41d14cd717693f0514768a9a03ed328ba1f2c2a179d1498920395c"} Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.718830 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 08 23:17:48 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld Jan 08 23:17:48 crc kubenswrapper[4945]: [+]process-running ok Jan 08 23:17:48 crc kubenswrapper[4945]: healthz check failed Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.718911 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.721259 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-klv9q" podStartSLOduration=119.721243126 podStartE2EDuration="1m59.721243126s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:48.718919784 +0000 UTC m=+139.030078720" watchObservedRunningTime="2026-01-08 23:17:48.721243126 +0000 UTC m=+139.032402072" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.741604 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5dvr4" event={"ID":"fc9d9307-1c9b-4162-9ffb-2493af6c4b54","Type":"ContainerStarted","Data":"d15366054408dd81bae7430873e84d8f538ad347d1df6ba78022bd571ac903f9"} Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.743825 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" event={"ID":"93e10fcb-3cb5-454a-bcd1-1eae918e0601","Type":"ContainerStarted","Data":"9c96693f33518d8095c526bdc3e215f17acbc21b0de6cab3a6de37d37615faa8"} Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.744719 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.748126 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:48 crc 
kubenswrapper[4945]: I0108 23:17:48.748346 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt7cv\" (UniqueName: \"kubernetes.io/projected/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-kube-api-access-zt7cv\") pod \"redhat-operators-mntf4\" (UID: \"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db\") " pod="openshift-marketplace/redhat-operators-mntf4" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.748456 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-catalog-content\") pod \"redhat-operators-mntf4\" (UID: \"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db\") " pod="openshift-marketplace/redhat-operators-mntf4" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.748520 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-utilities\") pod \"redhat-operators-mntf4\" (UID: \"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db\") " pod="openshift-marketplace/redhat-operators-mntf4" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.749067 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-utilities\") pod \"redhat-operators-mntf4\" (UID: \"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db\") " pod="openshift-marketplace/redhat-operators-mntf4" Jan 08 23:17:48 crc kubenswrapper[4945]: E0108 23:17:48.749170 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:49.249140464 +0000 UTC m=+139.560299410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.750844 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-catalog-content\") pod \"redhat-operators-mntf4\" (UID: \"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db\") " pod="openshift-marketplace/redhat-operators-mntf4" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.772287 4945 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vxpmj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.772335 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" podUID="93e10fcb-3cb5-454a-bcd1-1eae918e0601" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.779361 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt7cv\" (UniqueName: \"kubernetes.io/projected/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-kube-api-access-zt7cv\") pod \"redhat-operators-mntf4\" (UID: \"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db\") " pod="openshift-marketplace/redhat-operators-mntf4" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.790378 4945 generic.go:334] "Generic (PLEG): container finished" podID="069f874e-b727-46fb-85e6-b36d4921412f" containerID="465e0ef1d5df2aa84f96857b54d33082ad8d2cec5f4a260050d77761d4a8f49d" exitCode=0 Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.790477 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dlpgj" event={"ID":"069f874e-b727-46fb-85e6-b36d4921412f","Type":"ContainerDied","Data":"465e0ef1d5df2aa84f96857b54d33082ad8d2cec5f4a260050d77761d4a8f49d"} Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.805563 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5dvr4" podStartSLOduration=7.805549406 podStartE2EDuration="7.805549406s" podCreationTimestamp="2026-01-08 23:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:48.772671974 +0000 UTC m=+139.083830920" watchObservedRunningTime="2026-01-08 23:17:48.805549406 +0000 UTC m=+139.116708352" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.805976 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" podStartSLOduration=119.805971397 podStartE2EDuration="1m59.805971397s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-08 23:17:48.805782622 +0000 UTC m=+139.116941598" watchObservedRunningTime="2026-01-08 23:17:48.805971397 +0000 UTC m=+139.117130343" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.816950 4945 generic.go:334] "Generic (PLEG): container finished" podID="fca058c9-9d1b-41b4-b057-c90327ca3628" containerID="a4f148a00a33231bf45056a5b4fb17ced10e8b00b599443bbe284eb0ce28ec7b" exitCode=0 Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.817134 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w72wg" event={"ID":"fca058c9-9d1b-41b4-b057-c90327ca3628","Type":"ContainerDied","Data":"a4f148a00a33231bf45056a5b4fb17ced10e8b00b599443bbe284eb0ce28ec7b"} Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.842675 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkxxb" event={"ID":"e6710298-7872-4178-b6b8-730c3eddb965","Type":"ContainerStarted","Data":"70f5839663eafcd61e09715d62a846727ceed54be244c36f8a8b934bd26102b5"} Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.851707 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:48 crc kubenswrapper[4945]: E0108 23:17:48.852939 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:49.352923705 +0000 UTC m=+139.664082651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.853334 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg" event={"ID":"6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d","Type":"ContainerStarted","Data":"87ef3272f6a46b7303108831e38cc0a514cb7c5120f1b3ef1e48dcb82f5fa6e0"} Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.853371 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg" event={"ID":"6c871fb2-91ea-450b-9f6d-f3e9d2e6a23d","Type":"ContainerStarted","Data":"b7cab9863fff425d8465769a05fe9b3ff84ec107481070de747e17c1b881831e"} Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.890847 4945 generic.go:334] "Generic (PLEG): container finished" podID="4befd52c-d042-4259-8998-533f8a61dddd" containerID="ffc4710663e52e3d78252ae1715c2ffa1d20bd44d51d0d4485fdac7412041a05" exitCode=0 Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.891039 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ptz" event={"ID":"4befd52c-d042-4259-8998-533f8a61dddd","Type":"ContainerDied","Data":"ffc4710663e52e3d78252ae1715c2ffa1d20bd44d51d0d4485fdac7412041a05"} Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.891385 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfqfk"] Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.935022 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkxxb" podStartSLOduration=119.934983605 podStartE2EDuration="1m59.934983605s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:48.931474261 +0000 UTC m=+139.242633207" watchObservedRunningTime="2026-01-08 23:17:48.934983605 +0000 UTC m=+139.246142551" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.953618 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:48 crc kubenswrapper[4945]: E0108 23:17:48.954803 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:49.454786755 +0000 UTC m=+139.765945701 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.965376 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cw648" event={"ID":"dfefef62-2fe7-47c6-9a57-dadd3fd6705d","Type":"ContainerStarted","Data":"ffc181582902181cbef0c7c8385b8e895a255a2d5b67970f74859de395923f92"} Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.967040 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fjdtg" podStartSLOduration=119.967025344 podStartE2EDuration="1m59.967025344s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:48.966185771 +0000 UTC m=+139.277344707" watchObservedRunningTime="2026-01-08 23:17:48.967025344 +0000 UTC m=+139.278184290" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.969913 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qn985" event={"ID":"0677ae19-e425-485b-9206-98c9ad11aea8","Type":"ContainerStarted","Data":"353c71109b4b926c98cae20037f8020b041efd71fa144e2f04a124c045b6dbdc"} Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.971787 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:17:48 crc kubenswrapper[4945]: I0108 23:17:48.999262 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7hvvw" event={"ID":"f910f213-8fbe-44fe-888a-aea783dcd0ec","Type":"ContainerStarted","Data":"f97a5bc2e4a108e93dd0b0ced6f70aed4e2c0de62675c1719ed8b3c27b943105"} Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.010659 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" event={"ID":"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5","Type":"ContainerStarted","Data":"328544e178ca1fd4a0b955c2630ae3bf73aa92b83e3d9d48bb6d5c79ee212259"} Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.048306 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mntf4" Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.050295 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cw648" podStartSLOduration=120.050277265 podStartE2EDuration="2m0.050277265s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:49.025255474 +0000 UTC m=+139.336414420" watchObservedRunningTime="2026-01-08 23:17:49.050277265 +0000 UTC m=+139.361436211" Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.054913 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:49 crc kubenswrapper[4945]: E0108 23:17:49.056945 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:49.556931423 +0000 UTC m=+139.868090459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.073583 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r" event={"ID":"8a8d494a-4ed4-4707-9354-98912697989d","Type":"ContainerStarted","Data":"600e5f3acffa1d42338d5530403ac00e3589b0e52321d178ae07a6dbd7d63bad"} Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.073650 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r" event={"ID":"8a8d494a-4ed4-4707-9354-98912697989d","Type":"ContainerStarted","Data":"7e7584c307333a630bf711480c56dcd9ea020a6ccbbcf5f61ebe73dc9891b68f"} Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.079501 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-7hvvw" podStartSLOduration=120.079484128 podStartE2EDuration="2m0.079484128s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:49.055389522 +0000 UTC m=+139.366548468" watchObservedRunningTime="2026-01-08 23:17:49.079484128 +0000 UTC m=+139.390643074" Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.120137 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-qn985" podStartSLOduration=120.120122797 podStartE2EDuration="2m0.120122797s" 
podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:49.086969928 +0000 UTC m=+139.398128884" watchObservedRunningTime="2026-01-08 23:17:49.120122797 +0000 UTC m=+139.431281733" Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.121689 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bwk8r" podStartSLOduration=120.121682369 podStartE2EDuration="2m0.121682369s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:49.119775437 +0000 UTC m=+139.430934383" watchObservedRunningTime="2026-01-08 23:17:49.121682369 +0000 UTC m=+139.432841315" Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.126460 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jkvh2" event={"ID":"b0c68805-16fc-47d5-8ca8-7ebd2e59fcb9","Type":"ContainerStarted","Data":"e198d29481ad463cd6767f490543acd60369c0c96c1ed6a7209cecb269c55fe3"} Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.151031 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2hvk" event={"ID":"b3238595-843e-4c3a-9e67-538483ac4b20","Type":"ContainerStarted","Data":"34afe8e120c7ea975546bfb4aeb6033f671a28fcbe47f45ee3e1e4b7a08cfb4b"} Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.151071 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2hvk" event={"ID":"b3238595-843e-4c3a-9e67-538483ac4b20","Type":"ContainerStarted","Data":"ce7b1a8050b485912488f6901dd66dbcd5d669d96aa89a7e7c5fe9e0ac733dd5"} Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.156476 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:49 crc kubenswrapper[4945]: E0108 23:17:49.157305 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:49.657291283 +0000 UTC m=+139.968450229 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.159153 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79m2f" event={"ID":"662bb234-dff9-4a44-9432-c2f864195ce0","Type":"ContainerStarted","Data":"b9c5c713d6469abbf3ea4bb133368c59962fa864f2c175cec914539e06a78942"} Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.217129 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hc8bs" event={"ID":"6391ef36-ef23-42f8-ae03-7610ede1f819","Type":"ContainerStarted","Data":"c7c9eaa3c58c0e11be7c1afa6f01cd4fa3f63c11e86ba289cde420cdb29ba479"} Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.237165 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hc8bs" podStartSLOduration=120.237148123 podStartE2EDuration="2m0.237148123s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:49.235831738 +0000 UTC m=+139.546990684" watchObservedRunningTime="2026-01-08 23:17:49.237148123 +0000 UTC m=+139.548307059" Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.254123 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-79krd" event={"ID":"de1b2b7c-06eb-45d1-9a43-f1fbc5ae7e7d","Type":"ContainerStarted","Data":"0f08d22a6e5579c0ccbbddf474c9bde229886a4cfb29e483652fc6006d469df7"} Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.254827 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-79krd" Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.257755 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:49 crc kubenswrapper[4945]: E0108 23:17:49.258835 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:49.758819014 +0000 UTC m=+140.069977960 (durationBeforeRetry 500ms). 
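Note: the recurring "No retries permitted until ... (durationBeforeRetry 500ms)" entries above are the kubelet's per-volume retry gate: a failed mount or unmount arms a not-before deadline and the reconciler re-queues the operation each pass until the deadline clears. A minimal Go sketch of that gating, under assumptions taken from the log alone; retryGate and its methods are illustrative names, not kubelet types:

package main

import (
	"fmt"
	"time"
)

// retryGate refuses retries of an operation until a deadline has passed.
// The first failure arms a 500ms window (the durationBeforeRetry seen in
// the log); consecutive failures of the same operation may grow the delay.
type retryGate struct {
	delay    time.Duration
	maxDelay time.Duration
	notUntil time.Time
}

func (g *retryGate) recordFailure(now time.Time) {
	if g.delay == 0 {
		g.delay = 500 * time.Millisecond // initial durationBeforeRetry
	} else if g.delay*2 <= g.maxDelay {
		g.delay *= 2 // exponential growth, capped
	}
	g.notUntil = now.Add(g.delay)
}

func (g *retryGate) check(now time.Time) error {
	if now.Before(g.notUntil) {
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
			g.notUntil.Format(time.RFC3339Nano), g.delay)
	}
	return nil
}

func main() {
	g := &retryGate{maxDelay: 2 * time.Minute}
	now := time.Now()
	g.recordFailure(now)
	fmt.Println(g.check(now.Add(100 * time.Millisecond))) // still inside the window: blocked
	fmt.Println(g.check(now.Add(600 * time.Millisecond))) // nil: retry allowed
}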
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.271128 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt" event={"ID":"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b","Type":"ContainerStarted","Data":"f2d5d787ee2fbce1c779457fb156c668199fad436028b53078a552096f42dadb"}
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.305851 4945 generic.go:334] "Generic (PLEG): container finished" podID="dce1a9c0-149a-4062-a166-84829a9dc2ec" containerID="57ea6aa0feed40cd1806b79962c3d3a03cdd127ee1cf45603accf98408566356" exitCode=0
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.305940 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgr9g" event={"ID":"dce1a9c0-149a-4062-a166-84829a9dc2ec","Type":"ContainerDied","Data":"57ea6aa0feed40cd1806b79962c3d3a03cdd127ee1cf45603accf98408566356"}
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.317211 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt" podStartSLOduration=120.317191279 podStartE2EDuration="2m0.317191279s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:49.316887571 +0000 UTC m=+139.628046517" watchObservedRunningTime="2026-01-08 23:17:49.317191279 +0000 UTC m=+139.628350225"
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.319318 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-79krd" podStartSLOduration=8.319312706 podStartE2EDuration="8.319312706s" podCreationTimestamp="2026-01-08 23:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:49.29448589 +0000 UTC m=+139.605644836" watchObservedRunningTime="2026-01-08 23:17:49.319312706 +0000 UTC m=+139.630471642"
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.323477 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zw5q5" event={"ID":"1489254c-ff30-49b3-b896-b823f1ae8559","Type":"ContainerStarted","Data":"77436ea65cfb39e68ec1fca15f8a38a49defccfc2bc5193bbafdd0087629d21e"}
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.324195 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-zw5q5"
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.345718 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" event={"ID":"b54d9295-2900-4e23-b8f0-a815fc8e9b7d","Type":"ContainerStarted","Data":"69e42d6da2605bf4fbfd56a1409608eec73e7cfbb77605aa40b8eb4051836e67"}
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.360530 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:49 crc kubenswrapper[4945]: E0108 23:17:49.361513 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:49.861498936 +0000 UTC m=+140.172657882 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.364383 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-zw5q5"
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.372400 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9zgl9" event={"ID":"099b4860-bccd-462a-8a0e-f28604353408","Type":"ContainerStarted","Data":"2de483510b88ffb45826d5938a221d2708bd065e98a2d8c60c0325d6670437e9"}
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.385246 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-zw5q5" podStartSLOduration=120.385229482 podStartE2EDuration="2m0.385229482s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:49.353677347 +0000 UTC m=+139.664836293" watchObservedRunningTime="2026-01-08 23:17:49.385229482 +0000 UTC m=+139.696388428"
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.410670 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-pzbw5" podStartSLOduration=120.410654024 podStartE2EDuration="2m0.410654024s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:49.38665545 +0000 UTC m=+139.697814396" watchObservedRunningTime="2026-01-08 23:17:49.410654024 +0000 UTC m=+139.721812970"
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.414321 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dvcsg"]
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.430508 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw" event={"ID":"def1ce3f-02ba-4056-80f8-b0ba00fa64b2","Type":"ContainerStarted","Data":"82f4d7579ae85a58d443a6d2356901af7af915e954f507dc90b306a192cded97"}
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.437615 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9zgl9" podStartSLOduration=120.437600036 podStartE2EDuration="2m0.437600036s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:49.435717795 +0000 UTC m=+139.746876741" watchObservedRunningTime="2026-01-08 23:17:49.437600036 +0000 UTC m=+139.748758982"
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.447748 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8ldnl" event={"ID":"9c69b35b-9333-43fd-8af9-1a046fa97995","Type":"ContainerStarted","Data":"d7bf96ef4cb68a6bee1a24213fdc2c76444c75a011cacf2f940d846562a417b2"}
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.448893 4945 patch_prober.go:28] interesting pod/downloads-7954f5f757-n2sv5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.453164 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-n2sv5" podUID="297cbd4b-37f7-4ab3-82ea-da1872a05ef1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.465943 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:49 crc kubenswrapper[4945]: E0108 23:17:49.470867 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:49.970847077 +0000 UTC m=+140.282006103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.530960 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rlddw" podStartSLOduration=120.530945038 podStartE2EDuration="2m0.530945038s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:49.528498942 +0000 UTC m=+139.839657888" watchObservedRunningTime="2026-01-08 23:17:49.530945038 +0000 UTC m=+139.842103984"
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.560167 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-qn985"
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.569601 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:49 crc kubenswrapper[4945]: E0108 23:17:49.573184 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:50.073145869 +0000 UTC m=+140.384304815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.576066 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:49 crc kubenswrapper[4945]: E0108 23:17:49.576550 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:50.07653554 +0000 UTC m=+140.387694486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.591680 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-8ldnl" podStartSLOduration=120.591654515 podStartE2EDuration="2m0.591654515s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:49.576037516 +0000 UTC m=+139.887196462" watchObservedRunningTime="2026-01-08 23:17:49.591654515 +0000 UTC m=+139.902813461"
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.678692 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:49 crc kubenswrapper[4945]: E0108 23:17:49.678980 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:50.178966095 +0000 UTC m=+140.490125041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.719244 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 08 23:17:49 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld
Jan 08 23:17:49 crc kubenswrapper[4945]: [+]process-running ok
Jan 08 23:17:49 crc kubenswrapper[4945]: healthz check failed
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.719315 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.783470 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:49 crc kubenswrapper[4945]: E0108 23:17:49.784193 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:50.284181325 +0000 UTC m=+140.595340271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
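Note: the router's startup-probe output above is the aggregated /healthz format: one "[+]name ok" or "[-]name failed" line per named check, and HTTP 500 plus "healthz check failed" while any check fails. A sketch of such a handler, assuming only what the log shows (check names taken from the output; the handler itself is illustrative):

package main

import (
	"fmt"
	"net/http"
)

// namedCheck pairs a check name with the function that evaluates it.
type namedCheck struct {
	name string
	run  func() error
}

// healthzHandler renders all checks and returns 500 if any fail.
func healthzHandler(checks []namedCheck) http.HandlerFunc {
	return func(w http.ResponseWriter, _ *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				body += "[-]" + c.name + " failed: reason withheld\n"
			} else {
				body += "[+]" + c.name + " ok\n"
			}
		}
		if failed {
			http.Error(w, body+"healthz check failed", http.StatusInternalServerError)
			return
		}
		fmt.Fprint(w, body+"ok")
	}
}

func main() {
	checks := []namedCheck{
		{"backend-http", func() error { return fmt.Errorf("backend not ready") }},
		{"has-synced", func() error { return fmt.Errorf("initial sync pending") }},
		{"process-running", func() error { return nil }},
	}
	http.Handle("/healthz", healthzHandler(checks))
	_ = http.ListenAndServe(":8080", nil) // a startup probe then sees "statuscode: 500"
}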
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.803709 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mntf4"]
Jan 08 23:17:49 crc kubenswrapper[4945]: W0108 23:17:49.875246 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ab77c0a_0b0d_4afc_8949_acf0d5c4b1db.slice/crio-147d1d3dc8267cd93d5e7a4348a77e9489e52472cdb38bdce255d7621d3cb1a5 WatchSource:0}: Error finding container 147d1d3dc8267cd93d5e7a4348a77e9489e52472cdb38bdce255d7621d3cb1a5: Status 404 returned error can't find the container with id 147d1d3dc8267cd93d5e7a4348a77e9489e52472cdb38bdce255d7621d3cb1a5
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.884744 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:49 crc kubenswrapper[4945]: E0108 23:17:49.885349 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:50.385331766 +0000 UTC m=+140.696490722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:49 crc kubenswrapper[4945]: I0108 23:17:49.987451 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:49 crc kubenswrapper[4945]: E0108 23:17:49.987766 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:50.487755001 +0000 UTC m=+140.798913947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.090520 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:50 crc kubenswrapper[4945]: E0108 23:17:50.091063 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:50.59104799 +0000 UTC m=+140.902206936 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.192559 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:50 crc kubenswrapper[4945]: E0108 23:17:50.192896 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:50.692885379 +0000 UTC m=+141.004044325 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.293346 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:50 crc kubenswrapper[4945]: E0108 23:17:50.293481 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:50.793455635 +0000 UTC m=+141.104614581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.293855 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:50 crc kubenswrapper[4945]: E0108 23:17:50.294228 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:50.794218605 +0000 UTC m=+141.105377551 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.325336 4945 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.395319 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:50 crc kubenswrapper[4945]: E0108 23:17:50.395667 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:50.895649994 +0000 UTC m=+141.206808940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.458429 4945 generic.go:334] "Generic (PLEG): container finished" podID="08cb759a-5b37-46e6-9b1f-5e84fabc66cd" containerID="966326057c933cdfb79140ea43d38e431f3838b1fb34e95945376b00d53ce9bb" exitCode=0
Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.458528 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvcsg" event={"ID":"08cb759a-5b37-46e6-9b1f-5e84fabc66cd","Type":"ContainerDied","Data":"966326057c933cdfb79140ea43d38e431f3838b1fb34e95945376b00d53ce9bb"}
Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.458581 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvcsg" event={"ID":"08cb759a-5b37-46e6-9b1f-5e84fabc66cd","Type":"ContainerStarted","Data":"48f4b3308aee41f2cc112c99c80e408490f94a94b6704d2b3e5f7783c8516bb1"}
Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.465224 4945 generic.go:334] "Generic (PLEG): container finished" podID="b3238595-843e-4c3a-9e67-538483ac4b20" containerID="34afe8e120c7ea975546bfb4aeb6033f671a28fcbe47f45ee3e1e4b7a08cfb4b" exitCode=0
Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.465306 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2hvk" event={"ID":"b3238595-843e-4c3a-9e67-538483ac4b20","Type":"ContainerDied","Data":"34afe8e120c7ea975546bfb4aeb6033f671a28fcbe47f45ee3e1e4b7a08cfb4b"}
Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.468872 4945 generic.go:334] "Generic (PLEG): container finished" podID="77f56ef6-cc93-457b-8f55-e8e74123159e" containerID="07a5ea956a5535e3427a4611470efcc1933142aa157943b91241f5d8ccc996d8" exitCode=0
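Note: the plugin_watcher entry above is the step that eventually unblocks the mount/unmount loop: a registration socket appears under /var/lib/kubelet/plugins_registry and is recorded in a desired-state cache, which a reconciler later turns into a RegisterPlugin call. A dependency-free Go approximation of the discovery step; the real watcher is inotify-driven, and all names here are illustrative:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// scanRegistry records every *.sock under dir, with a timestamp, in a
// desired-state cache, logging newly discovered paths.
func scanRegistry(dir string, cache map[string]time.Time) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return // directory may not exist outside a kubelet host
	}
	for _, e := range entries {
		if !strings.HasSuffix(e.Name(), ".sock") {
			continue
		}
		p := filepath.Join(dir, e.Name())
		if _, seen := cache[p]; !seen {
			fmt.Printf("Adding socket path or updating timestamp to desired state cache path=%q\n", p)
		}
		cache[p] = time.Now()
	}
}

func main() {
	cache := map[string]time.Time{}
	scanRegistry("/var/lib/kubelet/plugins_registry", cache)
}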
podID="77f56ef6-cc93-457b-8f55-e8e74123159e" containerID="07a5ea956a5535e3427a4611470efcc1933142aa157943b91241f5d8ccc996d8" exitCode=0 Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.469005 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfqfk" event={"ID":"77f56ef6-cc93-457b-8f55-e8e74123159e","Type":"ContainerDied","Data":"07a5ea956a5535e3427a4611470efcc1933142aa157943b91241f5d8ccc996d8"} Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.469030 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfqfk" event={"ID":"77f56ef6-cc93-457b-8f55-e8e74123159e","Type":"ContainerStarted","Data":"187f25ddaf067cfbd16336076dd6ab95c2bf1463f89b5c4692c3c4eb8c3b4cc3"} Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.483723 4945 generic.go:334] "Generic (PLEG): container finished" podID="27ab75bd-1a05-42c5-b6ee-0917bdc88c6b" containerID="f2d5d787ee2fbce1c779457fb156c668199fad436028b53078a552096f42dadb" exitCode=0 Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.483799 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt" event={"ID":"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b","Type":"ContainerDied","Data":"f2d5d787ee2fbce1c779457fb156c668199fad436028b53078a552096f42dadb"} Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.489531 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mntf4" event={"ID":"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db","Type":"ContainerStarted","Data":"147d1d3dc8267cd93d5e7a4348a77e9489e52472cdb38bdce255d7621d3cb1a5"} Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.496055 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" event={"ID":"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5","Type":"ContainerStarted","Data":"bad2e468a74cdb0ec46f90ee8d6f64e2f6195ca7020cf62f62f4db9248c309a6"} Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.496249 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:50 crc kubenswrapper[4945]: E0108 23:17:50.497573 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:50.997561165 +0000 UTC m=+141.308720111 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.523120 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8ldnl" event={"ID":"9c69b35b-9333-43fd-8af9-1a046fa97995","Type":"ContainerStarted","Data":"910ab886d01870c499bc22ba540f8a8f81282a2d5563f1863684ad103f0745fe"} Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.539252 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.598219 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:50 crc kubenswrapper[4945]: E0108 23:17:50.598629 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:51.098589573 +0000 UTC m=+141.409748519 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.700527 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:50 crc kubenswrapper[4945]: E0108 23:17:50.700939 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:51.200918206 +0000 UTC m=+141.512077162 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.718074 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 08 23:17:50 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld Jan 08 23:17:50 crc kubenswrapper[4945]: [+]process-running ok Jan 08 23:17:50 crc kubenswrapper[4945]: healthz check failed Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.718132 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.801629 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 08 23:17:50 crc kubenswrapper[4945]: E0108 23:17:50.801812 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:51.301780909 +0000 UTC m=+141.612939865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.801911 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:50 crc kubenswrapper[4945]: E0108 23:17:50.802432 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:51.302421556 +0000 UTC m=+141.613580502 (durationBeforeRetry 500ms). 
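Note: the "Generic (PLEG): container finished" / "SyncLoop (PLEG): event for pod" pairs above come from the pod lifecycle event generator: a relist notices container state changes and emits started/died events that the sync loop dispatches per pod. A sketch of that event shape in Go, using field values from this log; the types are illustrative, not kubelet's:

package main

import "fmt"

// podLifecycleEvent carries the fields visible in the PLEG log entries.
type podLifecycleEvent struct {
	ID   string // pod UID
	Type string // "ContainerStarted" or "ContainerDied"
	Data string // container ID
}

func handleEvent(ev podLifecycleEvent, exitCode int) {
	switch ev.Type {
	case "ContainerDied":
		// exitCode=0, as with the marketplace catalog pods above, means a
		// utility container completed normally; it is not a pod failure.
		fmt.Printf("pod %s: container %s finished, exitCode=%d\n", ev.ID, ev.Data, exitCode)
	case "ContainerStarted":
		fmt.Printf("pod %s: container %s started\n", ev.ID, ev.Data)
	}
}

func main() {
	handleEvent(podLifecycleEvent{
		ID:   "27ab75bd-1a05-42c5-b6ee-0917bdc88c6b",
		Type: "ContainerDied",
		Data: "f2d5d787ee2fbce1c779457fb156c668199fad436028b53078a552096f42dadb",
	}, 0)
}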
Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.902944 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:50 crc kubenswrapper[4945]: E0108 23:17:50.903245 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:51.403191447 +0000 UTC m=+141.714350393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:50 crc kubenswrapper[4945]: I0108 23:17:50.903441 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:50 crc kubenswrapper[4945]: E0108 23:17:50.904138 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:51.404105262 +0000 UTC m=+141.715264208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.004907 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:51 crc kubenswrapper[4945]: E0108 23:17:51.005352 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-08 23:17:51.505332785 +0000 UTC m=+141.816491731 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.107234 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:51 crc kubenswrapper[4945]: E0108 23:17:51.107595 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-08 23:17:51.607583885 +0000 UTC m=+141.918742831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j8vl9" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.160777 4945 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-08T23:17:50.326144651Z","Handler":null,"Name":""}
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.165904 4945 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.165983 4945 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.209152 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.219583 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.313373 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.316183 4945 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
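Note: the RegisterPlugin/csi_plugin entries above are the turning point of this section. Every earlier MountDevice and TearDownAt attempt failed at the same step: resolving the driver name to a registered endpoint. Once kubevirt.io.hostpath-provisioner is validated and registered, the very same operations succeed on the next pass. A Go sketch of that lookup, with illustrative type and method names (only the error string and the driver/endpoint values come from the log):

package main

import (
	"fmt"
	"sync"
)

// driverRegistry maps driver names to their csi.sock endpoints; client
// creation fails until the driver's registration has been processed.
type driverRegistry struct {
	mu        sync.RWMutex
	endpoints map[string]string
}

func (r *driverRegistry) register(name, endpoint string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.endpoints[name] = endpoint
}

func (r *driverRegistry) newClient(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.endpoints[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	reg := &driverRegistry{endpoints: map[string]string{}}
	if _, err := reg.newClient("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println(err) // the state during the retry loop above
	}
	reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
	ep, _ := reg.newClient("kubevirt.io.hostpath-provisioner")
	fmt.Println("resolved endpoint:", ep) // mount and teardown now proceed
}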
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.316223 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.344274 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j8vl9\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.387420 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9"
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.545642 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" event={"ID":"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5","Type":"ContainerStarted","Data":"5fa3f884330d3aa2f7cd01820f536b345954a1a2ddf4d1879b17db375193206e"}
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.545685 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" event={"ID":"ebcf2fb9-b031-4ddf-bb72-ab3aacb49ca5","Type":"ContainerStarted","Data":"36d8a1c815054769d54143ef2914c1955652bb2b4350d18d4ebf72cbdd6e2fa0"}
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.550690 4945 generic.go:334] "Generic (PLEG): container finished" podID="5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db" containerID="021cdba502c06acc3957dd4acc96675d7d0f379331e3938d56b775f28db9ff3a" exitCode=0
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.550867 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mntf4" event={"ID":"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db","Type":"ContainerDied","Data":"021cdba502c06acc3957dd4acc96675d7d0f379331e3938d56b775f28db9ff3a"}
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.566758 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-qg7b7" podStartSLOduration=10.566745072 podStartE2EDuration="10.566745072s" podCreationTimestamp="2026-01-08 23:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:51.566482134 +0000 UTC m=+141.877641110" watchObservedRunningTime="2026-01-08 23:17:51.566745072 +0000 UTC m=+141.877904018"
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.712846 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 08 23:17:51 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld
Jan 08 23:17:51 crc kubenswrapper[4945]: [+]process-running ok
Jan 08 23:17:51 crc kubenswrapper[4945]: healthz check failed
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.713113 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.800125 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j8vl9"]
Jan 08 23:17:51 crc kubenswrapper[4945]: I0108 23:17:51.928413 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt"
Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.012472 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.037785 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-config-volume\") pod \"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b\" (UID: \"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b\") "
Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.037836 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-secret-volume\") pod \"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b\" (UID: \"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b\") "
Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.038393 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt68q\" (UniqueName: \"kubernetes.io/projected/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-kube-api-access-qt68q\") pod \"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b\" (UID: \"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b\") "
Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.038883 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-config-volume" (OuterVolumeSpecName: "config-volume") pod "27ab75bd-1a05-42c5-b6ee-0917bdc88c6b" (UID: "27ab75bd-1a05-42c5-b6ee-0917bdc88c6b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.044104 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl"
Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.045044 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "27ab75bd-1a05-42c5-b6ee-0917bdc88c6b" (UID: "27ab75bd-1a05-42c5-b6ee-0917bdc88c6b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.045419 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-kube-api-access-qt68q" (OuterVolumeSpecName: "kube-api-access-qt68q") pod "27ab75bd-1a05-42c5-b6ee-0917bdc88c6b" (UID: "27ab75bd-1a05-42c5-b6ee-0917bdc88c6b"). InnerVolumeSpecName "kube-api-access-qt68q". PluginName "kubernetes.io/projected", VolumeGidValue ""
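Note: the "Observed pod startup duration" entries above (e.g. 10.566s for csi-hostpathplugin-qg7b7) are simple timestamp arithmetic. A sketch of the computation under the assumption, consistent with every entry in this log, that both durations are observedRunningTime minus podCreationTimestamp and that the SLO figure additionally excludes an observed image-pull window (zero here, so the two match); this is not the tracker's actual code:

package main

import (
	"fmt"
	"time"
)

// startupDurations derives the two durations reported per pod.
func startupDurations(created, running, pullStart, pullEnd time.Time) (slo, e2e time.Duration) {
	e2e = running.Sub(created)
	slo = e2e
	if !pullStart.IsZero() && !pullEnd.IsZero() {
		slo -= pullEnd.Sub(pullStart) // assumed: image pulls do not count against the SLO
	}
	return slo, e2e
}

func main() {
	created := time.Date(2026, 1, 8, 23, 17, 41, 0, time.UTC)
	running := time.Date(2026, 1, 8, 23, 17, 51, 566482134, time.UTC)
	slo, e2e := startupDurations(created, running, time.Time{}, time.Time{})
	fmt.Println(slo, e2e) // ≈10.566s each, matching csi-hostpathplugin-qg7b7 above
}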
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.140855 4945 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.140897 4945 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.140927 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt68q\" (UniqueName: \"kubernetes.io/projected/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b-kube-api-access-qt68q\") on node \"crc\" DevicePath \"\"" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.178894 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 08 23:17:52 crc kubenswrapper[4945]: E0108 23:17:52.179213 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ab75bd-1a05-42c5-b6ee-0917bdc88c6b" containerName="collect-profiles" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.179232 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ab75bd-1a05-42c5-b6ee-0917bdc88c6b" containerName="collect-profiles" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.179352 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="27ab75bd-1a05-42c5-b6ee-0917bdc88c6b" containerName="collect-profiles" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.179759 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.181669 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.182759 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.183362 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.242394 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbf1b64b-d572-4cb6-a6ed-12a63473ddb3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dbf1b64b-d572-4cb6-a6ed-12a63473ddb3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.242462 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbf1b64b-d572-4cb6-a6ed-12a63473ddb3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dbf1b64b-d572-4cb6-a6ed-12a63473ddb3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.350764 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbf1b64b-d572-4cb6-a6ed-12a63473ddb3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"dbf1b64b-d572-4cb6-a6ed-12a63473ddb3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.350897 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbf1b64b-d572-4cb6-a6ed-12a63473ddb3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dbf1b64b-d572-4cb6-a6ed-12a63473ddb3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.351357 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbf1b64b-d572-4cb6-a6ed-12a63473ddb3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dbf1b64b-d572-4cb6-a6ed-12a63473ddb3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.374922 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbf1b64b-d572-4cb6-a6ed-12a63473ddb3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dbf1b64b-d572-4cb6-a6ed-12a63473ddb3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.515112 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.589281 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.589294 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt" event={"ID":"27ab75bd-1a05-42c5-b6ee-0917bdc88c6b","Type":"ContainerDied","Data":"497d4432da00195f3e5df993e63ee06deacd00fba591bf491fe86e751d4075de"} Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.589384 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="497d4432da00195f3e5df993e63ee06deacd00fba591bf491fe86e751d4075de" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.598180 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" event={"ID":"884cb1ac-efad-4ead-b31f-7301081aa310","Type":"ContainerStarted","Data":"a305b12b487c2666baf090e6f682730e55375f6ac7b81f5aacdf74abc5b2dd25"} Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.598215 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" event={"ID":"884cb1ac-efad-4ead-b31f-7301081aa310","Type":"ContainerStarted","Data":"87858d27b246132b256bf6c6fbcb0dee2cc0bd79a4ec680e2b5526a8f4a1f6fc"} Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.598563 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.626961 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" podStartSLOduration=123.626942256 podStartE2EDuration="2m3.626942256s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-08 23:17:52.624755687 +0000 UTC m=+142.935914633" watchObservedRunningTime="2026-01-08 23:17:52.626942256 +0000 UTC m=+142.938101202" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.637746 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.660668 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-ppfqk" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.707670 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-62c5v" Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.724124 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 08 23:17:52 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld Jan 08 23:17:52 crc kubenswrapper[4945]: [+]process-running ok Jan 08 23:17:52 crc kubenswrapper[4945]: healthz check failed Jan 08 23:17:52 crc kubenswrapper[4945]: I0108 23:17:52.724188 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 08 23:17:53 crc kubenswrapper[4945]: I0108 23:17:53.389702 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 08 23:17:53 crc kubenswrapper[4945]: W0108 23:17:53.471828 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poddbf1b64b_d572_4cb6_a6ed_12a63473ddb3.slice/crio-c865ea1313601f140c6b87590741f195b75c56c2ccf2b5d15d9081085fccc37c WatchSource:0}: Error finding container c865ea1313601f140c6b87590741f195b75c56c2ccf2b5d15d9081085fccc37c: Status 404 returned error can't find the container with id c865ea1313601f140c6b87590741f195b75c56c2ccf2b5d15d9081085fccc37c Jan 08 23:17:53 crc kubenswrapper[4945]: I0108 23:17:53.605191 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dbf1b64b-d572-4cb6-a6ed-12a63473ddb3","Type":"ContainerStarted","Data":"c865ea1313601f140c6b87590741f195b75c56c2ccf2b5d15d9081085fccc37c"} Jan 08 23:17:53 crc kubenswrapper[4945]: I0108 23:17:53.707696 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 08 23:17:53 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld Jan 08 23:17:53 crc kubenswrapper[4945]: [+]process-running ok Jan 08 23:17:53 crc kubenswrapper[4945]: healthz check failed Jan 08 23:17:53 crc kubenswrapper[4945]: I0108 23:17:53.707777 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 08 23:17:53 crc kubenswrapper[4945]: I0108 23:17:53.872710 4945 patch_prober.go:28] interesting pod/downloads-7954f5f757-n2sv5 
Jan 08 23:17:53 crc kubenswrapper[4945]: I0108 23:17:53.872749 4945 patch_prober.go:28] interesting pod/downloads-7954f5f757-n2sv5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 08 23:17:53 crc kubenswrapper[4945]: I0108 23:17:53.872792 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-n2sv5" podUID="297cbd4b-37f7-4ab3-82ea-da1872a05ef1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 08 23:17:53 crc kubenswrapper[4945]: I0108 23:17:53.872836 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-n2sv5" podUID="297cbd4b-37f7-4ab3-82ea-da1872a05ef1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 08 23:17:53 crc kubenswrapper[4945]: I0108 23:17:53.958346 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-pmfct"
Jan 08 23:17:53 crc kubenswrapper[4945]: I0108 23:17:53.959111 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-pmfct"
Jan 08 23:17:53 crc kubenswrapper[4945]: I0108 23:17:53.960689 4945 patch_prober.go:28] interesting pod/console-f9d7485db-pmfct container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Jan 08 23:17:53 crc kubenswrapper[4945]: I0108 23:17:53.960746 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-pmfct" podUID="490f6a3a-21e2-4264-8a92-75202ba3db64" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused"
Jan 08 23:17:54 crc kubenswrapper[4945]: I0108 23:17:54.668649 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dbf1b64b-d572-4cb6-a6ed-12a63473ddb3","Type":"ContainerStarted","Data":"5bda7eee17fc4718784153d7ef1015569c4f3a627526f205b1cb9e8c0bf38a2a"}
Jan 08 23:17:54 crc kubenswrapper[4945]: I0108 23:17:54.682344 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.682322874 podStartE2EDuration="2.682322874s" podCreationTimestamp="2026-01-08 23:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:17:54.68215182 +0000 UTC m=+144.993310776" watchObservedRunningTime="2026-01-08 23:17:54.682322874 +0000 UTC m=+144.993481820"
Jan 08 23:17:54 crc kubenswrapper[4945]: I0108 23:17:54.707843 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 08 23:17:54 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld
Jan 08 23:17:54 crc kubenswrapper[4945]: [+]process-running ok
Jan 08 23:17:54 crc kubenswrapper[4945]: healthz check failed
Jan 08 23:17:54 crc kubenswrapper[4945]: I0108 23:17:54.707909 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 08 23:17:55 crc kubenswrapper[4945]: I0108 23:17:55.675334 4945 generic.go:334] "Generic (PLEG): container finished" podID="dbf1b64b-d572-4cb6-a6ed-12a63473ddb3" containerID="5bda7eee17fc4718784153d7ef1015569c4f3a627526f205b1cb9e8c0bf38a2a" exitCode=0
Jan 08 23:17:55 crc kubenswrapper[4945]: I0108 23:17:55.675380 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dbf1b64b-d572-4cb6-a6ed-12a63473ddb3","Type":"ContainerDied","Data":"5bda7eee17fc4718784153d7ef1015569c4f3a627526f205b1cb9e8c0bf38a2a"}
Jan 08 23:17:55 crc kubenswrapper[4945]: I0108 23:17:55.706618 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 08 23:17:55 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld
Jan 08 23:17:55 crc kubenswrapper[4945]: [+]process-running ok
Jan 08 23:17:55 crc kubenswrapper[4945]: healthz check failed
Jan 08 23:17:55 crc kubenswrapper[4945]: I0108 23:17:55.706677 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 08 23:17:55 crc kubenswrapper[4945]: I0108 23:17:55.936792 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 08 23:17:55 crc kubenswrapper[4945]: I0108 23:17:55.998137 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 08 23:17:56 crc kubenswrapper[4945]: I0108 23:17:56.037854 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 08 23:17:56 crc kubenswrapper[4945]: I0108 23:17:56.037948 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:17:56 crc kubenswrapper[4945]: I0108 23:17:56.037980 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:56 crc kubenswrapper[4945]: I0108 23:17:56.046917 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:17:56 crc kubenswrapper[4945]: I0108 23:17:56.046948 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:17:56 crc kubenswrapper[4945]: I0108 23:17:56.059073 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:56 crc kubenswrapper[4945]: I0108 23:17:56.114429 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 08 23:17:56 crc kubenswrapper[4945]: I0108 23:17:56.122461 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 08 23:17:56 crc kubenswrapper[4945]: I0108 23:17:56.130214 4945 util.go:30] "No sandbox for pod can be found. 
Jan 08 23:17:56 crc kubenswrapper[4945]: W0108 23:17:56.617326 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-7abb2dd69560c554e09c2dc5be75f4bf16aba4816e7d40c42f241d1b8c9d3eec WatchSource:0}: Error finding container 7abb2dd69560c554e09c2dc5be75f4bf16aba4816e7d40c42f241d1b8c9d3eec: Status 404 returned error can't find the container with id 7abb2dd69560c554e09c2dc5be75f4bf16aba4816e7d40c42f241d1b8c9d3eec
Jan 08 23:17:56 crc kubenswrapper[4945]: W0108 23:17:56.652453 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-407806663e135345293faaff88ff96058822b360efc8f5454a2b7f8d1594d0d8 WatchSource:0}: Error finding container 407806663e135345293faaff88ff96058822b360efc8f5454a2b7f8d1594d0d8: Status 404 returned error can't find the container with id 407806663e135345293faaff88ff96058822b360efc8f5454a2b7f8d1594d0d8
Jan 08 23:17:56 crc kubenswrapper[4945]: I0108 23:17:56.693051 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7abb2dd69560c554e09c2dc5be75f4bf16aba4816e7d40c42f241d1b8c9d3eec"}
Jan 08 23:17:56 crc kubenswrapper[4945]: I0108 23:17:56.697648 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"407806663e135345293faaff88ff96058822b360efc8f5454a2b7f8d1594d0d8"}
Jan 08 23:17:56 crc kubenswrapper[4945]: I0108 23:17:56.708439 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 08 23:17:56 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld
Jan 08 23:17:56 crc kubenswrapper[4945]: [+]process-running ok
Jan 08 23:17:56 crc kubenswrapper[4945]: healthz check failed
Jan 08 23:17:56 crc kubenswrapper[4945]: I0108 23:17:56.708499 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 08 23:17:56 crc kubenswrapper[4945]: I0108 23:17:56.711589 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3a8d6063e22d06f53888f2633e203a2a7817c438f29a7ab709331a8b9acaec3c"}
Jan 08 23:17:57 crc kubenswrapper[4945]: I0108 23:17:57.084090 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 08 23:17:57 crc kubenswrapper[4945]: I0108 23:17:57.264364 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbf1b64b-d572-4cb6-a6ed-12a63473ddb3-kubelet-dir\") pod \"dbf1b64b-d572-4cb6-a6ed-12a63473ddb3\" (UID: \"dbf1b64b-d572-4cb6-a6ed-12a63473ddb3\") "
Jan 08 23:17:57 crc kubenswrapper[4945]: I0108 23:17:57.264408 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbf1b64b-d572-4cb6-a6ed-12a63473ddb3-kube-api-access\") pod \"dbf1b64b-d572-4cb6-a6ed-12a63473ddb3\" (UID: \"dbf1b64b-d572-4cb6-a6ed-12a63473ddb3\") "
Jan 08 23:17:57 crc kubenswrapper[4945]: I0108 23:17:57.264705 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbf1b64b-d572-4cb6-a6ed-12a63473ddb3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dbf1b64b-d572-4cb6-a6ed-12a63473ddb3" (UID: "dbf1b64b-d572-4cb6-a6ed-12a63473ddb3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 08 23:17:57 crc kubenswrapper[4945]: I0108 23:17:57.279912 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf1b64b-d572-4cb6-a6ed-12a63473ddb3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dbf1b64b-d572-4cb6-a6ed-12a63473ddb3" (UID: "dbf1b64b-d572-4cb6-a6ed-12a63473ddb3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:17:57 crc kubenswrapper[4945]: I0108 23:17:57.365265 4945 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbf1b64b-d572-4cb6-a6ed-12a63473ddb3-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 08 23:17:57 crc kubenswrapper[4945]: I0108 23:17:57.365289 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dbf1b64b-d572-4cb6-a6ed-12a63473ddb3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 08 23:17:57 crc kubenswrapper[4945]: I0108 23:17:57.707711 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 08 23:17:57 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld
Jan 08 23:17:57 crc kubenswrapper[4945]: [+]process-running ok
Jan 08 23:17:57 crc kubenswrapper[4945]: healthz check failed
Jan 08 23:17:57 crc kubenswrapper[4945]: I0108 23:17:57.708891 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 08 23:17:57 crc kubenswrapper[4945]: I0108 23:17:57.737030 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"dbf1b64b-d572-4cb6-a6ed-12a63473ddb3","Type":"ContainerDied","Data":"c865ea1313601f140c6b87590741f195b75c56c2ccf2b5d15d9081085fccc37c"}
Jan 08 23:17:57 crc kubenswrapper[4945]: I0108 23:17:57.737080 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c865ea1313601f140c6b87590741f195b75c56c2ccf2b5d15d9081085fccc37c"
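Note: the teardown above runs the volume reconciler's usual three-step sequence per volume (operationExecutor.UnmountVolume started, UnmountVolume.TearDown succeeded, Volume detached) as the actual state of the world converges on the desired state after the pruner pod terminated. A toy version of that idempotent loop; the types and names are mine, standing in for the kubelet's actual-state machinery.

package main

import "fmt"

// reconcile unmounts every volume that is actually mounted but no longer
// desired, mirroring the UnmountVolume -> TearDown -> detached sequence.
func reconcile(desired, actual map[string]bool) {
	for vol := range actual {
		if !desired[vol] {
			fmt.Printf("UnmountVolume started for volume %q\n", vol)
			// ... the volume plugin's TearDown would run here ...
			fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", vol)
			delete(actual, vol)
			fmt.Printf("Volume detached for volume %q\n", vol)
		}
	}
}

func main() {
	// Deleting the pod empties the desired set; both mounts get torn down.
	desired := map[string]bool{}
	actual := map[string]bool{"kubelet-dir": true, "kube-api-access": true}
	reconcile(desired, actual)
}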
containerID="c865ea1313601f140c6b87590741f195b75c56c2ccf2b5d15d9081085fccc37c" Jan 08 23:17:57 crc kubenswrapper[4945]: I0108 23:17:57.737143 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 08 23:17:57 crc kubenswrapper[4945]: I0108 23:17:57.745403 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4981ee77b6caef0a0efa57bb1ff8f8f157626e03b9bce6b1277e6a24737bf357"} Jan 08 23:17:57 crc kubenswrapper[4945]: I0108 23:17:57.754808 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"15cb5a598fa7154f9773bd0960eb1710e976695808b3d133285675748ace1501"} Jan 08 23:17:57 crc kubenswrapper[4945]: I0108 23:17:57.755165 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:17:57 crc kubenswrapper[4945]: I0108 23:17:57.767877 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f1b4f4ee8ed283bf1035c9eb23f42b39a205948cfac7f3b149fb12810ff25cf1"} Jan 08 23:17:58 crc kubenswrapper[4945]: I0108 23:17:58.713502 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 08 23:17:58 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld Jan 08 23:17:58 crc kubenswrapper[4945]: [+]process-running ok Jan 08 23:17:58 crc kubenswrapper[4945]: healthz check failed Jan 08 23:17:58 crc kubenswrapper[4945]: I0108 23:17:58.713556 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 08 23:17:58 crc kubenswrapper[4945]: I0108 23:17:58.713675 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 08 23:17:58 crc kubenswrapper[4945]: E0108 23:17:58.713940 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf1b64b-d572-4cb6-a6ed-12a63473ddb3" containerName="pruner" Jan 08 23:17:58 crc kubenswrapper[4945]: I0108 23:17:58.713951 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf1b64b-d572-4cb6-a6ed-12a63473ddb3" containerName="pruner" Jan 08 23:17:58 crc kubenswrapper[4945]: I0108 23:17:58.714074 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbf1b64b-d572-4cb6-a6ed-12a63473ddb3" containerName="pruner" Jan 08 23:17:58 crc kubenswrapper[4945]: I0108 23:17:58.714536 4945 util.go:30] "No sandbox for pod can be found. 
Jan 08 23:17:58 crc kubenswrapper[4945]: I0108 23:17:58.719449 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 08 23:17:58 crc kubenswrapper[4945]: I0108 23:17:58.719623 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 08 23:17:58 crc kubenswrapper[4945]: I0108 23:17:58.746560 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 08 23:17:58 crc kubenswrapper[4945]: I0108 23:17:58.887356 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98cc6cfa-0774-4154-9ef9-ccb4e8c76cab-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"98cc6cfa-0774-4154-9ef9-ccb4e8c76cab\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 08 23:17:58 crc kubenswrapper[4945]: I0108 23:17:58.888291 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98cc6cfa-0774-4154-9ef9-ccb4e8c76cab-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"98cc6cfa-0774-4154-9ef9-ccb4e8c76cab\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 08 23:17:58 crc kubenswrapper[4945]: I0108 23:17:58.990107 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98cc6cfa-0774-4154-9ef9-ccb4e8c76cab-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"98cc6cfa-0774-4154-9ef9-ccb4e8c76cab\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 08 23:17:58 crc kubenswrapper[4945]: I0108 23:17:58.990210 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98cc6cfa-0774-4154-9ef9-ccb4e8c76cab-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"98cc6cfa-0774-4154-9ef9-ccb4e8c76cab\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 08 23:17:58 crc kubenswrapper[4945]: I0108 23:17:58.990573 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98cc6cfa-0774-4154-9ef9-ccb4e8c76cab-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"98cc6cfa-0774-4154-9ef9-ccb4e8c76cab\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 08 23:17:59 crc kubenswrapper[4945]: I0108 23:17:59.046550 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98cc6cfa-0774-4154-9ef9-ccb4e8c76cab-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"98cc6cfa-0774-4154-9ef9-ccb4e8c76cab\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 08 23:17:59 crc kubenswrapper[4945]: I0108 23:17:59.051748 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 08 23:17:59 crc kubenswrapper[4945]: I0108 23:17:59.221082 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-79krd"
Jan 08 23:17:59 crc kubenswrapper[4945]: I0108 23:17:59.707802 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 08 23:17:59 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld
Jan 08 23:17:59 crc kubenswrapper[4945]: [+]process-running ok
Jan 08 23:17:59 crc kubenswrapper[4945]: healthz check failed
Jan 08 23:17:59 crc kubenswrapper[4945]: I0108 23:17:59.707903 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 08 23:18:00 crc kubenswrapper[4945]: I0108 23:18:00.706709 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 08 23:18:00 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld
Jan 08 23:18:00 crc kubenswrapper[4945]: [+]process-running ok
Jan 08 23:18:00 crc kubenswrapper[4945]: healthz check failed
Jan 08 23:18:00 crc kubenswrapper[4945]: I0108 23:18:00.707078 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 08 23:18:01 crc kubenswrapper[4945]: I0108 23:18:01.708055 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 08 23:18:01 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld
Jan 08 23:18:01 crc kubenswrapper[4945]: [+]process-running ok
Jan 08 23:18:01 crc kubenswrapper[4945]: healthz check failed
Jan 08 23:18:01 crc kubenswrapper[4945]: I0108 23:18:01.708220 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 08 23:18:02 crc kubenswrapper[4945]: I0108 23:18:02.707920 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 08 23:18:02 crc kubenswrapper[4945]: [-]has-synced failed: reason withheld
Jan 08 23:18:02 crc kubenswrapper[4945]: [+]process-running ok
Jan 08 23:18:02 crc kubenswrapper[4945]: healthz check failed
Jan 08 23:18:02 crc kubenswrapper[4945]: I0108 23:18:02.707985 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
with statuscode: 500" Jan 08 23:18:03 crc kubenswrapper[4945]: I0108 23:18:03.707153 4945 patch_prober.go:28] interesting pod/router-default-5444994796-62c5v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 08 23:18:03 crc kubenswrapper[4945]: [+]has-synced ok Jan 08 23:18:03 crc kubenswrapper[4945]: [+]process-running ok Jan 08 23:18:03 crc kubenswrapper[4945]: healthz check failed Jan 08 23:18:03 crc kubenswrapper[4945]: I0108 23:18:03.707530 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62c5v" podUID="1cce3808-cf0e-430b-bf61-b86cee0baf44" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 08 23:18:03 crc kubenswrapper[4945]: I0108 23:18:03.886523 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-n2sv5" Jan 08 23:18:03 crc kubenswrapper[4945]: I0108 23:18:03.961240 4945 patch_prober.go:28] interesting pod/console-f9d7485db-pmfct container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 08 23:18:03 crc kubenswrapper[4945]: I0108 23:18:03.961297 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-pmfct" podUID="490f6a3a-21e2-4264-8a92-75202ba3db64" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 08 23:18:04 crc kubenswrapper[4945]: I0108 23:18:04.707164 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-62c5v" Jan 08 23:18:04 crc kubenswrapper[4945]: I0108 23:18:04.709868 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-62c5v" Jan 08 23:18:11 crc kubenswrapper[4945]: I0108 23:18:11.392310 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:18:11 crc kubenswrapper[4945]: I0108 23:18:11.806931 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs\") pod \"network-metrics-daemon-g8gcl\" (UID: \"53cbedd0-f69d-4a28-9077-13fed644be95\") " pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:18:11 crc kubenswrapper[4945]: I0108 23:18:11.831651 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/53cbedd0-f69d-4a28-9077-13fed644be95-metrics-certs\") pod \"network-metrics-daemon-g8gcl\" (UID: \"53cbedd0-f69d-4a28-9077-13fed644be95\") " pod="openshift-multus/network-metrics-daemon-g8gcl" Jan 08 23:18:11 crc kubenswrapper[4945]: I0108 23:18:11.917881 4945 util.go:30] "No sandbox for pod can be found. 
Jan 08 23:18:13 crc kubenswrapper[4945]: I0108 23:18:13.021838 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5fcjl"
Jan 08 23:18:13 crc kubenswrapper[4945]: I0108 23:18:13.578192 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 08 23:18:13 crc kubenswrapper[4945]: I0108 23:18:13.578502 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 08 23:18:13 crc kubenswrapper[4945]: I0108 23:18:13.962087 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-pmfct"
Jan 08 23:18:13 crc kubenswrapper[4945]: I0108 23:18:13.977532 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-pmfct"
Jan 08 23:18:22 crc kubenswrapper[4945]: E0108 23:18:22.308032 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 08 23:18:22 crc kubenswrapper[4945]: E0108 23:18:22.309707 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w9l78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vgr9g_openshift-marketplace(dce1a9c0-149a-4062-a166-84829a9dc2ec): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 08 23:18:22 crc kubenswrapper[4945]: E0108 23:18:22.310908 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vgr9g" podUID="dce1a9c0-149a-4062-a166-84829a9dc2ec"
Jan 08 23:18:22 crc kubenswrapper[4945]: E0108 23:18:22.354245 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 08 23:18:22 crc kubenswrapper[4945]: E0108 23:18:22.354577 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt7cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mntf4_openshift-marketplace(5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 08 23:18:22 crc kubenswrapper[4945]: E0108 23:18:22.355747 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mntf4" podUID="5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db"
Jan 08 23:18:22 crc kubenswrapper[4945]: E0108 23:18:22.360040 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 08 23:18:22 crc kubenswrapper[4945]: E0108 23:18:22.360206 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n95dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dlpgj_openshift-marketplace(069f874e-b727-46fb-85e6-b36d4921412f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n95dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dlpgj_openshift-marketplace(069f874e-b727-46fb-85e6-b36d4921412f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 08 23:18:22 crc kubenswrapper[4945]: E0108 23:18:22.362019 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dlpgj" podUID="069f874e-b727-46fb-85e6-b36d4921412f" Jan 08 23:18:22 crc kubenswrapper[4945]: I0108 23:18:22.702018 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-g8gcl"] Jan 08 23:18:22 crc kubenswrapper[4945]: W0108 23:18:22.721972 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53cbedd0_f69d_4a28_9077_13fed644be95.slice/crio-65b010a0bd84e38d5497df269fb78d60d9865b9493b5815a9b1bd86f3671fc9a WatchSource:0}: Error finding container 65b010a0bd84e38d5497df269fb78d60d9865b9493b5815a9b1bd86f3671fc9a: Status 404 returned error can't find the container with id 65b010a0bd84e38d5497df269fb78d60d9865b9493b5815a9b1bd86f3671fc9a Jan 08 23:18:22 crc kubenswrapper[4945]: I0108 23:18:22.772588 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 08 23:18:22 crc kubenswrapper[4945]: I0108 23:18:22.906785 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvcsg" event={"ID":"08cb759a-5b37-46e6-9b1f-5e84fabc66cd","Type":"ContainerStarted","Data":"2f4efd5eff99c107bc96bd4204d03acef1bb48b871916a8900c1e705203cef5c"} Jan 08 23:18:22 crc kubenswrapper[4945]: I0108 23:18:22.912417 4945 generic.go:334] "Generic (PLEG): container finished" 
podID="b3238595-843e-4c3a-9e67-538483ac4b20" containerID="d182b6e21adb9b83b1fca3afe9e2ec31454e15a8de70b432a5dd048dee86b5db" exitCode=0 Jan 08 23:18:22 crc kubenswrapper[4945]: I0108 23:18:22.912505 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2hvk" event={"ID":"b3238595-843e-4c3a-9e67-538483ac4b20","Type":"ContainerDied","Data":"d182b6e21adb9b83b1fca3afe9e2ec31454e15a8de70b432a5dd048dee86b5db"} Jan 08 23:18:22 crc kubenswrapper[4945]: I0108 23:18:22.917366 4945 generic.go:334] "Generic (PLEG): container finished" podID="4befd52c-d042-4259-8998-533f8a61dddd" containerID="200c6d2dde2a36d314b103d486d2df52bab98bfd65b2a41c9f215f975ab7cc0a" exitCode=0 Jan 08 23:18:22 crc kubenswrapper[4945]: I0108 23:18:22.917440 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ptz" event={"ID":"4befd52c-d042-4259-8998-533f8a61dddd","Type":"ContainerDied","Data":"200c6d2dde2a36d314b103d486d2df52bab98bfd65b2a41c9f215f975ab7cc0a"} Jan 08 23:18:22 crc kubenswrapper[4945]: I0108 23:18:22.944712 4945 generic.go:334] "Generic (PLEG): container finished" podID="77f56ef6-cc93-457b-8f55-e8e74123159e" containerID="128082c95b600b1bf9334fd981aa4045d6f04590b2337e50e8794036c5df2f1d" exitCode=0 Jan 08 23:18:22 crc kubenswrapper[4945]: I0108 23:18:22.944814 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfqfk" event={"ID":"77f56ef6-cc93-457b-8f55-e8e74123159e","Type":"ContainerDied","Data":"128082c95b600b1bf9334fd981aa4045d6f04590b2337e50e8794036c5df2f1d"} Jan 08 23:18:22 crc kubenswrapper[4945]: I0108 23:18:22.951583 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w72wg" event={"ID":"fca058c9-9d1b-41b4-b057-c90327ca3628","Type":"ContainerDied","Data":"3ca59e66fe0fdf5868567a48108c8893d2ae075af0e750a44b73b4335da4967c"} Jan 08 23:18:22 crc kubenswrapper[4945]: I0108 23:18:22.951510 4945 generic.go:334] "Generic (PLEG): container finished" podID="fca058c9-9d1b-41b4-b057-c90327ca3628" containerID="3ca59e66fe0fdf5868567a48108c8893d2ae075af0e750a44b73b4335da4967c" exitCode=0 Jan 08 23:18:22 crc kubenswrapper[4945]: I0108 23:18:22.960386 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" event={"ID":"53cbedd0-f69d-4a28-9077-13fed644be95","Type":"ContainerStarted","Data":"65b010a0bd84e38d5497df269fb78d60d9865b9493b5815a9b1bd86f3671fc9a"} Jan 08 23:18:22 crc kubenswrapper[4945]: E0108 23:18:22.962771 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dlpgj" podUID="069f874e-b727-46fb-85e6-b36d4921412f" Jan 08 23:18:22 crc kubenswrapper[4945]: E0108 23:18:22.964300 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vgr9g" podUID="dce1a9c0-149a-4062-a166-84829a9dc2ec" Jan 08 23:18:22 crc kubenswrapper[4945]: E0108 23:18:22.966274 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-mntf4" podUID="5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db" Jan 08 23:18:23 crc kubenswrapper[4945]: I0108 23:18:23.967592 4945 generic.go:334] "Generic (PLEG): container finished" podID="98cc6cfa-0774-4154-9ef9-ccb4e8c76cab" containerID="85c16b8613f63f964edc1dd415bbc00a8b20b6e0df8206d818a5707df07b11b8" exitCode=0 Jan 08 23:18:23 crc kubenswrapper[4945]: I0108 23:18:23.967778 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"98cc6cfa-0774-4154-9ef9-ccb4e8c76cab","Type":"ContainerDied","Data":"85c16b8613f63f964edc1dd415bbc00a8b20b6e0df8206d818a5707df07b11b8"} Jan 08 23:18:23 crc kubenswrapper[4945]: I0108 23:18:23.968195 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"98cc6cfa-0774-4154-9ef9-ccb4e8c76cab","Type":"ContainerStarted","Data":"61bd1a5c56b3db121b54df47aedca23ad21180efc3b0f311cc1e0425e965f3ce"} Jan 08 23:18:23 crc kubenswrapper[4945]: I0108 23:18:23.970538 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w72wg" event={"ID":"fca058c9-9d1b-41b4-b057-c90327ca3628","Type":"ContainerStarted","Data":"3a0d54b7058c8a4a27632e0c696393776fd18d7f9b769a7633283bb2f500baac"} Jan 08 23:18:23 crc kubenswrapper[4945]: I0108 23:18:23.972897 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" event={"ID":"53cbedd0-f69d-4a28-9077-13fed644be95","Type":"ContainerStarted","Data":"02b701d4f80cb37789fcaa3547a8737aa9a24a4771580149653ba346dc40623a"} Jan 08 23:18:23 crc kubenswrapper[4945]: I0108 23:18:23.972926 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-g8gcl" event={"ID":"53cbedd0-f69d-4a28-9077-13fed644be95","Type":"ContainerStarted","Data":"826d5c536bad1c386a723d54cb2f8b1749a0df3d7d1c97f6eaf87bfdba05dcb1"} Jan 08 23:18:23 crc kubenswrapper[4945]: I0108 23:18:23.974585 4945 generic.go:334] "Generic (PLEG): container finished" podID="08cb759a-5b37-46e6-9b1f-5e84fabc66cd" containerID="2f4efd5eff99c107bc96bd4204d03acef1bb48b871916a8900c1e705203cef5c" exitCode=0 Jan 08 23:18:23 crc kubenswrapper[4945]: I0108 23:18:23.974631 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvcsg" event={"ID":"08cb759a-5b37-46e6-9b1f-5e84fabc66cd","Type":"ContainerDied","Data":"2f4efd5eff99c107bc96bd4204d03acef1bb48b871916a8900c1e705203cef5c"} Jan 08 23:18:23 crc kubenswrapper[4945]: I0108 23:18:23.977928 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2hvk" event={"ID":"b3238595-843e-4c3a-9e67-538483ac4b20","Type":"ContainerStarted","Data":"078ff84b0a35b604a2519bc7d0359013ef6bb745cb928a8a6a3e4c9397026f53"} Jan 08 23:18:23 crc kubenswrapper[4945]: I0108 23:18:23.980399 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ptz" event={"ID":"4befd52c-d042-4259-8998-533f8a61dddd","Type":"ContainerStarted","Data":"ef425d09ea0e1cc1ae0a5efb6a868d98ab8459d1df1e30bbf3d01c16760cd890"} Jan 08 23:18:23 crc kubenswrapper[4945]: I0108 23:18:23.982636 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfqfk" 
event={"ID":"77f56ef6-cc93-457b-8f55-e8e74123159e","Type":"ContainerStarted","Data":"bec59a331392f7540a9cfbceb1b86cd44448d93cf6c998eb4004feaea9fe6128"} Jan 08 23:18:24 crc kubenswrapper[4945]: I0108 23:18:24.007037 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r2hvk" podStartSLOduration=2.473505867 podStartE2EDuration="37.00698337s" podCreationTimestamp="2026-01-08 23:17:47 +0000 UTC" firstStartedPulling="2026-01-08 23:17:49.152827723 +0000 UTC m=+139.463986669" lastFinishedPulling="2026-01-08 23:18:23.686305226 +0000 UTC m=+173.997464172" observedRunningTime="2026-01-08 23:18:24.005799358 +0000 UTC m=+174.316958314" watchObservedRunningTime="2026-01-08 23:18:24.00698337 +0000 UTC m=+174.318142316" Jan 08 23:18:24 crc kubenswrapper[4945]: I0108 23:18:24.041399 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-g8gcl" podStartSLOduration=155.041381192 podStartE2EDuration="2m35.041381192s" podCreationTimestamp="2026-01-08 23:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:18:24.036308376 +0000 UTC m=+174.347467322" watchObservedRunningTime="2026-01-08 23:18:24.041381192 +0000 UTC m=+174.352540138" Jan 08 23:18:24 crc kubenswrapper[4945]: I0108 23:18:24.055610 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w8ptz" podStartSLOduration=5.628146852 podStartE2EDuration="40.055595853s" podCreationTimestamp="2026-01-08 23:17:44 +0000 UTC" firstStartedPulling="2026-01-08 23:17:48.909359358 +0000 UTC m=+139.220518304" lastFinishedPulling="2026-01-08 23:18:23.336808359 +0000 UTC m=+173.647967305" observedRunningTime="2026-01-08 23:18:24.054922695 +0000 UTC m=+174.366081651" watchObservedRunningTime="2026-01-08 23:18:24.055595853 +0000 UTC m=+174.366754799" Jan 08 23:18:24 crc kubenswrapper[4945]: I0108 23:18:24.075071 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w72wg" podStartSLOduration=4.520158858 podStartE2EDuration="39.075056105s" podCreationTimestamp="2026-01-08 23:17:45 +0000 UTC" firstStartedPulling="2026-01-08 23:17:48.828281635 +0000 UTC m=+139.139440571" lastFinishedPulling="2026-01-08 23:18:23.383178872 +0000 UTC m=+173.694337818" observedRunningTime="2026-01-08 23:18:24.072912777 +0000 UTC m=+174.384071733" watchObservedRunningTime="2026-01-08 23:18:24.075056105 +0000 UTC m=+174.386215051" Jan 08 23:18:24 crc kubenswrapper[4945]: I0108 23:18:24.092895 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bfqfk" podStartSLOduration=4.029247874 podStartE2EDuration="37.092881692s" podCreationTimestamp="2026-01-08 23:17:47 +0000 UTC" firstStartedPulling="2026-01-08 23:17:50.473157981 +0000 UTC m=+140.784316927" lastFinishedPulling="2026-01-08 23:18:23.536791799 +0000 UTC m=+173.847950745" observedRunningTime="2026-01-08 23:18:24.091764612 +0000 UTC m=+174.402923558" watchObservedRunningTime="2026-01-08 23:18:24.092881692 +0000 UTC m=+174.404040638" Jan 08 23:18:24 crc kubenswrapper[4945]: I0108 23:18:24.989529 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvcsg" 
event={"ID":"08cb759a-5b37-46e6-9b1f-5e84fabc66cd","Type":"ContainerStarted","Data":"fa6e4adbda65f6a1e74db976b974abc062dd08976a439631a705bdeb9d324021"} Jan 08 23:18:25 crc kubenswrapper[4945]: I0108 23:18:25.011345 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dvcsg" podStartSLOduration=3.078564068 podStartE2EDuration="37.011324329s" podCreationTimestamp="2026-01-08 23:17:48 +0000 UTC" firstStartedPulling="2026-01-08 23:17:50.461116209 +0000 UTC m=+140.772275155" lastFinishedPulling="2026-01-08 23:18:24.39387647 +0000 UTC m=+174.705035416" observedRunningTime="2026-01-08 23:18:25.008590915 +0000 UTC m=+175.319749861" watchObservedRunningTime="2026-01-08 23:18:25.011324329 +0000 UTC m=+175.322483275" Jan 08 23:18:25 crc kubenswrapper[4945]: I0108 23:18:25.321020 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 08 23:18:25 crc kubenswrapper[4945]: I0108 23:18:25.436301 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w8ptz" Jan 08 23:18:25 crc kubenswrapper[4945]: I0108 23:18:25.436520 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w8ptz" Jan 08 23:18:25 crc kubenswrapper[4945]: I0108 23:18:25.483639 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98cc6cfa-0774-4154-9ef9-ccb4e8c76cab-kubelet-dir\") pod \"98cc6cfa-0774-4154-9ef9-ccb4e8c76cab\" (UID: \"98cc6cfa-0774-4154-9ef9-ccb4e8c76cab\") " Jan 08 23:18:25 crc kubenswrapper[4945]: I0108 23:18:25.483727 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98cc6cfa-0774-4154-9ef9-ccb4e8c76cab-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "98cc6cfa-0774-4154-9ef9-ccb4e8c76cab" (UID: "98cc6cfa-0774-4154-9ef9-ccb4e8c76cab"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:18:25 crc kubenswrapper[4945]: I0108 23:18:25.483790 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98cc6cfa-0774-4154-9ef9-ccb4e8c76cab-kube-api-access\") pod \"98cc6cfa-0774-4154-9ef9-ccb4e8c76cab\" (UID: \"98cc6cfa-0774-4154-9ef9-ccb4e8c76cab\") " Jan 08 23:18:25 crc kubenswrapper[4945]: I0108 23:18:25.484074 4945 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98cc6cfa-0774-4154-9ef9-ccb4e8c76cab-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 08 23:18:25 crc kubenswrapper[4945]: I0108 23:18:25.502932 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98cc6cfa-0774-4154-9ef9-ccb4e8c76cab-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "98cc6cfa-0774-4154-9ef9-ccb4e8c76cab" (UID: "98cc6cfa-0774-4154-9ef9-ccb4e8c76cab"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:18:25 crc kubenswrapper[4945]: I0108 23:18:25.584793 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98cc6cfa-0774-4154-9ef9-ccb4e8c76cab-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 08 23:18:25 crc kubenswrapper[4945]: I0108 23:18:25.814532 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w72wg" Jan 08 23:18:25 crc kubenswrapper[4945]: I0108 23:18:25.814572 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w72wg" Jan 08 23:18:25 crc kubenswrapper[4945]: I0108 23:18:25.857577 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w72wg" Jan 08 23:18:25 crc kubenswrapper[4945]: I0108 23:18:25.994359 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 08 23:18:25 crc kubenswrapper[4945]: I0108 23:18:25.994461 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"98cc6cfa-0774-4154-9ef9-ccb4e8c76cab","Type":"ContainerDied","Data":"61bd1a5c56b3db121b54df47aedca23ad21180efc3b0f311cc1e0425e965f3ce"} Jan 08 23:18:25 crc kubenswrapper[4945]: I0108 23:18:25.994496 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61bd1a5c56b3db121b54df47aedca23ad21180efc3b0f311cc1e0425e965f3ce" Jan 08 23:18:26 crc kubenswrapper[4945]: I0108 23:18:26.494328 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-w8ptz" podUID="4befd52c-d042-4259-8998-533f8a61dddd" containerName="registry-server" probeResult="failure" output=< Jan 08 23:18:26 crc kubenswrapper[4945]: timeout: failed to connect service ":50051" within 1s Jan 08 23:18:26 crc kubenswrapper[4945]: > Jan 08 23:18:27 crc kubenswrapper[4945]: I0108 23:18:27.598066 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r2hvk" Jan 08 23:18:27 crc kubenswrapper[4945]: I0108 23:18:27.599413 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r2hvk" Jan 08 23:18:27 crc kubenswrapper[4945]: I0108 23:18:27.635189 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r2hvk" Jan 08 23:18:28 crc kubenswrapper[4945]: I0108 23:18:28.047558 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r2hvk" Jan 08 23:18:28 crc kubenswrapper[4945]: I0108 23:18:28.069317 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bfqfk" Jan 08 23:18:28 crc kubenswrapper[4945]: I0108 23:18:28.069584 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bfqfk" Jan 08 23:18:28 crc kubenswrapper[4945]: I0108 23:18:28.116831 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bfqfk" Jan 08 23:18:28 crc kubenswrapper[4945]: I0108 23:18:28.566965 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-dvcsg" Jan 08 23:18:28 crc kubenswrapper[4945]: I0108 23:18:28.567033 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dvcsg" Jan 08 23:18:29 crc kubenswrapper[4945]: I0108 23:18:29.041582 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bfqfk" Jan 08 23:18:29 crc kubenswrapper[4945]: I0108 23:18:29.611285 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dvcsg" podUID="08cb759a-5b37-46e6-9b1f-5e84fabc66cd" containerName="registry-server" probeResult="failure" output=< Jan 08 23:18:29 crc kubenswrapper[4945]: timeout: failed to connect service ":50051" within 1s Jan 08 23:18:29 crc kubenswrapper[4945]: > Jan 08 23:18:31 crc kubenswrapper[4945]: I0108 23:18:31.135127 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfqfk"] Jan 08 23:18:32 crc kubenswrapper[4945]: I0108 23:18:32.020502 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bfqfk" podUID="77f56ef6-cc93-457b-8f55-e8e74123159e" containerName="registry-server" containerID="cri-o://bec59a331392f7540a9cfbceb1b86cd44448d93cf6c998eb4004feaea9fe6128" gracePeriod=2 Jan 08 23:18:33 crc kubenswrapper[4945]: I0108 23:18:33.110249 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 08 23:18:33 crc kubenswrapper[4945]: E0108 23:18:33.110608 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98cc6cfa-0774-4154-9ef9-ccb4e8c76cab" containerName="pruner" Jan 08 23:18:33 crc kubenswrapper[4945]: I0108 23:18:33.110625 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="98cc6cfa-0774-4154-9ef9-ccb4e8c76cab" containerName="pruner" Jan 08 23:18:33 crc kubenswrapper[4945]: I0108 23:18:33.110748 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="98cc6cfa-0774-4154-9ef9-ccb4e8c76cab" containerName="pruner" Jan 08 23:18:33 crc kubenswrapper[4945]: I0108 23:18:33.111220 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 08 23:18:33 crc kubenswrapper[4945]: I0108 23:18:33.115120 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 08 23:18:33 crc kubenswrapper[4945]: I0108 23:18:33.115207 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 08 23:18:33 crc kubenswrapper[4945]: I0108 23:18:33.118038 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 08 23:18:33 crc kubenswrapper[4945]: I0108 23:18:33.176230 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/465f3db6-3336-4406-9917-a935a783d980-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"465f3db6-3336-4406-9917-a935a783d980\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 08 23:18:33 crc kubenswrapper[4945]: I0108 23:18:33.176750 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/465f3db6-3336-4406-9917-a935a783d980-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"465f3db6-3336-4406-9917-a935a783d980\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 08 23:18:33 crc kubenswrapper[4945]: I0108 23:18:33.277381 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/465f3db6-3336-4406-9917-a935a783d980-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"465f3db6-3336-4406-9917-a935a783d980\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 08 23:18:33 crc kubenswrapper[4945]: I0108 23:18:33.277461 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/465f3db6-3336-4406-9917-a935a783d980-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"465f3db6-3336-4406-9917-a935a783d980\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 08 23:18:33 crc kubenswrapper[4945]: I0108 23:18:33.277579 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/465f3db6-3336-4406-9917-a935a783d980-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"465f3db6-3336-4406-9917-a935a783d980\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 08 23:18:33 crc kubenswrapper[4945]: I0108 23:18:33.300641 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/465f3db6-3336-4406-9917-a935a783d980-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"465f3db6-3336-4406-9917-a935a783d980\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 08 23:18:33 crc kubenswrapper[4945]: I0108 23:18:33.438481 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 08 23:18:33 crc kubenswrapper[4945]: I0108 23:18:33.825260 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 08 23:18:34 crc kubenswrapper[4945]: I0108 23:18:34.035201 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"465f3db6-3336-4406-9917-a935a783d980","Type":"ContainerStarted","Data":"5e78647244ed24dad0563d42fb75f26fd9c08decbf1d2d629190922de455ec58"} Jan 08 23:18:34 crc kubenswrapper[4945]: I0108 23:18:34.036945 4945 generic.go:334] "Generic (PLEG): container finished" podID="77f56ef6-cc93-457b-8f55-e8e74123159e" containerID="bec59a331392f7540a9cfbceb1b86cd44448d93cf6c998eb4004feaea9fe6128" exitCode=0 Jan 08 23:18:34 crc kubenswrapper[4945]: I0108 23:18:34.036975 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfqfk" event={"ID":"77f56ef6-cc93-457b-8f55-e8e74123159e","Type":"ContainerDied","Data":"bec59a331392f7540a9cfbceb1b86cd44448d93cf6c998eb4004feaea9fe6128"} Jan 08 23:18:34 crc kubenswrapper[4945]: I0108 23:18:34.280490 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bfqfk" Jan 08 23:18:34 crc kubenswrapper[4945]: I0108 23:18:34.390538 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f56ef6-cc93-457b-8f55-e8e74123159e-catalog-content\") pod \"77f56ef6-cc93-457b-8f55-e8e74123159e\" (UID: \"77f56ef6-cc93-457b-8f55-e8e74123159e\") " Jan 08 23:18:34 crc kubenswrapper[4945]: I0108 23:18:34.390604 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdbgq\" (UniqueName: \"kubernetes.io/projected/77f56ef6-cc93-457b-8f55-e8e74123159e-kube-api-access-pdbgq\") pod \"77f56ef6-cc93-457b-8f55-e8e74123159e\" (UID: \"77f56ef6-cc93-457b-8f55-e8e74123159e\") " Jan 08 23:18:34 crc kubenswrapper[4945]: I0108 23:18:34.390642 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f56ef6-cc93-457b-8f55-e8e74123159e-utilities\") pod \"77f56ef6-cc93-457b-8f55-e8e74123159e\" (UID: \"77f56ef6-cc93-457b-8f55-e8e74123159e\") " Jan 08 23:18:34 crc kubenswrapper[4945]: I0108 23:18:34.391415 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77f56ef6-cc93-457b-8f55-e8e74123159e-utilities" (OuterVolumeSpecName: "utilities") pod "77f56ef6-cc93-457b-8f55-e8e74123159e" (UID: "77f56ef6-cc93-457b-8f55-e8e74123159e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:18:34 crc kubenswrapper[4945]: I0108 23:18:34.396803 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77f56ef6-cc93-457b-8f55-e8e74123159e-kube-api-access-pdbgq" (OuterVolumeSpecName: "kube-api-access-pdbgq") pod "77f56ef6-cc93-457b-8f55-e8e74123159e" (UID: "77f56ef6-cc93-457b-8f55-e8e74123159e"). InnerVolumeSpecName "kube-api-access-pdbgq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:18:34 crc kubenswrapper[4945]: I0108 23:18:34.414178 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77f56ef6-cc93-457b-8f55-e8e74123159e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77f56ef6-cc93-457b-8f55-e8e74123159e" (UID: "77f56ef6-cc93-457b-8f55-e8e74123159e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:18:34 crc kubenswrapper[4945]: I0108 23:18:34.491903 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f56ef6-cc93-457b-8f55-e8e74123159e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:18:34 crc kubenswrapper[4945]: I0108 23:18:34.491947 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdbgq\" (UniqueName: \"kubernetes.io/projected/77f56ef6-cc93-457b-8f55-e8e74123159e-kube-api-access-pdbgq\") on node \"crc\" DevicePath \"\"" Jan 08 23:18:34 crc kubenswrapper[4945]: I0108 23:18:34.491963 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f56ef6-cc93-457b-8f55-e8e74123159e-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:18:35 crc kubenswrapper[4945]: I0108 23:18:35.044935 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bfqfk" event={"ID":"77f56ef6-cc93-457b-8f55-e8e74123159e","Type":"ContainerDied","Data":"187f25ddaf067cfbd16336076dd6ab95c2bf1463f89b5c4692c3c4eb8c3b4cc3"} Jan 08 23:18:35 crc kubenswrapper[4945]: I0108 23:18:35.045116 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bfqfk" Jan 08 23:18:35 crc kubenswrapper[4945]: I0108 23:18:35.045478 4945 scope.go:117] "RemoveContainer" containerID="bec59a331392f7540a9cfbceb1b86cd44448d93cf6c998eb4004feaea9fe6128" Jan 08 23:18:35 crc kubenswrapper[4945]: I0108 23:18:35.046915 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"465f3db6-3336-4406-9917-a935a783d980","Type":"ContainerStarted","Data":"c49a1b74794e744f1940c455643e3c94c71024cb50809d51d31d169b54979436"} Jan 08 23:18:35 crc kubenswrapper[4945]: I0108 23:18:35.070057 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.070032071 podStartE2EDuration="2.070032071s" podCreationTimestamp="2026-01-08 23:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:18:35.067357539 +0000 UTC m=+185.378516485" watchObservedRunningTime="2026-01-08 23:18:35.070032071 +0000 UTC m=+185.381191017" Jan 08 23:18:35 crc kubenswrapper[4945]: I0108 23:18:35.073130 4945 scope.go:117] "RemoveContainer" containerID="128082c95b600b1bf9334fd981aa4045d6f04590b2337e50e8794036c5df2f1d" Jan 08 23:18:35 crc kubenswrapper[4945]: I0108 23:18:35.085249 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfqfk"] Jan 08 23:18:35 crc kubenswrapper[4945]: I0108 23:18:35.091311 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bfqfk"] Jan 08 23:18:35 crc kubenswrapper[4945]: I0108 23:18:35.099211 4945 scope.go:117] "RemoveContainer" 
containerID="07a5ea956a5535e3427a4611470efcc1933142aa157943b91241f5d8ccc996d8" Jan 08 23:18:35 crc kubenswrapper[4945]: I0108 23:18:35.481554 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w8ptz" Jan 08 23:18:35 crc kubenswrapper[4945]: I0108 23:18:35.529955 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w8ptz" Jan 08 23:18:35 crc kubenswrapper[4945]: I0108 23:18:35.850351 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w72wg" Jan 08 23:18:36 crc kubenswrapper[4945]: I0108 23:18:36.026787 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77f56ef6-cc93-457b-8f55-e8e74123159e" path="/var/lib/kubelet/pods/77f56ef6-cc93-457b-8f55-e8e74123159e/volumes" Jan 08 23:18:36 crc kubenswrapper[4945]: I0108 23:18:36.056026 4945 generic.go:334] "Generic (PLEG): container finished" podID="465f3db6-3336-4406-9917-a935a783d980" containerID="c49a1b74794e744f1940c455643e3c94c71024cb50809d51d31d169b54979436" exitCode=0 Jan 08 23:18:36 crc kubenswrapper[4945]: I0108 23:18:36.056216 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"465f3db6-3336-4406-9917-a935a783d980","Type":"ContainerDied","Data":"c49a1b74794e744f1940c455643e3c94c71024cb50809d51d31d169b54979436"} Jan 08 23:18:36 crc kubenswrapper[4945]: I0108 23:18:36.164969 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 08 23:18:37 crc kubenswrapper[4945]: I0108 23:18:37.734680 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w72wg"] Jan 08 23:18:37 crc kubenswrapper[4945]: I0108 23:18:37.735259 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w72wg" podUID="fca058c9-9d1b-41b4-b057-c90327ca3628" containerName="registry-server" containerID="cri-o://3a0d54b7058c8a4a27632e0c696393776fd18d7f9b769a7633283bb2f500baac" gracePeriod=2 Jan 08 23:18:38 crc kubenswrapper[4945]: I0108 23:18:38.612016 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dvcsg" Jan 08 23:18:38 crc kubenswrapper[4945]: I0108 23:18:38.653361 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dvcsg" Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.074045 4945 generic.go:334] "Generic (PLEG): container finished" podID="fca058c9-9d1b-41b4-b057-c90327ca3628" containerID="3a0d54b7058c8a4a27632e0c696393776fd18d7f9b769a7633283bb2f500baac" exitCode=0 Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.074126 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w72wg" event={"ID":"fca058c9-9d1b-41b4-b057-c90327ca3628","Type":"ContainerDied","Data":"3a0d54b7058c8a4a27632e0c696393776fd18d7f9b769a7633283bb2f500baac"} Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.103388 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 08 23:18:39 crc kubenswrapper[4945]: E0108 23:18:39.104006 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77f56ef6-cc93-457b-8f55-e8e74123159e" containerName="extract-utilities" Jan 08 
23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.104041 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f56ef6-cc93-457b-8f55-e8e74123159e" containerName="extract-utilities" Jan 08 23:18:39 crc kubenswrapper[4945]: E0108 23:18:39.104071 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77f56ef6-cc93-457b-8f55-e8e74123159e" containerName="registry-server" Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.104083 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f56ef6-cc93-457b-8f55-e8e74123159e" containerName="registry-server" Jan 08 23:18:39 crc kubenswrapper[4945]: E0108 23:18:39.104109 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77f56ef6-cc93-457b-8f55-e8e74123159e" containerName="extract-content" Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.104123 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f56ef6-cc93-457b-8f55-e8e74123159e" containerName="extract-content" Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.104387 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="77f56ef6-cc93-457b-8f55-e8e74123159e" containerName="registry-server" Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.105203 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.222212 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.256911 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6e648b5-5638-4b26-beff-c0f2bac003fc-kube-api-access\") pod \"installer-9-crc\" (UID: \"c6e648b5-5638-4b26-beff-c0f2bac003fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.257055 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c6e648b5-5638-4b26-beff-c0f2bac003fc-var-lock\") pod \"installer-9-crc\" (UID: \"c6e648b5-5638-4b26-beff-c0f2bac003fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.257097 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6e648b5-5638-4b26-beff-c0f2bac003fc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c6e648b5-5638-4b26-beff-c0f2bac003fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.358504 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c6e648b5-5638-4b26-beff-c0f2bac003fc-var-lock\") pod \"installer-9-crc\" (UID: \"c6e648b5-5638-4b26-beff-c0f2bac003fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.358581 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6e648b5-5638-4b26-beff-c0f2bac003fc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c6e648b5-5638-4b26-beff-c0f2bac003fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.358626 4945 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c6e648b5-5638-4b26-beff-c0f2bac003fc-var-lock\") pod \"installer-9-crc\" (UID: \"c6e648b5-5638-4b26-beff-c0f2bac003fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.358649 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6e648b5-5638-4b26-beff-c0f2bac003fc-kube-api-access\") pod \"installer-9-crc\" (UID: \"c6e648b5-5638-4b26-beff-c0f2bac003fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.358761 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6e648b5-5638-4b26-beff-c0f2bac003fc-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c6e648b5-5638-4b26-beff-c0f2bac003fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.382770 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6e648b5-5638-4b26-beff-c0f2bac003fc-kube-api-access\") pod \"installer-9-crc\" (UID: \"c6e648b5-5638-4b26-beff-c0f2bac003fc\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 08 23:18:39 crc kubenswrapper[4945]: I0108 23:18:39.435331 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 08 23:18:40 crc kubenswrapper[4945]: I0108 23:18:40.845597 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 08 23:18:40 crc kubenswrapper[4945]: I0108 23:18:40.982603 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/465f3db6-3336-4406-9917-a935a783d980-kube-api-access\") pod \"465f3db6-3336-4406-9917-a935a783d980\" (UID: \"465f3db6-3336-4406-9917-a935a783d980\") " Jan 08 23:18:40 crc kubenswrapper[4945]: I0108 23:18:40.982732 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/465f3db6-3336-4406-9917-a935a783d980-kubelet-dir\") pod \"465f3db6-3336-4406-9917-a935a783d980\" (UID: \"465f3db6-3336-4406-9917-a935a783d980\") " Jan 08 23:18:40 crc kubenswrapper[4945]: I0108 23:18:40.983021 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/465f3db6-3336-4406-9917-a935a783d980-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "465f3db6-3336-4406-9917-a935a783d980" (UID: "465f3db6-3336-4406-9917-a935a783d980"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:18:40 crc kubenswrapper[4945]: I0108 23:18:40.986903 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/465f3db6-3336-4406-9917-a935a783d980-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "465f3db6-3336-4406-9917-a935a783d980" (UID: "465f3db6-3336-4406-9917-a935a783d980"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:18:41 crc kubenswrapper[4945]: I0108 23:18:41.084584 4945 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/465f3db6-3336-4406-9917-a935a783d980-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 08 23:18:41 crc kubenswrapper[4945]: I0108 23:18:41.084948 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/465f3db6-3336-4406-9917-a935a783d980-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 08 23:18:41 crc kubenswrapper[4945]: I0108 23:18:41.085833 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"465f3db6-3336-4406-9917-a935a783d980","Type":"ContainerDied","Data":"5e78647244ed24dad0563d42fb75f26fd9c08decbf1d2d629190922de455ec58"} Jan 08 23:18:41 crc kubenswrapper[4945]: I0108 23:18:41.085863 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e78647244ed24dad0563d42fb75f26fd9c08decbf1d2d629190922de455ec58" Jan 08 23:18:41 crc kubenswrapper[4945]: I0108 23:18:41.086058 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 08 23:18:41 crc kubenswrapper[4945]: I0108 23:18:41.396791 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w72wg" Jan 08 23:18:41 crc kubenswrapper[4945]: I0108 23:18:41.591605 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca058c9-9d1b-41b4-b057-c90327ca3628-catalog-content\") pod \"fca058c9-9d1b-41b4-b057-c90327ca3628\" (UID: \"fca058c9-9d1b-41b4-b057-c90327ca3628\") " Jan 08 23:18:41 crc kubenswrapper[4945]: I0108 23:18:41.591686 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68jkz\" (UniqueName: \"kubernetes.io/projected/fca058c9-9d1b-41b4-b057-c90327ca3628-kube-api-access-68jkz\") pod \"fca058c9-9d1b-41b4-b057-c90327ca3628\" (UID: \"fca058c9-9d1b-41b4-b057-c90327ca3628\") " Jan 08 23:18:41 crc kubenswrapper[4945]: I0108 23:18:41.591721 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca058c9-9d1b-41b4-b057-c90327ca3628-utilities\") pod \"fca058c9-9d1b-41b4-b057-c90327ca3628\" (UID: \"fca058c9-9d1b-41b4-b057-c90327ca3628\") " Jan 08 23:18:41 crc kubenswrapper[4945]: I0108 23:18:41.592819 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fca058c9-9d1b-41b4-b057-c90327ca3628-utilities" (OuterVolumeSpecName: "utilities") pod "fca058c9-9d1b-41b4-b057-c90327ca3628" (UID: "fca058c9-9d1b-41b4-b057-c90327ca3628"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:18:41 crc kubenswrapper[4945]: I0108 23:18:41.598946 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca058c9-9d1b-41b4-b057-c90327ca3628-kube-api-access-68jkz" (OuterVolumeSpecName: "kube-api-access-68jkz") pod "fca058c9-9d1b-41b4-b057-c90327ca3628" (UID: "fca058c9-9d1b-41b4-b057-c90327ca3628"). InnerVolumeSpecName "kube-api-access-68jkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:18:41 crc kubenswrapper[4945]: I0108 23:18:41.652191 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fca058c9-9d1b-41b4-b057-c90327ca3628-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fca058c9-9d1b-41b4-b057-c90327ca3628" (UID: "fca058c9-9d1b-41b4-b057-c90327ca3628"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:18:41 crc kubenswrapper[4945]: I0108 23:18:41.693097 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fca058c9-9d1b-41b4-b057-c90327ca3628-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:18:41 crc kubenswrapper[4945]: I0108 23:18:41.693564 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68jkz\" (UniqueName: \"kubernetes.io/projected/fca058c9-9d1b-41b4-b057-c90327ca3628-kube-api-access-68jkz\") on node \"crc\" DevicePath \"\"" Jan 08 23:18:41 crc kubenswrapper[4945]: I0108 23:18:41.693582 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fca058c9-9d1b-41b4-b057-c90327ca3628-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:18:41 crc kubenswrapper[4945]: I0108 23:18:41.934534 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 08 23:18:42 crc kubenswrapper[4945]: I0108 23:18:42.079002 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qn985"] Jan 08 23:18:42 crc kubenswrapper[4945]: I0108 23:18:42.107499 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgr9g" event={"ID":"dce1a9c0-149a-4062-a166-84829a9dc2ec","Type":"ContainerStarted","Data":"b976bbce83ffc93b3c7696d5f3304fd605d75f6a0f91a7bfe410982bb2723d33"} Jan 08 23:18:42 crc kubenswrapper[4945]: I0108 23:18:42.123559 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c6e648b5-5638-4b26-beff-c0f2bac003fc","Type":"ContainerStarted","Data":"36aaa1e2dd387de613b32e804caaa4639b5a6a9de573e164dc7aa790461c72f6"} Jan 08 23:18:42 crc kubenswrapper[4945]: I0108 23:18:42.153947 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w72wg" event={"ID":"fca058c9-9d1b-41b4-b057-c90327ca3628","Type":"ContainerDied","Data":"75a3c03c86f26d6cc3a3b44ecaaf717d508fc50a8e41d97969e438f3cfa9dc75"} Jan 08 23:18:42 crc kubenswrapper[4945]: I0108 23:18:42.154027 4945 scope.go:117] "RemoveContainer" containerID="3a0d54b7058c8a4a27632e0c696393776fd18d7f9b769a7633283bb2f500baac" Jan 08 23:18:42 crc kubenswrapper[4945]: I0108 23:18:42.154266 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w72wg" Jan 08 23:18:42 crc kubenswrapper[4945]: I0108 23:18:42.179147 4945 scope.go:117] "RemoveContainer" containerID="3ca59e66fe0fdf5868567a48108c8893d2ae075af0e750a44b73b4335da4967c" Jan 08 23:18:42 crc kubenswrapper[4945]: I0108 23:18:42.180228 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w72wg"] Jan 08 23:18:42 crc kubenswrapper[4945]: I0108 23:18:42.188460 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w72wg"] Jan 08 23:18:42 crc kubenswrapper[4945]: I0108 23:18:42.214545 4945 scope.go:117] "RemoveContainer" containerID="a4f148a00a33231bf45056a5b4fb17ced10e8b00b599443bbe284eb0ce28ec7b" Jan 08 23:18:43 crc kubenswrapper[4945]: I0108 23:18:43.161516 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mntf4" event={"ID":"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db","Type":"ContainerStarted","Data":"323bcf1733914d0f3e629105c695ae6c6c6587b0e3ae902e5de6214a497aae7e"} Jan 08 23:18:43 crc kubenswrapper[4945]: I0108 23:18:43.163866 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c6e648b5-5638-4b26-beff-c0f2bac003fc","Type":"ContainerStarted","Data":"794beaac38038e4ca435786b31b69ee93644f024423b742e8bfb1c942fe81076"} Jan 08 23:18:43 crc kubenswrapper[4945]: I0108 23:18:43.166624 4945 generic.go:334] "Generic (PLEG): container finished" podID="069f874e-b727-46fb-85e6-b36d4921412f" containerID="b704dababa8e266d32004d95a0f370c4468c81d9f98f0ef15e49160d4fda4f63" exitCode=0 Jan 08 23:18:43 crc kubenswrapper[4945]: I0108 23:18:43.166669 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dlpgj" event={"ID":"069f874e-b727-46fb-85e6-b36d4921412f","Type":"ContainerDied","Data":"b704dababa8e266d32004d95a0f370c4468c81d9f98f0ef15e49160d4fda4f63"} Jan 08 23:18:43 crc kubenswrapper[4945]: I0108 23:18:43.185075 4945 generic.go:334] "Generic (PLEG): container finished" podID="dce1a9c0-149a-4062-a166-84829a9dc2ec" containerID="b976bbce83ffc93b3c7696d5f3304fd605d75f6a0f91a7bfe410982bb2723d33" exitCode=0 Jan 08 23:18:43 crc kubenswrapper[4945]: I0108 23:18:43.185127 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgr9g" event={"ID":"dce1a9c0-149a-4062-a166-84829a9dc2ec","Type":"ContainerDied","Data":"b976bbce83ffc93b3c7696d5f3304fd605d75f6a0f91a7bfe410982bb2723d33"} Jan 08 23:18:43 crc kubenswrapper[4945]: I0108 23:18:43.215058 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.215034032 podStartE2EDuration="4.215034032s" podCreationTimestamp="2026-01-08 23:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:18:43.20039488 +0000 UTC m=+193.511553836" watchObservedRunningTime="2026-01-08 23:18:43.215034032 +0000 UTC m=+193.526192988" Jan 08 23:18:43 crc kubenswrapper[4945]: I0108 23:18:43.579098 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:18:43 crc kubenswrapper[4945]: I0108 
Jan 08 23:18:44 crc kubenswrapper[4945]: I0108 23:18:44.014957 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca058c9-9d1b-41b4-b057-c90327ca3628" path="/var/lib/kubelet/pods/fca058c9-9d1b-41b4-b057-c90327ca3628/volumes"
Jan 08 23:18:44 crc kubenswrapper[4945]: I0108 23:18:44.191891 4945 generic.go:334] "Generic (PLEG): container finished" podID="5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db" containerID="323bcf1733914d0f3e629105c695ae6c6c6587b0e3ae902e5de6214a497aae7e" exitCode=0
Jan 08 23:18:44 crc kubenswrapper[4945]: I0108 23:18:44.191959 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mntf4" event={"ID":"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db","Type":"ContainerDied","Data":"323bcf1733914d0f3e629105c695ae6c6c6587b0e3ae902e5de6214a497aae7e"}
Jan 08 23:18:44 crc kubenswrapper[4945]: I0108 23:18:44.200669 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dlpgj" event={"ID":"069f874e-b727-46fb-85e6-b36d4921412f","Type":"ContainerStarted","Data":"11ef18575bfffafe5af618736d257da0ed252682fbaae3f6d47c5af90446afb7"}
Jan 08 23:18:44 crc kubenswrapper[4945]: I0108 23:18:44.206860 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgr9g" event={"ID":"dce1a9c0-149a-4062-a166-84829a9dc2ec","Type":"ContainerStarted","Data":"aca650a5f3e1be3992101c04038cf920fd00fa8478c5cc4a3d9f693fff506980"}
Jan 08 23:18:44 crc kubenswrapper[4945]: I0108 23:18:44.253724 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vgr9g" podStartSLOduration=3.184750608 podStartE2EDuration="59.253698961s" podCreationTimestamp="2026-01-08 23:17:45 +0000 UTC" firstStartedPulling="2026-01-08 23:17:47.593176322 +0000 UTC m=+137.904335268" lastFinishedPulling="2026-01-08 23:18:43.662124675 +0000 UTC m=+193.973283621" observedRunningTime="2026-01-08 23:18:44.250117545 +0000 UTC m=+194.561276501" watchObservedRunningTime="2026-01-08 23:18:44.253698961 +0000 UTC m=+194.564857907"
Jan 08 23:18:44 crc kubenswrapper[4945]: I0108 23:18:44.255967 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dlpgj" podStartSLOduration=4.446266618 podStartE2EDuration="59.255947511s" podCreationTimestamp="2026-01-08 23:17:45 +0000 UTC" firstStartedPulling="2026-01-08 23:17:48.791866269 +0000 UTC m=+139.103025215" lastFinishedPulling="2026-01-08 23:18:43.601547162 +0000 UTC m=+193.912706108" observedRunningTime="2026-01-08 23:18:44.235171034 +0000 UTC m=+194.546329990" watchObservedRunningTime="2026-01-08 23:18:44.255947511 +0000 UTC m=+194.567106457"
Jan 08 23:18:45 crc kubenswrapper[4945]: I0108 23:18:45.214610 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mntf4" event={"ID":"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db","Type":"ContainerStarted","Data":"aa65e06086a242e63add3fa86260d111941a4719fc8543e271d547c6135f0501"}
Jan 08 23:18:45 crc kubenswrapper[4945]: I0108 23:18:45.572852 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vgr9g"
Jan 08 23:18:45 crc kubenswrapper[4945]: I0108 23:18:45.572905 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vgr9g"
Jan 08 23:18:45 crc kubenswrapper[4945]: I0108 23:18:45.650971 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vgr9g"
Jan 08 23:18:45 crc kubenswrapper[4945]: I0108 23:18:45.665382 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mntf4" podStartSLOduration=4.604260798 podStartE2EDuration="57.665358615s" podCreationTimestamp="2026-01-08 23:17:48 +0000 UTC" firstStartedPulling="2026-01-08 23:17:51.568369605 +0000 UTC m=+141.879528551" lastFinishedPulling="2026-01-08 23:18:44.629467402 +0000 UTC m=+194.940626368" observedRunningTime="2026-01-08 23:18:45.232355021 +0000 UTC m=+195.543513957" watchObservedRunningTime="2026-01-08 23:18:45.665358615 +0000 UTC m=+195.976517581"
Jan 08 23:18:45 crc kubenswrapper[4945]: I0108 23:18:45.984909 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dlpgj"
Jan 08 23:18:45 crc kubenswrapper[4945]: I0108 23:18:45.985006 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dlpgj"
Jan 08 23:18:46 crc kubenswrapper[4945]: I0108 23:18:46.031575 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dlpgj"
Jan 08 23:18:49 crc kubenswrapper[4945]: I0108 23:18:49.049253 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mntf4"
Jan 08 23:18:49 crc kubenswrapper[4945]: I0108 23:18:49.049713 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mntf4"
Jan 08 23:18:50 crc kubenswrapper[4945]: I0108 23:18:50.083587 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mntf4" podUID="5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db" containerName="registry-server" probeResult="failure" output=<
Jan 08 23:18:50 crc kubenswrapper[4945]: timeout: failed to connect service ":50051" within 1s
Jan 08 23:18:50 crc kubenswrapper[4945]: >
Jan 08 23:18:55 crc kubenswrapper[4945]: I0108 23:18:55.620363 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vgr9g"
Jan 08 23:18:56 crc kubenswrapper[4945]: I0108 23:18:56.028514 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dlpgj"
Jan 08 23:18:56 crc kubenswrapper[4945]: I0108 23:18:56.908207 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dlpgj"]
Jan 08 23:18:56 crc kubenswrapper[4945]: I0108 23:18:56.908916 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dlpgj" podUID="069f874e-b727-46fb-85e6-b36d4921412f" containerName="registry-server" containerID="cri-o://11ef18575bfffafe5af618736d257da0ed252682fbaae3f6d47c5af90446afb7" gracePeriod=2
Jan 08 23:18:57 crc kubenswrapper[4945]: I0108 23:18:57.273524 4945 generic.go:334] "Generic (PLEG): container finished" podID="069f874e-b727-46fb-85e6-b36d4921412f" containerID="11ef18575bfffafe5af618736d257da0ed252682fbaae3f6d47c5af90446afb7" exitCode=0
containerID="11ef18575bfffafe5af618736d257da0ed252682fbaae3f6d47c5af90446afb7" exitCode=0 Jan 08 23:18:57 crc kubenswrapper[4945]: I0108 23:18:57.273568 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dlpgj" event={"ID":"069f874e-b727-46fb-85e6-b36d4921412f","Type":"ContainerDied","Data":"11ef18575bfffafe5af618736d257da0ed252682fbaae3f6d47c5af90446afb7"} Jan 08 23:18:57 crc kubenswrapper[4945]: I0108 23:18:57.760074 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dlpgj" Jan 08 23:18:57 crc kubenswrapper[4945]: I0108 23:18:57.888429 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069f874e-b727-46fb-85e6-b36d4921412f-utilities\") pod \"069f874e-b727-46fb-85e6-b36d4921412f\" (UID: \"069f874e-b727-46fb-85e6-b36d4921412f\") " Jan 08 23:18:57 crc kubenswrapper[4945]: I0108 23:18:57.888486 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n95dc\" (UniqueName: \"kubernetes.io/projected/069f874e-b727-46fb-85e6-b36d4921412f-kube-api-access-n95dc\") pod \"069f874e-b727-46fb-85e6-b36d4921412f\" (UID: \"069f874e-b727-46fb-85e6-b36d4921412f\") " Jan 08 23:18:57 crc kubenswrapper[4945]: I0108 23:18:57.888526 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069f874e-b727-46fb-85e6-b36d4921412f-catalog-content\") pod \"069f874e-b727-46fb-85e6-b36d4921412f\" (UID: \"069f874e-b727-46fb-85e6-b36d4921412f\") " Jan 08 23:18:57 crc kubenswrapper[4945]: I0108 23:18:57.889327 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/069f874e-b727-46fb-85e6-b36d4921412f-utilities" (OuterVolumeSpecName: "utilities") pod "069f874e-b727-46fb-85e6-b36d4921412f" (UID: "069f874e-b727-46fb-85e6-b36d4921412f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:18:57 crc kubenswrapper[4945]: I0108 23:18:57.893421 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/069f874e-b727-46fb-85e6-b36d4921412f-kube-api-access-n95dc" (OuterVolumeSpecName: "kube-api-access-n95dc") pod "069f874e-b727-46fb-85e6-b36d4921412f" (UID: "069f874e-b727-46fb-85e6-b36d4921412f"). InnerVolumeSpecName "kube-api-access-n95dc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:18:57 crc kubenswrapper[4945]: I0108 23:18:57.933327 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/069f874e-b727-46fb-85e6-b36d4921412f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "069f874e-b727-46fb-85e6-b36d4921412f" (UID: "069f874e-b727-46fb-85e6-b36d4921412f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:18:57 crc kubenswrapper[4945]: I0108 23:18:57.990032 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/069f874e-b727-46fb-85e6-b36d4921412f-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:18:57 crc kubenswrapper[4945]: I0108 23:18:57.990061 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n95dc\" (UniqueName: \"kubernetes.io/projected/069f874e-b727-46fb-85e6-b36d4921412f-kube-api-access-n95dc\") on node \"crc\" DevicePath \"\"" Jan 08 23:18:57 crc kubenswrapper[4945]: I0108 23:18:57.990071 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/069f874e-b727-46fb-85e6-b36d4921412f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:18:58 crc kubenswrapper[4945]: I0108 23:18:58.280079 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dlpgj" event={"ID":"069f874e-b727-46fb-85e6-b36d4921412f","Type":"ContainerDied","Data":"eca186ba8a1959c55d70110931ca514ef6bf349b835da23fcb0113af3452c58c"} Jan 08 23:18:58 crc kubenswrapper[4945]: I0108 23:18:58.280133 4945 scope.go:117] "RemoveContainer" containerID="11ef18575bfffafe5af618736d257da0ed252682fbaae3f6d47c5af90446afb7" Jan 08 23:18:58 crc kubenswrapper[4945]: I0108 23:18:58.280238 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dlpgj" Jan 08 23:18:58 crc kubenswrapper[4945]: I0108 23:18:58.297828 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dlpgj"] Jan 08 23:18:58 crc kubenswrapper[4945]: I0108 23:18:58.300927 4945 scope.go:117] "RemoveContainer" containerID="b704dababa8e266d32004d95a0f370c4468c81d9f98f0ef15e49160d4fda4f63" Jan 08 23:18:58 crc kubenswrapper[4945]: I0108 23:18:58.301305 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dlpgj"] Jan 08 23:18:58 crc kubenswrapper[4945]: I0108 23:18:58.313687 4945 scope.go:117] "RemoveContainer" containerID="465e0ef1d5df2aa84f96857b54d33082ad8d2cec5f4a260050d77761d4a8f49d" Jan 08 23:18:59 crc kubenswrapper[4945]: I0108 23:18:59.091621 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mntf4" Jan 08 23:18:59 crc kubenswrapper[4945]: I0108 23:18:59.136302 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mntf4" Jan 08 23:19:00 crc kubenswrapper[4945]: I0108 23:19:00.025321 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="069f874e-b727-46fb-85e6-b36d4921412f" path="/var/lib/kubelet/pods/069f874e-b727-46fb-85e6-b36d4921412f/volumes" Jan 08 23:19:00 crc kubenswrapper[4945]: I0108 23:19:00.306408 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mntf4"] Jan 08 23:19:00 crc kubenswrapper[4945]: I0108 23:19:00.306631 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mntf4" podUID="5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db" containerName="registry-server" containerID="cri-o://aa65e06086a242e63add3fa86260d111941a4719fc8543e271d547c6135f0501" gracePeriod=2 Jan 08 23:19:00 crc kubenswrapper[4945]: I0108 23:19:00.661355 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mntf4" Jan 08 23:19:00 crc kubenswrapper[4945]: I0108 23:19:00.826198 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-utilities\") pod \"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db\" (UID: \"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db\") " Jan 08 23:19:00 crc kubenswrapper[4945]: I0108 23:19:00.826338 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-catalog-content\") pod \"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db\" (UID: \"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db\") " Jan 08 23:19:00 crc kubenswrapper[4945]: I0108 23:19:00.826428 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt7cv\" (UniqueName: \"kubernetes.io/projected/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-kube-api-access-zt7cv\") pod \"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db\" (UID: \"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db\") " Jan 08 23:19:00 crc kubenswrapper[4945]: I0108 23:19:00.827186 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-utilities" (OuterVolumeSpecName: "utilities") pod "5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db" (UID: "5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:19:00 crc kubenswrapper[4945]: I0108 23:19:00.831334 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-kube-api-access-zt7cv" (OuterVolumeSpecName: "kube-api-access-zt7cv") pod "5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db" (UID: "5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db"). InnerVolumeSpecName "kube-api-access-zt7cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:19:00 crc kubenswrapper[4945]: I0108 23:19:00.927762 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:19:00 crc kubenswrapper[4945]: I0108 23:19:00.927889 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt7cv\" (UniqueName: \"kubernetes.io/projected/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-kube-api-access-zt7cv\") on node \"crc\" DevicePath \"\"" Jan 08 23:19:00 crc kubenswrapper[4945]: I0108 23:19:00.940515 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db" (UID: "5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:19:01 crc kubenswrapper[4945]: I0108 23:19:01.028753 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:19:01 crc kubenswrapper[4945]: I0108 23:19:01.300210 4945 generic.go:334] "Generic (PLEG): container finished" podID="5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db" containerID="aa65e06086a242e63add3fa86260d111941a4719fc8543e271d547c6135f0501" exitCode=0 Jan 08 23:19:01 crc kubenswrapper[4945]: I0108 23:19:01.300301 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mntf4" event={"ID":"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db","Type":"ContainerDied","Data":"aa65e06086a242e63add3fa86260d111941a4719fc8543e271d547c6135f0501"} Jan 08 23:19:01 crc kubenswrapper[4945]: I0108 23:19:01.300346 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mntf4" Jan 08 23:19:01 crc kubenswrapper[4945]: I0108 23:19:01.300626 4945 scope.go:117] "RemoveContainer" containerID="aa65e06086a242e63add3fa86260d111941a4719fc8543e271d547c6135f0501" Jan 08 23:19:01 crc kubenswrapper[4945]: I0108 23:19:01.300610 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mntf4" event={"ID":"5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db","Type":"ContainerDied","Data":"147d1d3dc8267cd93d5e7a4348a77e9489e52472cdb38bdce255d7621d3cb1a5"} Jan 08 23:19:01 crc kubenswrapper[4945]: I0108 23:19:01.319541 4945 scope.go:117] "RemoveContainer" containerID="323bcf1733914d0f3e629105c695ae6c6c6587b0e3ae902e5de6214a497aae7e" Jan 08 23:19:01 crc kubenswrapper[4945]: I0108 23:19:01.340410 4945 scope.go:117] "RemoveContainer" containerID="021cdba502c06acc3957dd4acc96675d7d0f379331e3938d56b775f28db9ff3a" Jan 08 23:19:01 crc kubenswrapper[4945]: I0108 23:19:01.340605 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mntf4"] Jan 08 23:19:01 crc kubenswrapper[4945]: I0108 23:19:01.343810 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mntf4"] Jan 08 23:19:01 crc kubenswrapper[4945]: I0108 23:19:01.368593 4945 scope.go:117] "RemoveContainer" containerID="aa65e06086a242e63add3fa86260d111941a4719fc8543e271d547c6135f0501" Jan 08 23:19:01 crc kubenswrapper[4945]: E0108 23:19:01.369101 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa65e06086a242e63add3fa86260d111941a4719fc8543e271d547c6135f0501\": container with ID starting with aa65e06086a242e63add3fa86260d111941a4719fc8543e271d547c6135f0501 not found: ID does not exist" containerID="aa65e06086a242e63add3fa86260d111941a4719fc8543e271d547c6135f0501" Jan 08 23:19:01 crc kubenswrapper[4945]: I0108 23:19:01.369236 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa65e06086a242e63add3fa86260d111941a4719fc8543e271d547c6135f0501"} err="failed to get container status \"aa65e06086a242e63add3fa86260d111941a4719fc8543e271d547c6135f0501\": rpc error: code = NotFound desc = could not find container \"aa65e06086a242e63add3fa86260d111941a4719fc8543e271d547c6135f0501\": container with ID starting with aa65e06086a242e63add3fa86260d111941a4719fc8543e271d547c6135f0501 not found: ID does not exist" Jan 08 23:19:01 crc 
kubenswrapper[4945]: I0108 23:19:01.369364 4945 scope.go:117] "RemoveContainer" containerID="323bcf1733914d0f3e629105c695ae6c6c6587b0e3ae902e5de6214a497aae7e" Jan 08 23:19:01 crc kubenswrapper[4945]: E0108 23:19:01.369758 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"323bcf1733914d0f3e629105c695ae6c6c6587b0e3ae902e5de6214a497aae7e\": container with ID starting with 323bcf1733914d0f3e629105c695ae6c6c6587b0e3ae902e5de6214a497aae7e not found: ID does not exist" containerID="323bcf1733914d0f3e629105c695ae6c6c6587b0e3ae902e5de6214a497aae7e" Jan 08 23:19:01 crc kubenswrapper[4945]: I0108 23:19:01.369835 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"323bcf1733914d0f3e629105c695ae6c6c6587b0e3ae902e5de6214a497aae7e"} err="failed to get container status \"323bcf1733914d0f3e629105c695ae6c6c6587b0e3ae902e5de6214a497aae7e\": rpc error: code = NotFound desc = could not find container \"323bcf1733914d0f3e629105c695ae6c6c6587b0e3ae902e5de6214a497aae7e\": container with ID starting with 323bcf1733914d0f3e629105c695ae6c6c6587b0e3ae902e5de6214a497aae7e not found: ID does not exist" Jan 08 23:19:01 crc kubenswrapper[4945]: I0108 23:19:01.369866 4945 scope.go:117] "RemoveContainer" containerID="021cdba502c06acc3957dd4acc96675d7d0f379331e3938d56b775f28db9ff3a" Jan 08 23:19:01 crc kubenswrapper[4945]: E0108 23:19:01.370150 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"021cdba502c06acc3957dd4acc96675d7d0f379331e3938d56b775f28db9ff3a\": container with ID starting with 021cdba502c06acc3957dd4acc96675d7d0f379331e3938d56b775f28db9ff3a not found: ID does not exist" containerID="021cdba502c06acc3957dd4acc96675d7d0f379331e3938d56b775f28db9ff3a" Jan 08 23:19:01 crc kubenswrapper[4945]: I0108 23:19:01.370257 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"021cdba502c06acc3957dd4acc96675d7d0f379331e3938d56b775f28db9ff3a"} err="failed to get container status \"021cdba502c06acc3957dd4acc96675d7d0f379331e3938d56b775f28db9ff3a\": rpc error: code = NotFound desc = could not find container \"021cdba502c06acc3957dd4acc96675d7d0f379331e3938d56b775f28db9ff3a\": container with ID starting with 021cdba502c06acc3957dd4acc96675d7d0f379331e3938d56b775f28db9ff3a not found: ID does not exist" Jan 08 23:19:02 crc kubenswrapper[4945]: I0108 23:19:02.009711 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db" path="/var/lib/kubelet/pods/5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db/volumes" Jan 08 23:19:07 crc kubenswrapper[4945]: I0108 23:19:07.132462 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-qn985" podUID="0677ae19-e425-485b-9206-98c9ad11aea8" containerName="oauth-openshift" containerID="cri-o://353c71109b4b926c98cae20037f8020b041efd71fa144e2f04a124c045b6dbdc" gracePeriod=15 Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.066618 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.109265 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-69fb88d4f9-t9shb"] Jan 08 23:19:08 crc kubenswrapper[4945]: E0108 23:19:08.109670 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069f874e-b727-46fb-85e6-b36d4921412f" containerName="extract-content" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.109696 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="069f874e-b727-46fb-85e6-b36d4921412f" containerName="extract-content" Jan 08 23:19:08 crc kubenswrapper[4945]: E0108 23:19:08.109732 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db" containerName="extract-content" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.109739 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db" containerName="extract-content" Jan 08 23:19:08 crc kubenswrapper[4945]: E0108 23:19:08.109747 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0677ae19-e425-485b-9206-98c9ad11aea8" containerName="oauth-openshift" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.109753 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="0677ae19-e425-485b-9206-98c9ad11aea8" containerName="oauth-openshift" Jan 08 23:19:08 crc kubenswrapper[4945]: E0108 23:19:08.109762 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca058c9-9d1b-41b4-b057-c90327ca3628" containerName="extract-content" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.109768 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca058c9-9d1b-41b4-b057-c90327ca3628" containerName="extract-content" Jan 08 23:19:08 crc kubenswrapper[4945]: E0108 23:19:08.109781 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db" containerName="extract-utilities" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.109788 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db" containerName="extract-utilities" Jan 08 23:19:08 crc kubenswrapper[4945]: E0108 23:19:08.109797 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069f874e-b727-46fb-85e6-b36d4921412f" containerName="extract-utilities" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.109805 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="069f874e-b727-46fb-85e6-b36d4921412f" containerName="extract-utilities" Jan 08 23:19:08 crc kubenswrapper[4945]: E0108 23:19:08.109812 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca058c9-9d1b-41b4-b057-c90327ca3628" containerName="registry-server" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.109818 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca058c9-9d1b-41b4-b057-c90327ca3628" containerName="registry-server" Jan 08 23:19:08 crc kubenswrapper[4945]: E0108 23:19:08.109829 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db" containerName="registry-server" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.109835 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db" containerName="registry-server" Jan 08 23:19:08 crc kubenswrapper[4945]: E0108 23:19:08.109849 4945 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="069f874e-b727-46fb-85e6-b36d4921412f" containerName="registry-server" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.109855 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="069f874e-b727-46fb-85e6-b36d4921412f" containerName="registry-server" Jan 08 23:19:08 crc kubenswrapper[4945]: E0108 23:19:08.109870 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465f3db6-3336-4406-9917-a935a783d980" containerName="pruner" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.109876 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="465f3db6-3336-4406-9917-a935a783d980" containerName="pruner" Jan 08 23:19:08 crc kubenswrapper[4945]: E0108 23:19:08.109894 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca058c9-9d1b-41b4-b057-c90327ca3628" containerName="extract-utilities" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.109900 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca058c9-9d1b-41b4-b057-c90327ca3628" containerName="extract-utilities" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.110076 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="0677ae19-e425-485b-9206-98c9ad11aea8" containerName="oauth-openshift" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.110093 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="069f874e-b727-46fb-85e6-b36d4921412f" containerName="registry-server" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.110103 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="465f3db6-3336-4406-9917-a935a783d980" containerName="pruner" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.110116 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ab77c0a-0b0d-4afc-8949-acf0d5c4b1db" containerName="registry-server" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.110129 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca058c9-9d1b-41b4-b057-c90327ca3628" containerName="registry-server" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.110637 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.116471 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69fb88d4f9-t9shb"] Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.214591 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-audit-policies\") pod \"0677ae19-e425-485b-9206-98c9ad11aea8\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.214680 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-trusted-ca-bundle\") pod \"0677ae19-e425-485b-9206-98c9ad11aea8\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.214726 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-router-certs\") pod \"0677ae19-e425-485b-9206-98c9ad11aea8\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.214759 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-session\") pod \"0677ae19-e425-485b-9206-98c9ad11aea8\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.214839 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btzbj\" (UniqueName: \"kubernetes.io/projected/0677ae19-e425-485b-9206-98c9ad11aea8-kube-api-access-btzbj\") pod \"0677ae19-e425-485b-9206-98c9ad11aea8\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.214891 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-login\") pod \"0677ae19-e425-485b-9206-98c9ad11aea8\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.214919 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-serving-cert\") pod \"0677ae19-e425-485b-9206-98c9ad11aea8\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.214958 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-ocp-branding-template\") pod \"0677ae19-e425-485b-9206-98c9ad11aea8\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215001 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-cliconfig\") pod \"0677ae19-e425-485b-9206-98c9ad11aea8\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215055 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-service-ca\") pod \"0677ae19-e425-485b-9206-98c9ad11aea8\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215102 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-error\") pod \"0677ae19-e425-485b-9206-98c9ad11aea8\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215132 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0677ae19-e425-485b-9206-98c9ad11aea8-audit-dir\") pod \"0677ae19-e425-485b-9206-98c9ad11aea8\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215165 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-provider-selection\") pod \"0677ae19-e425-485b-9206-98c9ad11aea8\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215228 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-idp-0-file-data\") pod \"0677ae19-e425-485b-9206-98c9ad11aea8\" (UID: \"0677ae19-e425-485b-9206-98c9ad11aea8\") " Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215417 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-session\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215472 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nccp\" (UniqueName: \"kubernetes.io/projected/0c0aba24-d904-4a21-a02d-fecefddcc27b-kube-api-access-9nccp\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215550 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215582 4945 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215623 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215658 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c0aba24-d904-4a21-a02d-fecefddcc27b-audit-policies\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215693 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-router-certs\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215722 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215761 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-service-ca\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215797 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-user-template-error\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215839 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c0aba24-d904-4a21-a02d-fecefddcc27b-audit-dir\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " 
pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215872 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215915 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-user-template-login\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.215947 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.216488 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0677ae19-e425-485b-9206-98c9ad11aea8" (UID: "0677ae19-e425-485b-9206-98c9ad11aea8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.216501 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0677ae19-e425-485b-9206-98c9ad11aea8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0677ae19-e425-485b-9206-98c9ad11aea8" (UID: "0677ae19-e425-485b-9206-98c9ad11aea8"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.216900 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0677ae19-e425-485b-9206-98c9ad11aea8" (UID: "0677ae19-e425-485b-9206-98c9ad11aea8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.217169 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0677ae19-e425-485b-9206-98c9ad11aea8" (UID: "0677ae19-e425-485b-9206-98c9ad11aea8"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.217483 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0677ae19-e425-485b-9206-98c9ad11aea8" (UID: "0677ae19-e425-485b-9206-98c9ad11aea8"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.221733 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0677ae19-e425-485b-9206-98c9ad11aea8-kube-api-access-btzbj" (OuterVolumeSpecName: "kube-api-access-btzbj") pod "0677ae19-e425-485b-9206-98c9ad11aea8" (UID: "0677ae19-e425-485b-9206-98c9ad11aea8"). InnerVolumeSpecName "kube-api-access-btzbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.224740 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0677ae19-e425-485b-9206-98c9ad11aea8" (UID: "0677ae19-e425-485b-9206-98c9ad11aea8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.224978 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0677ae19-e425-485b-9206-98c9ad11aea8" (UID: "0677ae19-e425-485b-9206-98c9ad11aea8"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.225625 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0677ae19-e425-485b-9206-98c9ad11aea8" (UID: "0677ae19-e425-485b-9206-98c9ad11aea8"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.228323 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0677ae19-e425-485b-9206-98c9ad11aea8" (UID: "0677ae19-e425-485b-9206-98c9ad11aea8"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.228611 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0677ae19-e425-485b-9206-98c9ad11aea8" (UID: "0677ae19-e425-485b-9206-98c9ad11aea8"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.228827 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0677ae19-e425-485b-9206-98c9ad11aea8" (UID: "0677ae19-e425-485b-9206-98c9ad11aea8"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.229035 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0677ae19-e425-485b-9206-98c9ad11aea8" (UID: "0677ae19-e425-485b-9206-98c9ad11aea8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.229635 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0677ae19-e425-485b-9206-98c9ad11aea8" (UID: "0677ae19-e425-485b-9206-98c9ad11aea8"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.316587 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-user-template-login\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.316637 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.316680 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-session\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.316712 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nccp\" (UniqueName: \"kubernetes.io/projected/0c0aba24-d904-4a21-a02d-fecefddcc27b-kube-api-access-9nccp\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.316741 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.316763 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.316796 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.316820 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c0aba24-d904-4a21-a02d-fecefddcc27b-audit-policies\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.316843 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-router-certs\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.316863 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.316889 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-service-ca\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.316912 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-user-template-error\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.316945 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/0c0aba24-d904-4a21-a02d-fecefddcc27b-audit-dir\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.316968 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.317038 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.317054 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.317068 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.317081 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btzbj\" (UniqueName: \"kubernetes.io/projected/0677ae19-e425-485b-9206-98c9ad11aea8-kube-api-access-btzbj\") on node \"crc\" DevicePath \"\"" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.317091 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.317104 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.317116 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.317128 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.317138 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.317151 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.317161 4945 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0677ae19-e425-485b-9206-98c9ad11aea8-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.317174 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.317184 4945 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0677ae19-e425-485b-9206-98c9ad11aea8-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.317196 4945 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0677ae19-e425-485b-9206-98c9ad11aea8-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.317846 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.317908 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c0aba24-d904-4a21-a02d-fecefddcc27b-audit-dir\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.318057 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-service-ca\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.318464 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.318481 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c0aba24-d904-4a21-a02d-fecefddcc27b-audit-policies\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.321484 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.321606 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-router-certs\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.322085 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-session\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.322765 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.322871 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.323792 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-user-template-error\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.325520 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.328296 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c0aba24-d904-4a21-a02d-fecefddcc27b-v4-0-config-user-template-login\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.333812 4945 generic.go:334] "Generic (PLEG): container finished" podID="0677ae19-e425-485b-9206-98c9ad11aea8" 
containerID="353c71109b4b926c98cae20037f8020b041efd71fa144e2f04a124c045b6dbdc" exitCode=0 Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.333861 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qn985" event={"ID":"0677ae19-e425-485b-9206-98c9ad11aea8","Type":"ContainerDied","Data":"353c71109b4b926c98cae20037f8020b041efd71fa144e2f04a124c045b6dbdc"} Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.333891 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qn985" event={"ID":"0677ae19-e425-485b-9206-98c9ad11aea8","Type":"ContainerDied","Data":"c4656416702bce2312408e8e3b83d826a46f6fdf86fcb5cc906b0a62a157ca50"} Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.333899 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qn985" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.333909 4945 scope.go:117] "RemoveContainer" containerID="353c71109b4b926c98cae20037f8020b041efd71fa144e2f04a124c045b6dbdc" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.334522 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nccp\" (UniqueName: \"kubernetes.io/projected/0c0aba24-d904-4a21-a02d-fecefddcc27b-kube-api-access-9nccp\") pod \"oauth-openshift-69fb88d4f9-t9shb\" (UID: \"0c0aba24-d904-4a21-a02d-fecefddcc27b\") " pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.371724 4945 scope.go:117] "RemoveContainer" containerID="353c71109b4b926c98cae20037f8020b041efd71fa144e2f04a124c045b6dbdc" Jan 08 23:19:08 crc kubenswrapper[4945]: E0108 23:19:08.372274 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"353c71109b4b926c98cae20037f8020b041efd71fa144e2f04a124c045b6dbdc\": container with ID starting with 353c71109b4b926c98cae20037f8020b041efd71fa144e2f04a124c045b6dbdc not found: ID does not exist" containerID="353c71109b4b926c98cae20037f8020b041efd71fa144e2f04a124c045b6dbdc" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.372322 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353c71109b4b926c98cae20037f8020b041efd71fa144e2f04a124c045b6dbdc"} err="failed to get container status \"353c71109b4b926c98cae20037f8020b041efd71fa144e2f04a124c045b6dbdc\": rpc error: code = NotFound desc = could not find container \"353c71109b4b926c98cae20037f8020b041efd71fa144e2f04a124c045b6dbdc\": container with ID starting with 353c71109b4b926c98cae20037f8020b041efd71fa144e2f04a124c045b6dbdc not found: ID does not exist" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.383540 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qn985"] Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.387178 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qn985"] Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.429151 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:08 crc kubenswrapper[4945]: I0108 23:19:08.819339 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69fb88d4f9-t9shb"] Jan 08 23:19:08 crc kubenswrapper[4945]: W0108 23:19:08.828096 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c0aba24_d904_4a21_a02d_fecefddcc27b.slice/crio-a3e29570ad12265c4adc6090f690cadf4783d0c8a4a74de5ea1eb5823c53336c WatchSource:0}: Error finding container a3e29570ad12265c4adc6090f690cadf4783d0c8a4a74de5ea1eb5823c53336c: Status 404 returned error can't find the container with id a3e29570ad12265c4adc6090f690cadf4783d0c8a4a74de5ea1eb5823c53336c Jan 08 23:19:09 crc kubenswrapper[4945]: I0108 23:19:09.340641 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" event={"ID":"0c0aba24-d904-4a21-a02d-fecefddcc27b","Type":"ContainerStarted","Data":"1ab86bee40827498b89b37a4c55b963b11a0d7ffd5c15f71ff4842674179f838"} Jan 08 23:19:09 crc kubenswrapper[4945]: I0108 23:19:09.340692 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" event={"ID":"0c0aba24-d904-4a21-a02d-fecefddcc27b","Type":"ContainerStarted","Data":"a3e29570ad12265c4adc6090f690cadf4783d0c8a4a74de5ea1eb5823c53336c"} Jan 08 23:19:09 crc kubenswrapper[4945]: I0108 23:19:09.340917 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:09 crc kubenswrapper[4945]: I0108 23:19:09.358161 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" podStartSLOduration=27.358140434 podStartE2EDuration="27.358140434s" podCreationTimestamp="2026-01-08 23:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:19:09.357477648 +0000 UTC m=+219.668636604" watchObservedRunningTime="2026-01-08 23:19:09.358140434 +0000 UTC m=+219.669299400" Jan 08 23:19:09 crc kubenswrapper[4945]: I0108 23:19:09.667953 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-69fb88d4f9-t9shb" Jan 08 23:19:10 crc kubenswrapper[4945]: I0108 23:19:10.005656 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0677ae19-e425-485b-9206-98c9ad11aea8" path="/var/lib/kubelet/pods/0677ae19-e425-485b-9206-98c9ad11aea8/volumes" Jan 08 23:19:13 crc kubenswrapper[4945]: I0108 23:19:13.579101 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:19:13 crc kubenswrapper[4945]: I0108 23:19:13.579457 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:19:13 crc kubenswrapper[4945]: I0108 23:19:13.579511 4945 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:19:13 crc kubenswrapper[4945]: I0108 23:19:13.580189 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 08 23:19:13 crc kubenswrapper[4945]: I0108 23:19:13.580253 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc" gracePeriod=600 Jan 08 23:19:14 crc kubenswrapper[4945]: I0108 23:19:14.375584 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc" exitCode=0 Jan 08 23:19:14 crc kubenswrapper[4945]: I0108 23:19:14.375679 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc"} Jan 08 23:19:14 crc kubenswrapper[4945]: I0108 23:19:14.376639 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"03b243e540c86d992ce6bfde8b79c5371746158349f3c2a49e7cec342fe0ef67"} Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.822749 4945 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.823752 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.823830 4945 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.824347 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f" gracePeriod=15 Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.824363 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6" gracePeriod=15 Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.824425 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f" gracePeriod=15 Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.824483 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248" gracePeriod=15 Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.824505 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85" gracePeriod=15 Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.828456 4945 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 08 23:19:19 crc kubenswrapper[4945]: E0108 23:19:19.828747 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.828770 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 08 23:19:19 crc kubenswrapper[4945]: E0108 23:19:19.828788 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.828796 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 08 23:19:19 crc kubenswrapper[4945]: E0108 23:19:19.828808 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.828816 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 08 23:19:19 crc 
kubenswrapper[4945]: E0108 23:19:19.828827 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.828835 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 08 23:19:19 crc kubenswrapper[4945]: E0108 23:19:19.828850 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.828857 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 08 23:19:19 crc kubenswrapper[4945]: E0108 23:19:19.828866 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.828874 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 08 23:19:19 crc kubenswrapper[4945]: E0108 23:19:19.828884 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.828890 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.829303 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.829322 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.829331 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.829345 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.829353 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.829361 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 08 23:19:19 crc kubenswrapper[4945]: I0108 23:19:19.880148 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.003209 4945 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.003546 4945 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.003683 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.003759 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.003822 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.003884 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.003914 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.003965 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.003988 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.004023 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.105529 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.105626 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.105656 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.105693 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.105706 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.105779 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.105786 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.105793 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.105815 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.105817 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.105881 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.105918 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.106020 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.106066 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.106149 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.106152 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.174750 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 08 23:19:20 crc kubenswrapper[4945]: E0108 23:19:20.197221 4945 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.74:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1888e4be68ce8592 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-08 23:19:20.196203922 +0000 UTC m=+230.507362868,LastTimestamp:2026-01-08 23:19:20.196203922 +0000 UTC m=+230.507362868,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.417793 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.419505 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.420410 4945 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6" exitCode=0
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.420451 4945 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85" exitCode=0
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.420462 4945 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f" exitCode=0
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.420469 4945 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248" exitCode=2
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.420629 4945 scope.go:117] "RemoveContainer" containerID="9fffa017e028d7b86c56c3e898cda97b5b1dc9c62e97ec17a996e3b1d42ed718"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.423023 4945 generic.go:334] "Generic (PLEG): container finished" podID="c6e648b5-5638-4b26-beff-c0f2bac003fc" containerID="794beaac38038e4ca435786b31b69ee93644f024423b742e8bfb1c942fe81076" exitCode=0
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.423059 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c6e648b5-5638-4b26-beff-c0f2bac003fc","Type":"ContainerDied","Data":"794beaac38038e4ca435786b31b69ee93644f024423b742e8bfb1c942fe81076"}
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.423728 4945 status_manager.go:851] "Failed to get status for pod" podUID="c6e648b5-5638-4b26-beff-c0f2bac003fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.424266 4945 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.424471 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e034fae29dc5125a4fce9cc4427883ec503e7c4969e6558523a34cb61c234f04"}
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.425180 4945 status_manager.go:851] "Failed to get status for pod" podUID="c6e648b5-5638-4b26-beff-c0f2bac003fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:20 crc kubenswrapper[4945]: I0108 23:19:20.425566 4945 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:21 crc kubenswrapper[4945]: I0108 23:19:21.432017 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3ed9c82d33fbb6467e48186bdd9ac01d41eeefd25e091edb98d9565626e2a5e6"}
Jan 08 23:19:21 crc kubenswrapper[4945]: I0108 23:19:21.437620 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 08 23:19:21 crc kubenswrapper[4945]: I0108 23:19:21.688037 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 08 23:19:21 crc kubenswrapper[4945]: I0108 23:19:21.689236 4945 status_manager.go:851] "Failed to get status for pod" podUID="c6e648b5-5638-4b26-beff-c0f2bac003fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:21 crc kubenswrapper[4945]: I0108 23:19:21.690100 4945 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:21 crc kubenswrapper[4945]: I0108 23:19:21.834552 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6e648b5-5638-4b26-beff-c0f2bac003fc-kube-api-access\") pod \"c6e648b5-5638-4b26-beff-c0f2bac003fc\" (UID: \"c6e648b5-5638-4b26-beff-c0f2bac003fc\") "
Jan 08 23:19:21 crc kubenswrapper[4945]: I0108 23:19:21.834603 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6e648b5-5638-4b26-beff-c0f2bac003fc-kubelet-dir\") pod \"c6e648b5-5638-4b26-beff-c0f2bac003fc\" (UID: \"c6e648b5-5638-4b26-beff-c0f2bac003fc\") "
Jan 08 23:19:21 crc kubenswrapper[4945]: I0108 23:19:21.834738 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6e648b5-5638-4b26-beff-c0f2bac003fc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c6e648b5-5638-4b26-beff-c0f2bac003fc" (UID: "c6e648b5-5638-4b26-beff-c0f2bac003fc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 08 23:19:21 crc kubenswrapper[4945]: I0108 23:19:21.834765 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c6e648b5-5638-4b26-beff-c0f2bac003fc-var-lock\") pod \"c6e648b5-5638-4b26-beff-c0f2bac003fc\" (UID: \"c6e648b5-5638-4b26-beff-c0f2bac003fc\") "
Jan 08 23:19:21 crc kubenswrapper[4945]: I0108 23:19:21.834812 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6e648b5-5638-4b26-beff-c0f2bac003fc-var-lock" (OuterVolumeSpecName: "var-lock") pod "c6e648b5-5638-4b26-beff-c0f2bac003fc" (UID: "c6e648b5-5638-4b26-beff-c0f2bac003fc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 08 23:19:21 crc kubenswrapper[4945]: I0108 23:19:21.835555 4945 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6e648b5-5638-4b26-beff-c0f2bac003fc-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 08 23:19:21 crc kubenswrapper[4945]: I0108 23:19:21.835579 4945 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c6e648b5-5638-4b26-beff-c0f2bac003fc-var-lock\") on node \"crc\" DevicePath \"\""
Jan 08 23:19:21 crc kubenswrapper[4945]: I0108 23:19:21.843538 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e648b5-5638-4b26-beff-c0f2bac003fc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c6e648b5-5638-4b26-beff-c0f2bac003fc" (UID: "c6e648b5-5638-4b26-beff-c0f2bac003fc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:19:21 crc kubenswrapper[4945]: I0108 23:19:21.959371 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6e648b5-5638-4b26-beff-c0f2bac003fc-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 08 23:19:22 crc kubenswrapper[4945]: E0108 23:19:22.115614 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podc6e648b5_5638_4b26_beff_c0f2bac003fc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f.scope\": RecentStats: unable to find data in memory cache]"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.188795 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.190197 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.190953 4945 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.191482 4945 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.191905 4945 status_manager.go:851] "Failed to get status for pod" podUID="c6e648b5-5638-4b26-beff-c0f2bac003fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.364672 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.364783 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.364791 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.364848 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.364857 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.365023 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.365340 4945 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.365363 4945 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.365373 4945 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.450200 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.451614 4945 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f" exitCode=0
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.451802 4945 scope.go:117] "RemoveContainer" containerID="ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.451826 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.455790 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c6e648b5-5638-4b26-beff-c0f2bac003fc","Type":"ContainerDied","Data":"36aaa1e2dd387de613b32e804caaa4639b5a6a9de573e164dc7aa790461c72f6"}
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.455847 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.455852 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36aaa1e2dd387de613b32e804caaa4639b5a6a9de573e164dc7aa790461c72f6"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.464167 4945 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.464762 4945 status_manager.go:851] "Failed to get status for pod" podUID="c6e648b5-5638-4b26-beff-c0f2bac003fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.465331 4945 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.476977 4945 scope.go:117] "RemoveContainer" containerID="ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.480731 4945 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.481300 4945 status_manager.go:851] "Failed to get status for pod" podUID="c6e648b5-5638-4b26-beff-c0f2bac003fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.481645 4945 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.498179 4945 scope.go:117] "RemoveContainer" containerID="136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.511864 4945 scope.go:117] "RemoveContainer" containerID="a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.529761 4945 scope.go:117] "RemoveContainer" containerID="f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.553949 4945 scope.go:117] "RemoveContainer" containerID="6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.579949 4945 scope.go:117] "RemoveContainer" containerID="ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6"
Jan 08 23:19:22 crc kubenswrapper[4945]: E0108 23:19:22.580509 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\": container with ID starting with ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6 not found: ID does not exist" containerID="ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.580556 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6"} err="failed to get container status \"ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\": rpc error: code = NotFound desc = could not find container \"ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6\": container with ID starting with ca4549e1b96e7e9c90af3d88af526663c61ceafec28e105d3017dba20a698be6 not found: ID does not exist"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.580620 4945 scope.go:117] "RemoveContainer" containerID="ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85"
Jan 08 23:19:22 crc kubenswrapper[4945]: E0108 23:19:22.581201 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\": container with ID starting with ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85 not found: ID does not exist" containerID="ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.581273 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85"} err="failed to get container status \"ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\": rpc error: code = NotFound desc = could not find container \"ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85\": container with ID starting with ccccf9248bacb5fc4341cd457516b2ff297ce6783c1fdfb94817c69001575f85 not found: ID does not exist"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.581407 4945 scope.go:117] "RemoveContainer" containerID="136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f"
Jan 08 23:19:22 crc kubenswrapper[4945]: E0108 23:19:22.581812 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\": container with ID starting with 136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f not found: ID does not exist" containerID="136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.581857 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f"} err="failed to get container status \"136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\": rpc error: code = NotFound desc = could not find container \"136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f\": container with ID starting with 136e8e589651495af16c3a90948d813c037776f090785e3b81dbd81160d8d52f not found: ID does not exist"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.581883 4945 scope.go:117] "RemoveContainer" containerID="a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248"
Jan 08 23:19:22 crc kubenswrapper[4945]: E0108 23:19:22.582493 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\": container with ID starting with a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248 not found: ID does not exist" containerID="a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.582527 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248"} err="failed to get container status \"a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\": rpc error: code = NotFound desc = could not find container \"a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248\": container with ID starting with a400cdf1af980efe4f9c2fad9ff2535393d0cd285e3777c3fb4339ec50c19248 not found: ID does not exist"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.582549 4945 scope.go:117] "RemoveContainer" containerID="f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f"
Jan 08 23:19:22 crc kubenswrapper[4945]: E0108 23:19:22.582821 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\": container with ID starting with f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f not found: ID does not exist" containerID="f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.582854 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f"} err="failed to get container status \"f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\": rpc error: code = NotFound desc = could not find container \"f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f\": container with ID starting with f3bd9aeccb4289d549596c0f139d97ab32977d66edf935337a58dd8e55bda39f not found: ID does not exist"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.582874 4945 scope.go:117] "RemoveContainer" containerID="6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28"
Jan 08 23:19:22 crc kubenswrapper[4945]: E0108 23:19:22.583282 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\": container with ID starting with 6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28 not found: ID does not exist" containerID="6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28"
Jan 08 23:19:22 crc kubenswrapper[4945]: I0108 23:19:22.583329 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28"} err="failed to get container status \"6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\": rpc error: code = NotFound desc = could not find container \"6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28\": container with ID starting with 6cebf7486b9cbe89d207429912627fd535faf5e2c0b36dad288a032fa5a8eb28 not found: ID does not exist"
Jan 08 23:19:24 crc kubenswrapper[4945]: I0108 23:19:24.009226 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 08 23:19:24 crc kubenswrapper[4945]: E0108 23:19:24.940268 4945 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:24 crc kubenswrapper[4945]: E0108 23:19:24.940732 4945 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:24 crc kubenswrapper[4945]: E0108 23:19:24.941450 4945 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:24 crc kubenswrapper[4945]: E0108 23:19:24.941929 4945 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:24 crc kubenswrapper[4945]: E0108 23:19:24.942508 4945 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:24 crc kubenswrapper[4945]: I0108 23:19:24.942579 4945 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 08 23:19:24 crc kubenswrapper[4945]: E0108 23:19:24.943387 4945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="200ms"
Jan 08 23:19:25 crc kubenswrapper[4945]: E0108 23:19:25.145100 4945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="400ms"
Jan 08 23:19:25 crc kubenswrapper[4945]: E0108 23:19:25.546482 4945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="800ms"
Jan 08 23:19:26 crc kubenswrapper[4945]: E0108 23:19:26.348228 4945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="1.6s"
Jan 08 23:19:26 crc kubenswrapper[4945]: E0108 23:19:26.817373 4945 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.74:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1888e4be68ce8592 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-08 23:19:20.196203922 +0000 UTC m=+230.507362868,LastTimestamp:2026-01-08 23:19:20.196203922 +0000 UTC m=+230.507362868,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 08 23:19:27 crc kubenswrapper[4945]: E0108 23:19:27.949050 4945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="3.2s"
Jan 08 23:19:30 crc kubenswrapper[4945]: I0108 23:19:30.006383 4945 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:30 crc kubenswrapper[4945]: I0108 23:19:30.006855 4945 status_manager.go:851] "Failed to get status for pod" podUID="c6e648b5-5638-4b26-beff-c0f2bac003fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:31 crc kubenswrapper[4945]: E0108 23:19:31.149602 4945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.74:6443: connect: connection refused" interval="6.4s"
Jan 08 23:19:32 crc kubenswrapper[4945]: I0108 23:19:32.000026 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:32 crc kubenswrapper[4945]: I0108 23:19:32.000784 4945 status_manager.go:851] "Failed to get status for pod" podUID="c6e648b5-5638-4b26-beff-c0f2bac003fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:32 crc kubenswrapper[4945]: I0108 23:19:32.001746 4945 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:32 crc kubenswrapper[4945]: I0108 23:19:32.020341 4945 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f499c197-c4c1-4fc7-95b2-c797e8ce9682"
Jan 08 23:19:32 crc kubenswrapper[4945]: I0108 23:19:32.020382 4945 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f499c197-c4c1-4fc7-95b2-c797e8ce9682"
Jan 08 23:19:32 crc kubenswrapper[4945]: E0108 23:19:32.020924 4945 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:32 crc kubenswrapper[4945]: I0108 23:19:32.021484 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:32 crc kubenswrapper[4945]: W0108 23:19:32.042284 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-104bd912259bec37695098ed785e6577956ce51105a520da786d1854aa42a317 WatchSource:0}: Error finding container 104bd912259bec37695098ed785e6577956ce51105a520da786d1854aa42a317: Status 404 returned error can't find the container with id 104bd912259bec37695098ed785e6577956ce51105a520da786d1854aa42a317
Jan 08 23:19:32 crc kubenswrapper[4945]: I0108 23:19:32.529608 4945 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4434cd283183ff38b13441b379a8904687410f5d0c791ca100b3014fc51dcaf2" exitCode=0
Jan 08 23:19:32 crc kubenswrapper[4945]: I0108 23:19:32.529707 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4434cd283183ff38b13441b379a8904687410f5d0c791ca100b3014fc51dcaf2"}
Jan 08 23:19:32 crc kubenswrapper[4945]: I0108 23:19:32.529901 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"104bd912259bec37695098ed785e6577956ce51105a520da786d1854aa42a317"}
Jan 08 23:19:32 crc kubenswrapper[4945]: I0108 23:19:32.530171 4945 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f499c197-c4c1-4fc7-95b2-c797e8ce9682"
Jan 08 23:19:32 crc kubenswrapper[4945]: I0108 23:19:32.530185 4945 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f499c197-c4c1-4fc7-95b2-c797e8ce9682"
Jan 08 23:19:32 crc kubenswrapper[4945]: E0108 23:19:32.530653 4945 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.74:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:32 crc kubenswrapper[4945]: I0108 23:19:32.530653 4945 status_manager.go:851] "Failed to get status for pod" podUID="c6e648b5-5638-4b26-beff-c0f2bac003fc" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:32 crc kubenswrapper[4945]: I0108 23:19:32.531071 4945 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.74:6443: connect: connection refused"
Jan 08 23:19:33 crc kubenswrapper[4945]: I0108 23:19:33.537646 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a4465324fe1ef2cfdeb8fdbfe512f505deecf11450bc347f896940ce7e994a79"}
Jan 08 23:19:33 crc kubenswrapper[4945]: I0108 23:19:33.537962 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"25871d64369aa0a746658ad74232dcc11437d240b67c6a5dc6f33a8a44578352"}
Jan 08 23:19:33 crc kubenswrapper[4945]: I0108 23:19:33.537974 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f290e2e38f0f7a8d44d261f3c5994993ea46ac0340834645b0f48c0627c44f36"}
Jan 08 23:19:33 crc kubenswrapper[4945]: I0108 23:19:33.537984 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1b606069635c13502b0d4d1fe8e01fa0d9e4b41431f0f28eb6992d4d7aee9836"}
Jan 08 23:19:34 crc kubenswrapper[4945]: I0108 23:19:34.545887 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 08 23:19:34 crc kubenswrapper[4945]: I0108 23:19:34.545951 4945 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21" exitCode=1
Jan 08 23:19:34 crc kubenswrapper[4945]: I0108 23:19:34.546064 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21"}
Jan 08 23:19:34 crc kubenswrapper[4945]: I0108 23:19:34.546585 4945 scope.go:117] "RemoveContainer" containerID="448b44bf474fedb4c76cea460a1eab57d2d17453b58634ab16372a239dc5bb21"
Jan 08 23:19:34 crc kubenswrapper[4945]: I0108 23:19:34.551285 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cb26ce36d21dccb74bb52e63e8589bb31de97531158f2136b0338e10a27e021c"}
Jan 08 23:19:34 crc kubenswrapper[4945]: I0108 23:19:34.551465 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:34 crc kubenswrapper[4945]: I0108 23:19:34.551565 4945 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f499c197-c4c1-4fc7-95b2-c797e8ce9682"
Jan 08 23:19:34 crc kubenswrapper[4945]: I0108 23:19:34.551587 4945 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f499c197-c4c1-4fc7-95b2-c797e8ce9682"
Jan 08 23:19:35 crc kubenswrapper[4945]: I0108 23:19:35.559856 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 08 23:19:35 crc kubenswrapper[4945]: I0108 23:19:35.560214 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b06258e10a014b6010187d2d6f622d2e43f1887f1063b8a045bd476c8263cd54"}
Jan 08 23:19:35 crc kubenswrapper[4945]: I0108 23:19:35.652974 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 08 23:19:35 crc kubenswrapper[4945]: I0108 23:19:35.656884 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 08 23:19:36 crc kubenswrapper[4945]: I0108 23:19:36.566800 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 08 23:19:37 crc kubenswrapper[4945]: I0108 23:19:37.021912 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:37 crc kubenswrapper[4945]: I0108 23:19:37.021958 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:37 crc kubenswrapper[4945]: I0108 23:19:37.026319 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:39 crc kubenswrapper[4945]: I0108 23:19:39.582788 4945 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:40 crc kubenswrapper[4945]: I0108 23:19:40.014684 4945 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="407ce0b6-d69c-4606-9e09-f885685a6735"
Jan 08 23:19:40 crc kubenswrapper[4945]: I0108 23:19:40.587192 4945 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f499c197-c4c1-4fc7-95b2-c797e8ce9682"
Jan 08 23:19:40 crc kubenswrapper[4945]: I0108 23:19:40.587225 4945 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f499c197-c4c1-4fc7-95b2-c797e8ce9682"
Jan 08 23:19:40 crc kubenswrapper[4945]: I0108 23:19:40.589810 4945 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="407ce0b6-d69c-4606-9e09-f885685a6735"
Jan 08 23:19:40 crc kubenswrapper[4945]: I0108 23:19:40.591679 4945 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://1b606069635c13502b0d4d1fe8e01fa0d9e4b41431f0f28eb6992d4d7aee9836"
Jan 08 23:19:40 crc kubenswrapper[4945]: I0108 23:19:40.591701 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 08 23:19:41 crc kubenswrapper[4945]: I0108 23:19:41.591238 4945 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f499c197-c4c1-4fc7-95b2-c797e8ce9682"
Jan 08 23:19:41 crc kubenswrapper[4945]: I0108 23:19:41.591267 4945 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f499c197-c4c1-4fc7-95b2-c797e8ce9682"
Jan 08 23:19:41 crc kubenswrapper[4945]: I0108 23:19:41.593715 4945 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="407ce0b6-d69c-4606-9e09-f885685a6735"
Jan 08 23:19:48 crc kubenswrapper[4945]: I0108 23:19:48.155112 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 08 23:19:49 crc kubenswrapper[4945]: I0108 23:19:49.576391 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 08 23:19:49 crc kubenswrapper[4945]: I0108 23:19:49.802188 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 08 23:19:50 crc kubenswrapper[4945]: I0108 23:19:50.791680 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 08 23:19:50 crc kubenswrapper[4945]: I0108 23:19:50.837012 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 08 23:19:50 crc kubenswrapper[4945]: I0108 23:19:50.967769 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 08 23:19:51 crc kubenswrapper[4945]: I0108 23:19:51.661298 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 08 23:19:51 crc kubenswrapper[4945]: I0108 23:19:51.683489 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 08 23:19:51 crc kubenswrapper[4945]: I0108 23:19:51.691530 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 08 23:19:51 crc kubenswrapper[4945]: I0108 23:19:51.833938 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 08 23:19:51 crc kubenswrapper[4945]: I0108 23:19:51.890931 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 08 23:19:52 crc kubenswrapper[4945]: I0108 23:19:52.055554 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 08 23:19:52 crc kubenswrapper[4945]: I0108 23:19:52.057119 4945 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 08 23:19:52 crc kubenswrapper[4945]: I0108 23:19:52.208661 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 08 23:19:52 crc kubenswrapper[4945]: I0108 23:19:52.509155 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 08 23:19:52 crc kubenswrapper[4945]: I0108 23:19:52.572039 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 08 23:19:52 crc kubenswrapper[4945]: I0108 23:19:52.613447 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 08 23:19:52 crc kubenswrapper[4945]: I0108 23:19:52.636376 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 08 23:19:52 crc kubenswrapper[4945]: I0108 23:19:52.728493 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 08 23:19:52 crc kubenswrapper[4945]: I0108 23:19:52.841628 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 08 23:19:52 crc kubenswrapper[4945]: I0108 23:19:52.908203 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 08 23:19:52 crc kubenswrapper[4945]: I0108 23:19:52.908418 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 08 23:19:52 crc kubenswrapper[4945]: I0108 23:19:52.921142 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 08 23:19:52 crc kubenswrapper[4945]: I0108 23:19:52.927067 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 08 23:19:52 crc kubenswrapper[4945]: I0108 23:19:52.957226 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 08 23:19:52 crc kubenswrapper[4945]: I0108 23:19:52.971656 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 08 23:19:52 crc kubenswrapper[4945]: I0108 23:19:52.991566 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 08 23:19:53 crc kubenswrapper[4945]: I0108 23:19:53.232756 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 08 23:19:53 crc kubenswrapper[4945]: I0108 23:19:53.300391 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 08 23:19:53 crc kubenswrapper[4945]: I0108 23:19:53.369224 4945 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 08 23:19:53 crc kubenswrapper[4945]: I0108 23:19:53.438841 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 08 23:19:53 crc kubenswrapper[4945]: I0108 23:19:53.525607 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 08 23:19:53 crc kubenswrapper[4945]: I0108 23:19:53.549120 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 08 23:19:53 crc kubenswrapper[4945]: I0108 23:19:53.616948 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 08 23:19:53 crc kubenswrapper[4945]: I0108 23:19:53.657186 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 08 23:19:53 crc kubenswrapper[4945]: I0108 23:19:53.665840 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 08 23:19:53 crc kubenswrapper[4945]: I0108 23:19:53.779773 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 08 23:19:53 crc kubenswrapper[4945]: I0108 23:19:53.781498 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 08 23:19:53 crc kubenswrapper[4945]: I0108 23:19:53.918394 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 08 23:19:53 crc kubenswrapper[4945]: I0108 23:19:53.937913 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 08 23:19:53 crc kubenswrapper[4945]: I0108 23:19:53.950981 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 08 23:19:53 crc kubenswrapper[4945]: I0108 23:19:53.953688 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 08 23:19:53 crc kubenswrapper[4945]: I0108 23:19:53.989523 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 08 23:19:54 crc kubenswrapper[4945]: I0108 23:19:54.080436 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 08 23:19:54 crc kubenswrapper[4945]: I0108 23:19:54.114169 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 08 23:19:54 crc kubenswrapper[4945]: I0108 23:19:54.119882 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 08 23:19:54 crc kubenswrapper[4945]: I0108 23:19:54.124571 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 08 23:19:54 crc kubenswrapper[4945]: I0108 23:19:54.174416 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 08 23:19:54 crc kubenswrapper[4945]: 
I0108 23:19:54.220647 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 08 23:19:54 crc kubenswrapper[4945]: I0108 23:19:54.405746 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 08 23:19:54 crc kubenswrapper[4945]: I0108 23:19:54.528134 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 08 23:19:54 crc kubenswrapper[4945]: I0108 23:19:54.531141 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 08 23:19:54 crc kubenswrapper[4945]: I0108 23:19:54.539133 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 08 23:19:54 crc kubenswrapper[4945]: I0108 23:19:54.551372 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 08 23:19:54 crc kubenswrapper[4945]: I0108 23:19:54.622841 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 08 23:19:54 crc kubenswrapper[4945]: I0108 23:19:54.774324 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 08 23:19:54 crc kubenswrapper[4945]: I0108 23:19:54.782014 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 08 23:19:54 crc kubenswrapper[4945]: I0108 23:19:54.797522 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 08 23:19:54 crc kubenswrapper[4945]: I0108 23:19:54.900606 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 08 23:19:55 crc kubenswrapper[4945]: I0108 23:19:55.158687 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 08 23:19:55 crc kubenswrapper[4945]: I0108 23:19:55.191178 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 08 23:19:55 crc kubenswrapper[4945]: I0108 23:19:55.253218 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 08 23:19:55 crc kubenswrapper[4945]: I0108 23:19:55.279416 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 08 23:19:55 crc kubenswrapper[4945]: I0108 23:19:55.300227 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 08 23:19:55 crc kubenswrapper[4945]: I0108 23:19:55.358453 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 08 23:19:55 crc kubenswrapper[4945]: I0108 23:19:55.428715 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 08 23:19:55 crc kubenswrapper[4945]: I0108 23:19:55.582381 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 08 23:19:55 crc kubenswrapper[4945]: I0108 23:19:55.621073 4945 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 08 23:19:55 crc kubenswrapper[4945]: I0108 23:19:55.715354 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 08 23:19:55 crc kubenswrapper[4945]: I0108 23:19:55.716176 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 08 23:19:55 crc kubenswrapper[4945]: I0108 23:19:55.802035 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 08 23:19:55 crc kubenswrapper[4945]: I0108 23:19:55.835616 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 08 23:19:55 crc kubenswrapper[4945]: I0108 23:19:55.864257 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 08 23:19:55 crc kubenswrapper[4945]: I0108 23:19:55.944812 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 08 23:19:55 crc kubenswrapper[4945]: I0108 23:19:55.953086 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.001846 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.047922 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.117217 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.209114 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.211411 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.239466 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.277682 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.391796 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.402454 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.432966 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.527961 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.568046 4945 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.660822 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.689263 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.706101 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.735471 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.767703 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.808914 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 08 23:19:56 crc kubenswrapper[4945]: I0108 23:19:56.920281 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.048710 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.117364 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.157341 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.168955 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.179154 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.187446 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.240627 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.265654 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.270927 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.281616 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.324864 4945 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.361051 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.364376 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.479327 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.510222 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.556586 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.650951 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.737140 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.750347 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.843168 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.850506 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.887140 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.904615 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 08 23:19:57 crc kubenswrapper[4945]: I0108 23:19:57.963622 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.048569 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.050848 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.091211 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.091916 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.101108 4945 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.190648 4945 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"kube-root-ca.crt" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.214883 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.361961 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.402138 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.451747 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.561388 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.590517 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.667931 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.668119 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.683664 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.691205 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.744929 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.828280 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.829493 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.831315 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.840529 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.867836 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.939305 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.939709 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.972775 4945 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 08 23:19:58 crc kubenswrapper[4945]: I0108 23:19:58.984036 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 08 23:19:59 crc kubenswrapper[4945]: I0108 23:19:59.084552 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 08 23:19:59 crc kubenswrapper[4945]: I0108 23:19:59.110564 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 08 23:19:59 crc kubenswrapper[4945]: I0108 23:19:59.134930 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 08 23:19:59 crc kubenswrapper[4945]: I0108 23:19:59.141587 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 08 23:19:59 crc kubenswrapper[4945]: I0108 23:19:59.143707 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 08 23:19:59 crc kubenswrapper[4945]: I0108 23:19:59.153147 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 08 23:19:59 crc kubenswrapper[4945]: I0108 23:19:59.185714 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 08 23:19:59 crc kubenswrapper[4945]: I0108 23:19:59.212360 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 08 23:19:59 crc kubenswrapper[4945]: I0108 23:19:59.282632 4945 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 08 23:19:59 crc kubenswrapper[4945]: I0108 23:19:59.324484 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 08 23:19:59 crc kubenswrapper[4945]: I0108 23:19:59.339570 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 08 23:19:59 crc kubenswrapper[4945]: I0108 23:19:59.340396 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 08 23:19:59 crc kubenswrapper[4945]: I0108 23:19:59.409601 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 08 23:19:59 crc kubenswrapper[4945]: I0108 23:19:59.611679 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 08 23:19:59 crc kubenswrapper[4945]: I0108 23:19:59.638204 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 08 23:19:59 crc kubenswrapper[4945]: I0108 23:19:59.779969 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.008439 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.043628 4945 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.203645 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.311377 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.403077 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.472064 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.476692 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.478377 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.639839 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.640052 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.780249 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.828180 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.830214 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.878268 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.880215 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.883921 4945 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.891609 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.891573189 podStartE2EDuration="41.891573189s" podCreationTimestamp="2026-01-08 23:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:19:39.633701464 +0000 UTC m=+249.944860410" watchObservedRunningTime="2026-01-08 23:20:00.891573189 +0000 UTC m=+271.202732165" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.897084 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 
23:20:00.897177 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.901443 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.913333 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.913317751 podStartE2EDuration="21.913317751s" podCreationTimestamp="2026-01-08 23:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:20:00.912067221 +0000 UTC m=+271.223226177" watchObservedRunningTime="2026-01-08 23:20:00.913317751 +0000 UTC m=+271.224476697" Jan 08 23:20:00 crc kubenswrapper[4945]: I0108 23:20:00.959219 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 08 23:20:01 crc kubenswrapper[4945]: I0108 23:20:01.017585 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 08 23:20:01 crc kubenswrapper[4945]: I0108 23:20:01.131884 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 08 23:20:01 crc kubenswrapper[4945]: I0108 23:20:01.257346 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 08 23:20:01 crc kubenswrapper[4945]: I0108 23:20:01.285805 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 08 23:20:01 crc kubenswrapper[4945]: I0108 23:20:01.294037 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 08 23:20:01 crc kubenswrapper[4945]: I0108 23:20:01.492872 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 08 23:20:01 crc kubenswrapper[4945]: I0108 23:20:01.536245 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 08 23:20:01 crc kubenswrapper[4945]: I0108 23:20:01.642644 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 08 23:20:01 crc kubenswrapper[4945]: I0108 23:20:01.658610 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 08 23:20:01 crc kubenswrapper[4945]: I0108 23:20:01.722493 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 08 23:20:01 crc kubenswrapper[4945]: I0108 23:20:01.785448 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 08 23:20:01 crc kubenswrapper[4945]: I0108 23:20:01.887027 4945 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 08 23:20:01 crc kubenswrapper[4945]: I0108 23:20:01.887260 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://3ed9c82d33fbb6467e48186bdd9ac01d41eeefd25e091edb98d9565626e2a5e6" gracePeriod=5 Jan 08 23:20:01 crc kubenswrapper[4945]: I0108 23:20:01.919038 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 08 23:20:01 crc kubenswrapper[4945]: I0108 23:20:01.935646 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 08 23:20:02 crc kubenswrapper[4945]: I0108 23:20:02.051514 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 08 23:20:02 crc kubenswrapper[4945]: I0108 23:20:02.152258 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 08 23:20:02 crc kubenswrapper[4945]: I0108 23:20:02.154843 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 08 23:20:02 crc kubenswrapper[4945]: I0108 23:20:02.233843 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 08 23:20:02 crc kubenswrapper[4945]: I0108 23:20:02.273402 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 08 23:20:02 crc kubenswrapper[4945]: I0108 23:20:02.628433 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 08 23:20:02 crc kubenswrapper[4945]: I0108 23:20:02.628466 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 08 23:20:02 crc kubenswrapper[4945]: I0108 23:20:02.670326 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 08 23:20:02 crc kubenswrapper[4945]: I0108 23:20:02.723729 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 08 23:20:02 crc kubenswrapper[4945]: I0108 23:20:02.782279 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 08 23:20:02 crc kubenswrapper[4945]: I0108 23:20:02.795383 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 08 23:20:02 crc kubenswrapper[4945]: I0108 23:20:02.810387 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 08 23:20:02 crc kubenswrapper[4945]: I0108 23:20:02.838872 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 08 23:20:02 crc kubenswrapper[4945]: I0108 23:20:02.848015 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 08 23:20:02 crc kubenswrapper[4945]: I0108 23:20:02.901390 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 08 23:20:03 crc kubenswrapper[4945]: I0108 23:20:03.263531 4945 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 08 23:20:03 crc kubenswrapper[4945]: I0108 23:20:03.299199 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 08 23:20:03 crc kubenswrapper[4945]: I0108 23:20:03.356900 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 08 23:20:03 crc kubenswrapper[4945]: I0108 23:20:03.486540 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 08 23:20:03 crc kubenswrapper[4945]: I0108 23:20:03.493566 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 08 23:20:03 crc kubenswrapper[4945]: I0108 23:20:03.690296 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 08 23:20:03 crc kubenswrapper[4945]: I0108 23:20:03.784724 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 08 23:20:03 crc kubenswrapper[4945]: I0108 23:20:03.803420 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 08 23:20:03 crc kubenswrapper[4945]: I0108 23:20:03.869084 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 08 23:20:03 crc kubenswrapper[4945]: I0108 23:20:03.958062 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 08 23:20:04 crc kubenswrapper[4945]: I0108 23:20:04.080236 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 08 23:20:04 crc kubenswrapper[4945]: I0108 23:20:04.124553 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 08 23:20:04 crc kubenswrapper[4945]: I0108 23:20:04.198776 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 08 23:20:04 crc kubenswrapper[4945]: I0108 23:20:04.215375 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 08 23:20:04 crc kubenswrapper[4945]: I0108 23:20:04.366985 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 08 23:20:04 crc kubenswrapper[4945]: I0108 23:20:04.399879 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 08 23:20:04 crc kubenswrapper[4945]: I0108 23:20:04.480156 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 08 23:20:04 crc kubenswrapper[4945]: I0108 23:20:04.496611 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 08 23:20:04 crc kubenswrapper[4945]: I0108 23:20:04.580543 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 08 23:20:04 crc kubenswrapper[4945]: I0108 23:20:04.717239 4945 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"openshift-service-ca.crt" Jan 08 23:20:04 crc kubenswrapper[4945]: I0108 23:20:04.842869 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 08 23:20:04 crc kubenswrapper[4945]: I0108 23:20:04.886240 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 08 23:20:05 crc kubenswrapper[4945]: I0108 23:20:05.059428 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 08 23:20:05 crc kubenswrapper[4945]: I0108 23:20:05.068800 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 08 23:20:05 crc kubenswrapper[4945]: I0108 23:20:05.160983 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 08 23:20:05 crc kubenswrapper[4945]: I0108 23:20:05.228627 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 08 23:20:05 crc kubenswrapper[4945]: I0108 23:20:05.238837 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 08 23:20:05 crc kubenswrapper[4945]: I0108 23:20:05.293583 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 08 23:20:05 crc kubenswrapper[4945]: I0108 23:20:05.327752 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 08 23:20:05 crc kubenswrapper[4945]: I0108 23:20:05.332942 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 08 23:20:05 crc kubenswrapper[4945]: I0108 23:20:05.336840 4945 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 08 23:20:05 crc kubenswrapper[4945]: I0108 23:20:05.430931 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 08 23:20:05 crc kubenswrapper[4945]: I0108 23:20:05.454663 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 08 23:20:05 crc kubenswrapper[4945]: I0108 23:20:05.692127 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 08 23:20:05 crc kubenswrapper[4945]: I0108 23:20:05.987094 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 08 23:20:06 crc kubenswrapper[4945]: I0108 23:20:06.000362 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 08 23:20:06 crc kubenswrapper[4945]: I0108 23:20:06.024123 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 08 23:20:06 crc kubenswrapper[4945]: I0108 23:20:06.058299 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 08 23:20:06 crc kubenswrapper[4945]: I0108 23:20:06.159716 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 08 23:20:06 crc 
kubenswrapper[4945]: I0108 23:20:06.240215 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 08 23:20:06 crc kubenswrapper[4945]: I0108 23:20:06.624388 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 08 23:20:06 crc kubenswrapper[4945]: I0108 23:20:06.830760 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 08 23:20:06 crc kubenswrapper[4945]: I0108 23:20:06.850337 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 08 23:20:06 crc kubenswrapper[4945]: I0108 23:20:06.948254 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.115633 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.254132 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.486031 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.487939 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.488037 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.636497 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.636573 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.636630 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.636631 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.636656 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.636677 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.636695 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.636720 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.636782 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.637120 4945 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.637147 4945 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.637167 4945 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.637186 4945 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.656333 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.738353 4945 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.750546 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.750593 4945 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="3ed9c82d33fbb6467e48186bdd9ac01d41eeefd25e091edb98d9565626e2a5e6" exitCode=137 Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.750634 4945 scope.go:117] "RemoveContainer" containerID="3ed9c82d33fbb6467e48186bdd9ac01d41eeefd25e091edb98d9565626e2a5e6" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.750715 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.772917 4945 scope.go:117] "RemoveContainer" containerID="3ed9c82d33fbb6467e48186bdd9ac01d41eeefd25e091edb98d9565626e2a5e6" Jan 08 23:20:07 crc kubenswrapper[4945]: E0108 23:20:07.773733 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ed9c82d33fbb6467e48186bdd9ac01d41eeefd25e091edb98d9565626e2a5e6\": container with ID starting with 3ed9c82d33fbb6467e48186bdd9ac01d41eeefd25e091edb98d9565626e2a5e6 not found: ID does not exist" containerID="3ed9c82d33fbb6467e48186bdd9ac01d41eeefd25e091edb98d9565626e2a5e6" Jan 08 23:20:07 crc kubenswrapper[4945]: I0108 23:20:07.773768 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ed9c82d33fbb6467e48186bdd9ac01d41eeefd25e091edb98d9565626e2a5e6"} err="failed to get container status \"3ed9c82d33fbb6467e48186bdd9ac01d41eeefd25e091edb98d9565626e2a5e6\": rpc error: code = NotFound desc = could not find container \"3ed9c82d33fbb6467e48186bdd9ac01d41eeefd25e091edb98d9565626e2a5e6\": container with ID starting with 3ed9c82d33fbb6467e48186bdd9ac01d41eeefd25e091edb98d9565626e2a5e6 not found: ID does not exist" Jan 08 23:20:08 crc kubenswrapper[4945]: I0108 23:20:08.006474 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 08 23:20:08 crc kubenswrapper[4945]: I0108 23:20:08.006762 4945 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 08 23:20:08 crc kubenswrapper[4945]: I0108 23:20:08.019039 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 08 23:20:08 crc kubenswrapper[4945]: I0108 23:20:08.019076 4945 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="60e8bc8d-8750-4d20-aa41-6b5b9d33603e" Jan 08 23:20:08 crc kubenswrapper[4945]: I0108 23:20:08.021834 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 08 23:20:08 crc kubenswrapper[4945]: I0108 
23:20:08.021868 4945 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="60e8bc8d-8750-4d20-aa41-6b5b9d33603e" Jan 08 23:20:09 crc kubenswrapper[4945]: I0108 23:20:09.190399 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 08 23:20:20 crc kubenswrapper[4945]: I0108 23:20:20.823885 4945 generic.go:334] "Generic (PLEG): container finished" podID="93e10fcb-3cb5-454a-bcd1-1eae918e0601" containerID="9c96693f33518d8095c526bdc3e215f17acbc21b0de6cab3a6de37d37615faa8" exitCode=0 Jan 08 23:20:20 crc kubenswrapper[4945]: I0108 23:20:20.824077 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" event={"ID":"93e10fcb-3cb5-454a-bcd1-1eae918e0601","Type":"ContainerDied","Data":"9c96693f33518d8095c526bdc3e215f17acbc21b0de6cab3a6de37d37615faa8"} Jan 08 23:20:20 crc kubenswrapper[4945]: I0108 23:20:20.825306 4945 scope.go:117] "RemoveContainer" containerID="9c96693f33518d8095c526bdc3e215f17acbc21b0de6cab3a6de37d37615faa8" Jan 08 23:20:21 crc kubenswrapper[4945]: I0108 23:20:21.835406 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" event={"ID":"93e10fcb-3cb5-454a-bcd1-1eae918e0601","Type":"ContainerStarted","Data":"3419af9f165f4e3c3b2ec449f2bdf024cf5dac115c275078d6b033dae72b375b"} Jan 08 23:20:21 crc kubenswrapper[4945]: I0108 23:20:21.837177 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" Jan 08 23:20:21 crc kubenswrapper[4945]: I0108 23:20:21.837825 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" Jan 08 23:20:26 crc kubenswrapper[4945]: I0108 23:20:26.593523 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gvrb5"] Jan 08 23:20:26 crc kubenswrapper[4945]: I0108 23:20:26.595477 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" podUID="d2534d4c-181b-45a2-8fec-118b7f17d296" containerName="controller-manager" containerID="cri-o://8462206c011a7e9b68d92a88f253129996851fd3f4e310f5feb241a57284aa27" gracePeriod=30 Jan 08 23:20:26 crc kubenswrapper[4945]: I0108 23:20:26.676978 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv"] Jan 08 23:20:26 crc kubenswrapper[4945]: I0108 23:20:26.677607 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" podUID="cb8ba07b-804c-4712-9215-6c3ea4f0d96c" containerName="route-controller-manager" containerID="cri-o://0221f941b02ab0b413610f25457de4876b70f88a740026465e92699eb83fdac5" gracePeriod=30 Jan 08 23:20:26 crc kubenswrapper[4945]: I0108 23:20:26.940555 4945 generic.go:334] "Generic (PLEG): container finished" podID="cb8ba07b-804c-4712-9215-6c3ea4f0d96c" containerID="0221f941b02ab0b413610f25457de4876b70f88a740026465e92699eb83fdac5" exitCode=0 Jan 08 23:20:26 crc kubenswrapper[4945]: I0108 23:20:26.940647 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" 
event={"ID":"cb8ba07b-804c-4712-9215-6c3ea4f0d96c","Type":"ContainerDied","Data":"0221f941b02ab0b413610f25457de4876b70f88a740026465e92699eb83fdac5"} Jan 08 23:20:26 crc kubenswrapper[4945]: I0108 23:20:26.942244 4945 generic.go:334] "Generic (PLEG): container finished" podID="d2534d4c-181b-45a2-8fec-118b7f17d296" containerID="8462206c011a7e9b68d92a88f253129996851fd3f4e310f5feb241a57284aa27" exitCode=0 Jan 08 23:20:26 crc kubenswrapper[4945]: I0108 23:20:26.942265 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" event={"ID":"d2534d4c-181b-45a2-8fec-118b7f17d296","Type":"ContainerDied","Data":"8462206c011a7e9b68d92a88f253129996851fd3f4e310f5feb241a57284aa27"} Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.056557 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.108189 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.200790 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-config\") pod \"d2534d4c-181b-45a2-8fec-118b7f17d296\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.201835 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttkrg\" (UniqueName: \"kubernetes.io/projected/d2534d4c-181b-45a2-8fec-118b7f17d296-kube-api-access-ttkrg\") pod \"d2534d4c-181b-45a2-8fec-118b7f17d296\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.201890 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-config\") pod \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\" (UID: \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\") " Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.201950 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-proxy-ca-bundles\") pod \"d2534d4c-181b-45a2-8fec-118b7f17d296\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.202011 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2534d4c-181b-45a2-8fec-118b7f17d296-serving-cert\") pod \"d2534d4c-181b-45a2-8fec-118b7f17d296\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.202054 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-config" (OuterVolumeSpecName: "config") pod "d2534d4c-181b-45a2-8fec-118b7f17d296" (UID: "d2534d4c-181b-45a2-8fec-118b7f17d296"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.202057 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpjzw\" (UniqueName: \"kubernetes.io/projected/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-kube-api-access-vpjzw\") pod \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\" (UID: \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\") " Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.202157 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-client-ca\") pod \"d2534d4c-181b-45a2-8fec-118b7f17d296\" (UID: \"d2534d4c-181b-45a2-8fec-118b7f17d296\") " Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.202387 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-client-ca\") pod \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\" (UID: \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\") " Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.202413 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-serving-cert\") pod \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\" (UID: \"cb8ba07b-804c-4712-9215-6c3ea4f0d96c\") " Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.202870 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-client-ca" (OuterVolumeSpecName: "client-ca") pod "d2534d4c-181b-45a2-8fec-118b7f17d296" (UID: "d2534d4c-181b-45a2-8fec-118b7f17d296"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.202859 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-config" (OuterVolumeSpecName: "config") pod "cb8ba07b-804c-4712-9215-6c3ea4f0d96c" (UID: "cb8ba07b-804c-4712-9215-6c3ea4f0d96c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.203143 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-client-ca" (OuterVolumeSpecName: "client-ca") pod "cb8ba07b-804c-4712-9215-6c3ea4f0d96c" (UID: "cb8ba07b-804c-4712-9215-6c3ea4f0d96c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.203481 4945 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.203549 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.203564 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.203577 4945 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-client-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.204191 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d2534d4c-181b-45a2-8fec-118b7f17d296" (UID: "d2534d4c-181b-45a2-8fec-118b7f17d296"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.208932 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cb8ba07b-804c-4712-9215-6c3ea4f0d96c" (UID: "cb8ba07b-804c-4712-9215-6c3ea4f0d96c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.209244 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-kube-api-access-vpjzw" (OuterVolumeSpecName: "kube-api-access-vpjzw") pod "cb8ba07b-804c-4712-9215-6c3ea4f0d96c" (UID: "cb8ba07b-804c-4712-9215-6c3ea4f0d96c"). InnerVolumeSpecName "kube-api-access-vpjzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.209274 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2534d4c-181b-45a2-8fec-118b7f17d296-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d2534d4c-181b-45a2-8fec-118b7f17d296" (UID: "d2534d4c-181b-45a2-8fec-118b7f17d296"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.209294 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2534d4c-181b-45a2-8fec-118b7f17d296-kube-api-access-ttkrg" (OuterVolumeSpecName: "kube-api-access-ttkrg") pod "d2534d4c-181b-45a2-8fec-118b7f17d296" (UID: "d2534d4c-181b-45a2-8fec-118b7f17d296"). InnerVolumeSpecName "kube-api-access-ttkrg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.305201 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2534d4c-181b-45a2-8fec-118b7f17d296-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.305260 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpjzw\" (UniqueName: \"kubernetes.io/projected/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-kube-api-access-vpjzw\") on node \"crc\" DevicePath \"\"" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.305274 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb8ba07b-804c-4712-9215-6c3ea4f0d96c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.305287 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttkrg\" (UniqueName: \"kubernetes.io/projected/d2534d4c-181b-45a2-8fec-118b7f17d296-kube-api-access-ttkrg\") on node \"crc\" DevicePath \"\"" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.305299 4945 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2534d4c-181b-45a2-8fec-118b7f17d296-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.861163 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9"] Jan 08 23:20:27 crc kubenswrapper[4945]: E0108 23:20:27.861682 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.861708 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 08 23:20:27 crc kubenswrapper[4945]: E0108 23:20:27.861736 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8ba07b-804c-4712-9215-6c3ea4f0d96c" containerName="route-controller-manager" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.861748 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8ba07b-804c-4712-9215-6c3ea4f0d96c" containerName="route-controller-manager" Jan 08 23:20:27 crc kubenswrapper[4945]: E0108 23:20:27.861769 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e648b5-5638-4b26-beff-c0f2bac003fc" containerName="installer" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.861779 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e648b5-5638-4b26-beff-c0f2bac003fc" containerName="installer" Jan 08 23:20:27 crc kubenswrapper[4945]: E0108 23:20:27.861814 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2534d4c-181b-45a2-8fec-118b7f17d296" containerName="controller-manager" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.861826 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2534d4c-181b-45a2-8fec-118b7f17d296" containerName="controller-manager" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.862020 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e648b5-5638-4b26-beff-c0f2bac003fc" containerName="installer" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.862039 4945 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d2534d4c-181b-45a2-8fec-118b7f17d296" containerName="controller-manager" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.862057 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.862072 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb8ba07b-804c-4712-9215-6c3ea4f0d96c" containerName="route-controller-manager" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.862948 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.870755 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79d75cf94f-cfvzl"] Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.872149 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.877357 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79d75cf94f-cfvzl"] Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.882282 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9"] Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.919605 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f509a493-d9f0-4452-9786-6b1141ede005-config\") pod \"controller-manager-79d75cf94f-cfvzl\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.919827 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70389482-137d-4d22-b562-ec0376809925-config\") pod \"route-controller-manager-c886cb488-tcdj9\" (UID: \"70389482-137d-4d22-b562-ec0376809925\") " pod="openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.919858 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70389482-137d-4d22-b562-ec0376809925-client-ca\") pod \"route-controller-manager-c886cb488-tcdj9\" (UID: \"70389482-137d-4d22-b562-ec0376809925\") " pod="openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.919884 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70389482-137d-4d22-b562-ec0376809925-serving-cert\") pod \"route-controller-manager-c886cb488-tcdj9\" (UID: \"70389482-137d-4d22-b562-ec0376809925\") " pod="openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.919906 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5mhs\" (UniqueName: \"kubernetes.io/projected/f509a493-d9f0-4452-9786-6b1141ede005-kube-api-access-k5mhs\") 
pod \"controller-manager-79d75cf94f-cfvzl\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.919929 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f509a493-d9f0-4452-9786-6b1141ede005-client-ca\") pod \"controller-manager-79d75cf94f-cfvzl\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.920236 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f509a493-d9f0-4452-9786-6b1141ede005-serving-cert\") pod \"controller-manager-79d75cf94f-cfvzl\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.920378 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f509a493-d9f0-4452-9786-6b1141ede005-proxy-ca-bundles\") pod \"controller-manager-79d75cf94f-cfvzl\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.920459 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg65q\" (UniqueName: \"kubernetes.io/projected/70389482-137d-4d22-b562-ec0376809925-kube-api-access-cg65q\") pod \"route-controller-manager-c886cb488-tcdj9\" (UID: \"70389482-137d-4d22-b562-ec0376809925\") " pod="openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.951144 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" event={"ID":"d2534d4c-181b-45a2-8fec-118b7f17d296","Type":"ContainerDied","Data":"9682c12e0282266ab15b35f67a57576e762a341879b45217d9ab60f15b5a64e3"} Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.951229 4945 scope.go:117] "RemoveContainer" containerID="8462206c011a7e9b68d92a88f253129996851fd3f4e310f5feb241a57284aa27" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.951720 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gvrb5" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.953966 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" event={"ID":"cb8ba07b-804c-4712-9215-6c3ea4f0d96c","Type":"ContainerDied","Data":"b4528f1c877f27a86c70be611cd16c2a7e45ed80d08762adf3759e7ebc4fa572"} Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.954087 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.972649 4945 scope.go:117] "RemoveContainer" containerID="0221f941b02ab0b413610f25457de4876b70f88a740026465e92699eb83fdac5" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.991315 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9"] Jan 08 23:20:27 crc kubenswrapper[4945]: E0108 23:20:27.992309 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-cg65q serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9" podUID="70389482-137d-4d22-b562-ec0376809925" Jan 08 23:20:27 crc kubenswrapper[4945]: I0108 23:20:27.998178 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79d75cf94f-cfvzl"] Jan 08 23:20:27 crc kubenswrapper[4945]: E0108 23:20:27.998879 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-k5mhs proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" podUID="f509a493-d9f0-4452-9786-6b1141ede005" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.002068 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gvrb5"] Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.015045 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gvrb5"] Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.022353 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70389482-137d-4d22-b562-ec0376809925-client-ca\") pod \"route-controller-manager-c886cb488-tcdj9\" (UID: \"70389482-137d-4d22-b562-ec0376809925\") " pod="openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.022417 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70389482-137d-4d22-b562-ec0376809925-serving-cert\") pod \"route-controller-manager-c886cb488-tcdj9\" (UID: \"70389482-137d-4d22-b562-ec0376809925\") " pod="openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.022455 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5mhs\" (UniqueName: \"kubernetes.io/projected/f509a493-d9f0-4452-9786-6b1141ede005-kube-api-access-k5mhs\") pod \"controller-manager-79d75cf94f-cfvzl\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.022502 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f509a493-d9f0-4452-9786-6b1141ede005-client-ca\") pod \"controller-manager-79d75cf94f-cfvzl\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " 
pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.022538 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f509a493-d9f0-4452-9786-6b1141ede005-serving-cert\") pod \"controller-manager-79d75cf94f-cfvzl\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.022617 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f509a493-d9f0-4452-9786-6b1141ede005-proxy-ca-bundles\") pod \"controller-manager-79d75cf94f-cfvzl\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.022668 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg65q\" (UniqueName: \"kubernetes.io/projected/70389482-137d-4d22-b562-ec0376809925-kube-api-access-cg65q\") pod \"route-controller-manager-c886cb488-tcdj9\" (UID: \"70389482-137d-4d22-b562-ec0376809925\") " pod="openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.022753 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f509a493-d9f0-4452-9786-6b1141ede005-config\") pod \"controller-manager-79d75cf94f-cfvzl\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.022801 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70389482-137d-4d22-b562-ec0376809925-config\") pod \"route-controller-manager-c886cb488-tcdj9\" (UID: \"70389482-137d-4d22-b562-ec0376809925\") " pod="openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.024065 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f509a493-d9f0-4452-9786-6b1141ede005-client-ca\") pod \"controller-manager-79d75cf94f-cfvzl\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.024529 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70389482-137d-4d22-b562-ec0376809925-config\") pod \"route-controller-manager-c886cb488-tcdj9\" (UID: \"70389482-137d-4d22-b562-ec0376809925\") " pod="openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.025785 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70389482-137d-4d22-b562-ec0376809925-client-ca\") pod \"route-controller-manager-c886cb488-tcdj9\" (UID: \"70389482-137d-4d22-b562-ec0376809925\") " pod="openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.026809 4945 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f509a493-d9f0-4452-9786-6b1141ede005-proxy-ca-bundles\") pod \"controller-manager-79d75cf94f-cfvzl\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.027044 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f509a493-d9f0-4452-9786-6b1141ede005-config\") pod \"controller-manager-79d75cf94f-cfvzl\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.038094 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv"] Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.043901 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70389482-137d-4d22-b562-ec0376809925-serving-cert\") pod \"route-controller-manager-c886cb488-tcdj9\" (UID: \"70389482-137d-4d22-b562-ec0376809925\") " pod="openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.045292 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5mhs\" (UniqueName: \"kubernetes.io/projected/f509a493-d9f0-4452-9786-6b1141ede005-kube-api-access-k5mhs\") pod \"controller-manager-79d75cf94f-cfvzl\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.047366 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg65q\" (UniqueName: \"kubernetes.io/projected/70389482-137d-4d22-b562-ec0376809925-kube-api-access-cg65q\") pod \"route-controller-manager-c886cb488-tcdj9\" (UID: \"70389482-137d-4d22-b562-ec0376809925\") " pod="openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.047594 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s87cv"] Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.060021 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f509a493-d9f0-4452-9786-6b1141ede005-serving-cert\") pod \"controller-manager-79d75cf94f-cfvzl\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.961141 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.962394 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.975668 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9" Jan 08 23:20:28 crc kubenswrapper[4945]: I0108 23:20:28.983045 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl" Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.137670 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f509a493-d9f0-4452-9786-6b1141ede005-client-ca\") pod \"f509a493-d9f0-4452-9786-6b1141ede005\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.137741 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70389482-137d-4d22-b562-ec0376809925-config\") pod \"70389482-137d-4d22-b562-ec0376809925\" (UID: \"70389482-137d-4d22-b562-ec0376809925\") " Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.137760 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70389482-137d-4d22-b562-ec0376809925-client-ca\") pod \"70389482-137d-4d22-b562-ec0376809925\" (UID: \"70389482-137d-4d22-b562-ec0376809925\") " Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.137795 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f509a493-d9f0-4452-9786-6b1141ede005-serving-cert\") pod \"f509a493-d9f0-4452-9786-6b1141ede005\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.137827 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f509a493-d9f0-4452-9786-6b1141ede005-config\") pod \"f509a493-d9f0-4452-9786-6b1141ede005\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.137845 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg65q\" (UniqueName: \"kubernetes.io/projected/70389482-137d-4d22-b562-ec0376809925-kube-api-access-cg65q\") pod \"70389482-137d-4d22-b562-ec0376809925\" (UID: \"70389482-137d-4d22-b562-ec0376809925\") " Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.137865 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f509a493-d9f0-4452-9786-6b1141ede005-proxy-ca-bundles\") pod \"f509a493-d9f0-4452-9786-6b1141ede005\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.137901 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70389482-137d-4d22-b562-ec0376809925-serving-cert\") pod \"70389482-137d-4d22-b562-ec0376809925\" (UID: \"70389482-137d-4d22-b562-ec0376809925\") " Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.137921 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5mhs\" (UniqueName: \"kubernetes.io/projected/f509a493-d9f0-4452-9786-6b1141ede005-kube-api-access-k5mhs\") pod \"f509a493-d9f0-4452-9786-6b1141ede005\" (UID: \"f509a493-d9f0-4452-9786-6b1141ede005\") " Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.139406 4945 
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.139529 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70389482-137d-4d22-b562-ec0376809925-client-ca" (OuterVolumeSpecName: "client-ca") pod "70389482-137d-4d22-b562-ec0376809925" (UID: "70389482-137d-4d22-b562-ec0376809925"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.139758 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f509a493-d9f0-4452-9786-6b1141ede005-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f509a493-d9f0-4452-9786-6b1141ede005" (UID: "f509a493-d9f0-4452-9786-6b1141ede005"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.140242 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f509a493-d9f0-4452-9786-6b1141ede005-client-ca" (OuterVolumeSpecName: "client-ca") pod "f509a493-d9f0-4452-9786-6b1141ede005" (UID: "f509a493-d9f0-4452-9786-6b1141ede005"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.140798 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70389482-137d-4d22-b562-ec0376809925-config" (OuterVolumeSpecName: "config") pod "70389482-137d-4d22-b562-ec0376809925" (UID: "70389482-137d-4d22-b562-ec0376809925"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.143674 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f509a493-d9f0-4452-9786-6b1141ede005-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f509a493-d9f0-4452-9786-6b1141ede005" (UID: "f509a493-d9f0-4452-9786-6b1141ede005"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.143973 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f509a493-d9f0-4452-9786-6b1141ede005-kube-api-access-k5mhs" (OuterVolumeSpecName: "kube-api-access-k5mhs") pod "f509a493-d9f0-4452-9786-6b1141ede005" (UID: "f509a493-d9f0-4452-9786-6b1141ede005"). InnerVolumeSpecName "kube-api-access-k5mhs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.144895 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70389482-137d-4d22-b562-ec0376809925-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "70389482-137d-4d22-b562-ec0376809925" (UID: "70389482-137d-4d22-b562-ec0376809925"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.153249 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70389482-137d-4d22-b562-ec0376809925-kube-api-access-cg65q" (OuterVolumeSpecName: "kube-api-access-cg65q") pod "70389482-137d-4d22-b562-ec0376809925" (UID: "70389482-137d-4d22-b562-ec0376809925"). InnerVolumeSpecName "kube-api-access-cg65q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.239743 4945 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f509a493-d9f0-4452-9786-6b1141ede005-client-ca\") on node \"crc\" DevicePath \"\""
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.239780 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70389482-137d-4d22-b562-ec0376809925-config\") on node \"crc\" DevicePath \"\""
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.239792 4945 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70389482-137d-4d22-b562-ec0376809925-client-ca\") on node \"crc\" DevicePath \"\""
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.239801 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f509a493-d9f0-4452-9786-6b1141ede005-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.239811 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f509a493-d9f0-4452-9786-6b1141ede005-config\") on node \"crc\" DevicePath \"\""
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.239822 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg65q\" (UniqueName: \"kubernetes.io/projected/70389482-137d-4d22-b562-ec0376809925-kube-api-access-cg65q\") on node \"crc\" DevicePath \"\""
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.239832 4945 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f509a493-d9f0-4452-9786-6b1141ede005-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.239842 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5mhs\" (UniqueName: \"kubernetes.io/projected/f509a493-d9f0-4452-9786-6b1141ede005-kube-api-access-k5mhs\") on node \"crc\" DevicePath \"\""
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.239851 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70389482-137d-4d22-b562-ec0376809925-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.861771 4945 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.966886 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79d75cf94f-cfvzl"
Jan 08 23:20:29 crc kubenswrapper[4945]: I0108 23:20:29.966942 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9"
Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.047042 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb8ba07b-804c-4712-9215-6c3ea4f0d96c" path="/var/lib/kubelet/pods/cb8ba07b-804c-4712-9215-6c3ea4f0d96c/volumes"
Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.048054 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2534d4c-181b-45a2-8fec-118b7f17d296" path="/var/lib/kubelet/pods/d2534d4c-181b-45a2-8fec-118b7f17d296/volumes"
Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.049736 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp"]
Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.051785 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp"
Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.055186 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9"]
Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.059831 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.061459 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c886cb488-tcdj9"]
Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.062099 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.062239 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.064825 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp"]
Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.083026 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.083414 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.083340 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.100732 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79d75cf94f-cfvzl"]
Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.106725 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-79d75cf94f-cfvzl"]
Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.253653 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-client-ca\") pod \"route-controller-manager-75f7b94877-85txp\" (UID: \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp"
pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.253719 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qglb7\" (UniqueName: \"kubernetes.io/projected/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-kube-api-access-qglb7\") pod \"route-controller-manager-75f7b94877-85txp\" (UID: \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.253762 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-config\") pod \"route-controller-manager-75f7b94877-85txp\" (UID: \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.253779 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-serving-cert\") pod \"route-controller-manager-75f7b94877-85txp\" (UID: \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.354702 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-config\") pod \"route-controller-manager-75f7b94877-85txp\" (UID: \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.354751 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-serving-cert\") pod \"route-controller-manager-75f7b94877-85txp\" (UID: \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.354792 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-client-ca\") pod \"route-controller-manager-75f7b94877-85txp\" (UID: \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.354841 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qglb7\" (UniqueName: \"kubernetes.io/projected/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-kube-api-access-qglb7\") pod \"route-controller-manager-75f7b94877-85txp\" (UID: \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.356177 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-config\") pod \"route-controller-manager-75f7b94877-85txp\" (UID: \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\") " 
pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.357161 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-client-ca\") pod \"route-controller-manager-75f7b94877-85txp\" (UID: \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.370404 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-serving-cert\") pod \"route-controller-manager-75f7b94877-85txp\" (UID: \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.373047 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qglb7\" (UniqueName: \"kubernetes.io/projected/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-kube-api-access-qglb7\") pod \"route-controller-manager-75f7b94877-85txp\" (UID: \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\") " pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.383384 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.648534 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp"] Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.974604 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" event={"ID":"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f","Type":"ContainerStarted","Data":"b04a483cf04e940a3d5068ecbc0e2cdd207f35cb3c2b2820932a35e5314d3e72"} Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.974667 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" event={"ID":"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f","Type":"ContainerStarted","Data":"6b5c23420e1bc73e4d7252bd5795d0bbe8beaf2c0cf063bdabfaa88cae0066d3"} Jan 08 23:20:30 crc kubenswrapper[4945]: I0108 23:20:30.974925 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" Jan 08 23:20:31 crc kubenswrapper[4945]: I0108 23:20:31.001059 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" podStartSLOduration=4.001035387 podStartE2EDuration="4.001035387s" podCreationTimestamp="2026-01-08 23:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:20:30.997493113 +0000 UTC m=+301.308652109" watchObservedRunningTime="2026-01-08 23:20:31.001035387 +0000 UTC m=+301.312194363" Jan 08 23:20:31 crc kubenswrapper[4945]: I0108 23:20:31.431314 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" Jan 08 
23:20:32 crc kubenswrapper[4945]: I0108 23:20:32.007292 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70389482-137d-4d22-b562-ec0376809925" path="/var/lib/kubelet/pods/70389482-137d-4d22-b562-ec0376809925/volumes" Jan 08 23:20:32 crc kubenswrapper[4945]: I0108 23:20:32.008360 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f509a493-d9f0-4452-9786-6b1141ede005" path="/var/lib/kubelet/pods/f509a493-d9f0-4452-9786-6b1141ede005/volumes" Jan 08 23:20:32 crc kubenswrapper[4945]: I0108 23:20:32.855131 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm"] Jan 08 23:20:32 crc kubenswrapper[4945]: I0108 23:20:32.856030 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:32 crc kubenswrapper[4945]: I0108 23:20:32.858370 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 08 23:20:32 crc kubenswrapper[4945]: I0108 23:20:32.858420 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 08 23:20:32 crc kubenswrapper[4945]: I0108 23:20:32.859832 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 08 23:20:32 crc kubenswrapper[4945]: I0108 23:20:32.860033 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 08 23:20:32 crc kubenswrapper[4945]: I0108 23:20:32.860777 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 08 23:20:32 crc kubenswrapper[4945]: I0108 23:20:32.861415 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 08 23:20:32 crc kubenswrapper[4945]: I0108 23:20:32.866143 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 08 23:20:32 crc kubenswrapper[4945]: I0108 23:20:32.866392 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm"] Jan 08 23:20:32 crc kubenswrapper[4945]: I0108 23:20:32.930099 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-client-ca\") pod \"controller-manager-5c8f4f45cc-m78lm\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:32 crc kubenswrapper[4945]: I0108 23:20:32.930150 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v27hl\" (UniqueName: \"kubernetes.io/projected/a55c0047-cca2-4616-b4d4-8cb0baf0b332-kube-api-access-v27hl\") pod \"controller-manager-5c8f4f45cc-m78lm\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:32 crc kubenswrapper[4945]: I0108 23:20:32.930225 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a55c0047-cca2-4616-b4d4-8cb0baf0b332-serving-cert\") pod 
\"controller-manager-5c8f4f45cc-m78lm\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:32 crc kubenswrapper[4945]: I0108 23:20:32.930302 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-proxy-ca-bundles\") pod \"controller-manager-5c8f4f45cc-m78lm\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:32 crc kubenswrapper[4945]: I0108 23:20:32.930350 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-config\") pod \"controller-manager-5c8f4f45cc-m78lm\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:33 crc kubenswrapper[4945]: I0108 23:20:33.031108 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-client-ca\") pod \"controller-manager-5c8f4f45cc-m78lm\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:33 crc kubenswrapper[4945]: I0108 23:20:33.031152 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v27hl\" (UniqueName: \"kubernetes.io/projected/a55c0047-cca2-4616-b4d4-8cb0baf0b332-kube-api-access-v27hl\") pod \"controller-manager-5c8f4f45cc-m78lm\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:33 crc kubenswrapper[4945]: I0108 23:20:33.031180 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a55c0047-cca2-4616-b4d4-8cb0baf0b332-serving-cert\") pod \"controller-manager-5c8f4f45cc-m78lm\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:33 crc kubenswrapper[4945]: I0108 23:20:33.031221 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-proxy-ca-bundles\") pod \"controller-manager-5c8f4f45cc-m78lm\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:33 crc kubenswrapper[4945]: I0108 23:20:33.031248 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-config\") pod \"controller-manager-5c8f4f45cc-m78lm\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:33 crc kubenswrapper[4945]: I0108 23:20:33.039009 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-client-ca\") pod \"controller-manager-5c8f4f45cc-m78lm\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:33 crc 
kubenswrapper[4945]: I0108 23:20:33.039652 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-proxy-ca-bundles\") pod \"controller-manager-5c8f4f45cc-m78lm\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:33 crc kubenswrapper[4945]: I0108 23:20:33.042211 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-config\") pod \"controller-manager-5c8f4f45cc-m78lm\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:33 crc kubenswrapper[4945]: I0108 23:20:33.046921 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a55c0047-cca2-4616-b4d4-8cb0baf0b332-serving-cert\") pod \"controller-manager-5c8f4f45cc-m78lm\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:33 crc kubenswrapper[4945]: I0108 23:20:33.051972 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v27hl\" (UniqueName: \"kubernetes.io/projected/a55c0047-cca2-4616-b4d4-8cb0baf0b332-kube-api-access-v27hl\") pod \"controller-manager-5c8f4f45cc-m78lm\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:33 crc kubenswrapper[4945]: I0108 23:20:33.238275 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:33 crc kubenswrapper[4945]: I0108 23:20:33.398247 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm"] Jan 08 23:20:33 crc kubenswrapper[4945]: W0108 23:20:33.401021 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda55c0047_cca2_4616_b4d4_8cb0baf0b332.slice/crio-4ffe3d5a6ae0a6c2567b2d653eeeb31dae3da85938849a75204d10c34d4a78b3 WatchSource:0}: Error finding container 4ffe3d5a6ae0a6c2567b2d653eeeb31dae3da85938849a75204d10c34d4a78b3: Status 404 returned error can't find the container with id 4ffe3d5a6ae0a6c2567b2d653eeeb31dae3da85938849a75204d10c34d4a78b3 Jan 08 23:20:33 crc kubenswrapper[4945]: I0108 23:20:33.990220 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" event={"ID":"a55c0047-cca2-4616-b4d4-8cb0baf0b332","Type":"ContainerStarted","Data":"4ea3981ee7d10cea442f00ac0540509bf2bebe0cfcad46fda738f91928990f87"} Jan 08 23:20:33 crc kubenswrapper[4945]: I0108 23:20:33.990544 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" event={"ID":"a55c0047-cca2-4616-b4d4-8cb0baf0b332","Type":"ContainerStarted","Data":"4ffe3d5a6ae0a6c2567b2d653eeeb31dae3da85938849a75204d10c34d4a78b3"} Jan 08 23:20:33 crc kubenswrapper[4945]: I0108 23:20:33.990558 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:33 crc kubenswrapper[4945]: I0108 23:20:33.994373 4945 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:20:34 crc kubenswrapper[4945]: I0108 23:20:34.005600 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" podStartSLOduration=6.005574259 podStartE2EDuration="6.005574259s" podCreationTimestamp="2026-01-08 23:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:20:34.003586092 +0000 UTC m=+304.314745048" watchObservedRunningTime="2026-01-08 23:20:34.005574259 +0000 UTC m=+304.316733205" Jan 08 23:21:06 crc kubenswrapper[4945]: I0108 23:21:06.526579 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp"] Jan 08 23:21:06 crc kubenswrapper[4945]: I0108 23:21:06.527344 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" podUID="a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f" containerName="route-controller-manager" containerID="cri-o://b04a483cf04e940a3d5068ecbc0e2cdd207f35cb3c2b2820932a35e5314d3e72" gracePeriod=30 Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.038488 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.148030 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-config\") pod \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\" (UID: \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\") " Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.148117 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-serving-cert\") pod \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\" (UID: \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\") " Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.148171 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qglb7\" (UniqueName: \"kubernetes.io/projected/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-kube-api-access-qglb7\") pod \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\" (UID: \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\") " Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.148196 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-client-ca\") pod \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\" (UID: \"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f\") " Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.149309 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-client-ca" (OuterVolumeSpecName: "client-ca") pod "a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f" (UID: "a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.149679 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-config" (OuterVolumeSpecName: "config") pod "a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f" (UID: "a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.154916 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f" (UID: "a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.155285 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-kube-api-access-qglb7" (OuterVolumeSpecName: "kube-api-access-qglb7") pod "a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f" (UID: "a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f"). InnerVolumeSpecName "kube-api-access-qglb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.156757 4945 generic.go:334] "Generic (PLEG): container finished" podID="a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f" containerID="b04a483cf04e940a3d5068ecbc0e2cdd207f35cb3c2b2820932a35e5314d3e72" exitCode=0 Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.156791 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" event={"ID":"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f","Type":"ContainerDied","Data":"b04a483cf04e940a3d5068ecbc0e2cdd207f35cb3c2b2820932a35e5314d3e72"} Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.156815 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" event={"ID":"a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f","Type":"ContainerDied","Data":"6b5c23420e1bc73e4d7252bd5795d0bbe8beaf2c0cf063bdabfaa88cae0066d3"} Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.156830 4945 scope.go:117] "RemoveContainer" containerID="b04a483cf04e940a3d5068ecbc0e2cdd207f35cb3c2b2820932a35e5314d3e72" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.156923 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.210171 4945 scope.go:117] "RemoveContainer" containerID="b04a483cf04e940a3d5068ecbc0e2cdd207f35cb3c2b2820932a35e5314d3e72" Jan 08 23:21:07 crc kubenswrapper[4945]: E0108 23:21:07.210620 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b04a483cf04e940a3d5068ecbc0e2cdd207f35cb3c2b2820932a35e5314d3e72\": container with ID starting with b04a483cf04e940a3d5068ecbc0e2cdd207f35cb3c2b2820932a35e5314d3e72 not found: ID does not exist" containerID="b04a483cf04e940a3d5068ecbc0e2cdd207f35cb3c2b2820932a35e5314d3e72" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.210666 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b04a483cf04e940a3d5068ecbc0e2cdd207f35cb3c2b2820932a35e5314d3e72"} err="failed to get container status \"b04a483cf04e940a3d5068ecbc0e2cdd207f35cb3c2b2820932a35e5314d3e72\": rpc error: code = NotFound desc = could not find container \"b04a483cf04e940a3d5068ecbc0e2cdd207f35cb3c2b2820932a35e5314d3e72\": container with ID starting with b04a483cf04e940a3d5068ecbc0e2cdd207f35cb3c2b2820932a35e5314d3e72 not found: ID does not exist" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.217644 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp"] Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.220777 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f7b94877-85txp"] Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.249128 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.249182 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.249197 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qglb7\" (UniqueName: \"kubernetes.io/projected/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-kube-api-access-qglb7\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.249206 4945 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.883472 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl"] Jan 08 23:21:07 crc kubenswrapper[4945]: E0108 23:21:07.884977 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f" containerName="route-controller-manager" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.885052 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f" containerName="route-controller-manager" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.885294 4945 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f" containerName="route-controller-manager" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.885975 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.888236 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.888256 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.888680 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.888688 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.888701 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.888680 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 08 23:21:07 crc kubenswrapper[4945]: I0108 23:21:07.893857 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl"] Jan 08 23:21:08 crc kubenswrapper[4945]: I0108 23:21:08.008535 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f" path="/var/lib/kubelet/pods/a0e3d9b5-4ae3-421a-b1a8-8d5dad7d939f/volumes" Jan 08 23:21:08 crc kubenswrapper[4945]: I0108 23:21:08.059155 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfvs7\" (UniqueName: \"kubernetes.io/projected/eae68986-33b0-463e-8106-8e43a3b2220d-kube-api-access-hfvs7\") pod \"route-controller-manager-59d9bf5b8-848vl\" (UID: \"eae68986-33b0-463e-8106-8e43a3b2220d\") " pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" Jan 08 23:21:08 crc kubenswrapper[4945]: I0108 23:21:08.059248 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eae68986-33b0-463e-8106-8e43a3b2220d-serving-cert\") pod \"route-controller-manager-59d9bf5b8-848vl\" (UID: \"eae68986-33b0-463e-8106-8e43a3b2220d\") " pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" Jan 08 23:21:08 crc kubenswrapper[4945]: I0108 23:21:08.059291 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae68986-33b0-463e-8106-8e43a3b2220d-config\") pod \"route-controller-manager-59d9bf5b8-848vl\" (UID: \"eae68986-33b0-463e-8106-8e43a3b2220d\") " pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" Jan 08 23:21:08 crc kubenswrapper[4945]: I0108 23:21:08.059412 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eae68986-33b0-463e-8106-8e43a3b2220d-client-ca\") pod 
\"route-controller-manager-59d9bf5b8-848vl\" (UID: \"eae68986-33b0-463e-8106-8e43a3b2220d\") " pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" Jan 08 23:21:08 crc kubenswrapper[4945]: I0108 23:21:08.160979 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eae68986-33b0-463e-8106-8e43a3b2220d-serving-cert\") pod \"route-controller-manager-59d9bf5b8-848vl\" (UID: \"eae68986-33b0-463e-8106-8e43a3b2220d\") " pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" Jan 08 23:21:08 crc kubenswrapper[4945]: I0108 23:21:08.161089 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae68986-33b0-463e-8106-8e43a3b2220d-config\") pod \"route-controller-manager-59d9bf5b8-848vl\" (UID: \"eae68986-33b0-463e-8106-8e43a3b2220d\") " pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" Jan 08 23:21:08 crc kubenswrapper[4945]: I0108 23:21:08.161190 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eae68986-33b0-463e-8106-8e43a3b2220d-client-ca\") pod \"route-controller-manager-59d9bf5b8-848vl\" (UID: \"eae68986-33b0-463e-8106-8e43a3b2220d\") " pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" Jan 08 23:21:08 crc kubenswrapper[4945]: I0108 23:21:08.161308 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfvs7\" (UniqueName: \"kubernetes.io/projected/eae68986-33b0-463e-8106-8e43a3b2220d-kube-api-access-hfvs7\") pod \"route-controller-manager-59d9bf5b8-848vl\" (UID: \"eae68986-33b0-463e-8106-8e43a3b2220d\") " pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" Jan 08 23:21:08 crc kubenswrapper[4945]: I0108 23:21:08.162621 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eae68986-33b0-463e-8106-8e43a3b2220d-client-ca\") pod \"route-controller-manager-59d9bf5b8-848vl\" (UID: \"eae68986-33b0-463e-8106-8e43a3b2220d\") " pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" Jan 08 23:21:08 crc kubenswrapper[4945]: I0108 23:21:08.163122 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae68986-33b0-463e-8106-8e43a3b2220d-config\") pod \"route-controller-manager-59d9bf5b8-848vl\" (UID: \"eae68986-33b0-463e-8106-8e43a3b2220d\") " pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" Jan 08 23:21:08 crc kubenswrapper[4945]: I0108 23:21:08.167895 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eae68986-33b0-463e-8106-8e43a3b2220d-serving-cert\") pod \"route-controller-manager-59d9bf5b8-848vl\" (UID: \"eae68986-33b0-463e-8106-8e43a3b2220d\") " pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" Jan 08 23:21:08 crc kubenswrapper[4945]: I0108 23:21:08.181127 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfvs7\" (UniqueName: \"kubernetes.io/projected/eae68986-33b0-463e-8106-8e43a3b2220d-kube-api-access-hfvs7\") pod \"route-controller-manager-59d9bf5b8-848vl\" (UID: \"eae68986-33b0-463e-8106-8e43a3b2220d\") " 
pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" Jan 08 23:21:08 crc kubenswrapper[4945]: I0108 23:21:08.229727 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" Jan 08 23:21:08 crc kubenswrapper[4945]: I0108 23:21:08.670059 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl"] Jan 08 23:21:09 crc kubenswrapper[4945]: I0108 23:21:09.169093 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" event={"ID":"eae68986-33b0-463e-8106-8e43a3b2220d","Type":"ContainerStarted","Data":"e4e7fbf8599c3cd81f9bb4a99eda82871cf1545bbf1d0c2962fcd0a8dda39f0d"} Jan 08 23:21:09 crc kubenswrapper[4945]: I0108 23:21:09.169134 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" event={"ID":"eae68986-33b0-463e-8106-8e43a3b2220d","Type":"ContainerStarted","Data":"7ce4598660bbc0377546e4d56096c1a348c6325a413bedbed3f7d56dba26336d"} Jan 08 23:21:09 crc kubenswrapper[4945]: I0108 23:21:09.171311 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" Jan 08 23:21:09 crc kubenswrapper[4945]: I0108 23:21:09.188905 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" podStartSLOduration=3.188890891 podStartE2EDuration="3.188890891s" podCreationTimestamp="2026-01-08 23:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:21:09.185127709 +0000 UTC m=+339.496286665" watchObservedRunningTime="2026-01-08 23:21:09.188890891 +0000 UTC m=+339.500049837" Jan 08 23:21:09 crc kubenswrapper[4945]: I0108 23:21:09.200106 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59d9bf5b8-848vl" Jan 08 23:21:13 crc kubenswrapper[4945]: I0108 23:21:13.578173 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:21:13 crc kubenswrapper[4945]: I0108 23:21:13.578616 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.485208 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-s5tr5"] Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.486428 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.510192 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-s5tr5"] Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.624949 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5679c69a-a855-40fa-abf9-78a8d1244129-trusted-ca\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.625049 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5679c69a-a855-40fa-abf9-78a8d1244129-ca-trust-extracted\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.625092 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5679c69a-a855-40fa-abf9-78a8d1244129-registry-certificates\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.625114 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5679c69a-a855-40fa-abf9-78a8d1244129-registry-tls\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.625154 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.625452 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5679c69a-a855-40fa-abf9-78a8d1244129-installation-pull-secrets\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.625829 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh4qk\" (UniqueName: \"kubernetes.io/projected/5679c69a-a855-40fa-abf9-78a8d1244129-kube-api-access-jh4qk\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.625902 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/5679c69a-a855-40fa-abf9-78a8d1244129-bound-sa-token\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.665390 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.727423 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5679c69a-a855-40fa-abf9-78a8d1244129-registry-certificates\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.727531 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5679c69a-a855-40fa-abf9-78a8d1244129-registry-tls\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.727622 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5679c69a-a855-40fa-abf9-78a8d1244129-installation-pull-secrets\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.727720 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh4qk\" (UniqueName: \"kubernetes.io/projected/5679c69a-a855-40fa-abf9-78a8d1244129-kube-api-access-jh4qk\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.727804 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5679c69a-a855-40fa-abf9-78a8d1244129-bound-sa-token\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.727908 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5679c69a-a855-40fa-abf9-78a8d1244129-trusted-ca\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.728051 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5679c69a-a855-40fa-abf9-78a8d1244129-ca-trust-extracted\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.728985 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5679c69a-a855-40fa-abf9-78a8d1244129-ca-trust-extracted\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.730321 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5679c69a-a855-40fa-abf9-78a8d1244129-trusted-ca\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.730442 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5679c69a-a855-40fa-abf9-78a8d1244129-registry-certificates\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.739017 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5679c69a-a855-40fa-abf9-78a8d1244129-registry-tls\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.739902 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5679c69a-a855-40fa-abf9-78a8d1244129-installation-pull-secrets\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.750645 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5679c69a-a855-40fa-abf9-78a8d1244129-bound-sa-token\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.752048 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh4qk\" (UniqueName: \"kubernetes.io/projected/5679c69a-a855-40fa-abf9-78a8d1244129-kube-api-access-jh4qk\") pod \"image-registry-66df7c8f76-s5tr5\" (UID: \"5679c69a-a855-40fa-abf9-78a8d1244129\") " pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:20 crc kubenswrapper[4945]: I0108 23:21:20.809426 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:21 crc kubenswrapper[4945]: I0108 23:21:21.270653 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-s5tr5"] Jan 08 23:21:21 crc kubenswrapper[4945]: W0108 23:21:21.273746 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5679c69a_a855_40fa_abf9_78a8d1244129.slice/crio-c3d8f19c3fe14f5901c28ec0235d80a69b073c80594f8eddd3bfa9c01e8b6d56 WatchSource:0}: Error finding container c3d8f19c3fe14f5901c28ec0235d80a69b073c80594f8eddd3bfa9c01e8b6d56: Status 404 returned error can't find the container with id c3d8f19c3fe14f5901c28ec0235d80a69b073c80594f8eddd3bfa9c01e8b6d56 Jan 08 23:21:22 crc kubenswrapper[4945]: I0108 23:21:22.236828 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" event={"ID":"5679c69a-a855-40fa-abf9-78a8d1244129","Type":"ContainerStarted","Data":"4ea2b2924fd1e410ff5be11e6007345767fc5857208f40e079664406b2f7a8cd"} Jan 08 23:21:22 crc kubenswrapper[4945]: I0108 23:21:22.237231 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:22 crc kubenswrapper[4945]: I0108 23:21:22.237247 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" event={"ID":"5679c69a-a855-40fa-abf9-78a8d1244129","Type":"ContainerStarted","Data":"c3d8f19c3fe14f5901c28ec0235d80a69b073c80594f8eddd3bfa9c01e8b6d56"} Jan 08 23:21:22 crc kubenswrapper[4945]: I0108 23:21:22.273399 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" podStartSLOduration=2.273368747 podStartE2EDuration="2.273368747s" podCreationTimestamp="2026-01-08 23:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:21:22.268479338 +0000 UTC m=+352.579638324" watchObservedRunningTime="2026-01-08 23:21:22.273368747 +0000 UTC m=+352.584527733" Jan 08 23:21:26 crc kubenswrapper[4945]: I0108 23:21:26.548255 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm"] Jan 08 23:21:26 crc kubenswrapper[4945]: I0108 23:21:26.549660 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" podUID="a55c0047-cca2-4616-b4d4-8cb0baf0b332" containerName="controller-manager" containerID="cri-o://4ea3981ee7d10cea442f00ac0540509bf2bebe0cfcad46fda738f91928990f87" gracePeriod=30 Jan 08 23:21:26 crc kubenswrapper[4945]: I0108 23:21:26.931851 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.016432 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v27hl\" (UniqueName: \"kubernetes.io/projected/a55c0047-cca2-4616-b4d4-8cb0baf0b332-kube-api-access-v27hl\") pod \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.016488 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-config\") pod \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.016506 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-proxy-ca-bundles\") pod \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.017430 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a55c0047-cca2-4616-b4d4-8cb0baf0b332" (UID: "a55c0047-cca2-4616-b4d4-8cb0baf0b332"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.016544 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a55c0047-cca2-4616-b4d4-8cb0baf0b332-serving-cert\") pod \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.017509 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-client-ca\") pod \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\" (UID: \"a55c0047-cca2-4616-b4d4-8cb0baf0b332\") " Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.017461 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-config" (OuterVolumeSpecName: "config") pod "a55c0047-cca2-4616-b4d4-8cb0baf0b332" (UID: "a55c0047-cca2-4616-b4d4-8cb0baf0b332"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.017845 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-client-ca" (OuterVolumeSpecName: "client-ca") pod "a55c0047-cca2-4616-b4d4-8cb0baf0b332" (UID: "a55c0047-cca2-4616-b4d4-8cb0baf0b332"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.017855 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.017933 4945 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.021784 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a55c0047-cca2-4616-b4d4-8cb0baf0b332-kube-api-access-v27hl" (OuterVolumeSpecName: "kube-api-access-v27hl") pod "a55c0047-cca2-4616-b4d4-8cb0baf0b332" (UID: "a55c0047-cca2-4616-b4d4-8cb0baf0b332"). InnerVolumeSpecName "kube-api-access-v27hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.021887 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a55c0047-cca2-4616-b4d4-8cb0baf0b332-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a55c0047-cca2-4616-b4d4-8cb0baf0b332" (UID: "a55c0047-cca2-4616-b4d4-8cb0baf0b332"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.119739 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v27hl\" (UniqueName: \"kubernetes.io/projected/a55c0047-cca2-4616-b4d4-8cb0baf0b332-kube-api-access-v27hl\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.119783 4945 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a55c0047-cca2-4616-b4d4-8cb0baf0b332-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.119797 4945 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a55c0047-cca2-4616-b4d4-8cb0baf0b332-client-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.264464 4945 generic.go:334] "Generic (PLEG): container finished" podID="a55c0047-cca2-4616-b4d4-8cb0baf0b332" containerID="4ea3981ee7d10cea442f00ac0540509bf2bebe0cfcad46fda738f91928990f87" exitCode=0 Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.264517 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" event={"ID":"a55c0047-cca2-4616-b4d4-8cb0baf0b332","Type":"ContainerDied","Data":"4ea3981ee7d10cea442f00ac0540509bf2bebe0cfcad46fda738f91928990f87"} Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.264591 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.264613 4945 scope.go:117] "RemoveContainer" containerID="4ea3981ee7d10cea442f00ac0540509bf2bebe0cfcad46fda738f91928990f87" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.264596 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm" event={"ID":"a55c0047-cca2-4616-b4d4-8cb0baf0b332","Type":"ContainerDied","Data":"4ffe3d5a6ae0a6c2567b2d653eeeb31dae3da85938849a75204d10c34d4a78b3"} Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.282215 4945 scope.go:117] "RemoveContainer" containerID="4ea3981ee7d10cea442f00ac0540509bf2bebe0cfcad46fda738f91928990f87" Jan 08 23:21:27 crc kubenswrapper[4945]: E0108 23:21:27.282946 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ea3981ee7d10cea442f00ac0540509bf2bebe0cfcad46fda738f91928990f87\": container with ID starting with 4ea3981ee7d10cea442f00ac0540509bf2bebe0cfcad46fda738f91928990f87 not found: ID does not exist" containerID="4ea3981ee7d10cea442f00ac0540509bf2bebe0cfcad46fda738f91928990f87" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.283011 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ea3981ee7d10cea442f00ac0540509bf2bebe0cfcad46fda738f91928990f87"} err="failed to get container status \"4ea3981ee7d10cea442f00ac0540509bf2bebe0cfcad46fda738f91928990f87\": rpc error: code = NotFound desc = could not find container \"4ea3981ee7d10cea442f00ac0540509bf2bebe0cfcad46fda738f91928990f87\": container with ID starting with 4ea3981ee7d10cea442f00ac0540509bf2bebe0cfcad46fda738f91928990f87 not found: ID does not exist" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.304917 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm"] Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.307975 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c8f4f45cc-m78lm"] Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.890489 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79d75cf94f-57ttf"] Jan 08 23:21:27 crc kubenswrapper[4945]: E0108 23:21:27.890732 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55c0047-cca2-4616-b4d4-8cb0baf0b332" containerName="controller-manager" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.890744 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55c0047-cca2-4616-b4d4-8cb0baf0b332" containerName="controller-manager" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.890835 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="a55c0047-cca2-4616-b4d4-8cb0baf0b332" containerName="controller-manager" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.891214 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.893626 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.893855 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.893778 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.895208 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.895541 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.900937 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.904422 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 08 23:21:27 crc kubenswrapper[4945]: I0108 23:21:27.906737 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79d75cf94f-57ttf"] Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.008336 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a55c0047-cca2-4616-b4d4-8cb0baf0b332" path="/var/lib/kubelet/pods/a55c0047-cca2-4616-b4d4-8cb0baf0b332/volumes" Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.033633 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz9q2\" (UniqueName: \"kubernetes.io/projected/8803565a-daac-4f87-a518-813c6dead564-kube-api-access-tz9q2\") pod \"controller-manager-79d75cf94f-57ttf\" (UID: \"8803565a-daac-4f87-a518-813c6dead564\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.033731 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8803565a-daac-4f87-a518-813c6dead564-proxy-ca-bundles\") pod \"controller-manager-79d75cf94f-57ttf\" (UID: \"8803565a-daac-4f87-a518-813c6dead564\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.033794 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8803565a-daac-4f87-a518-813c6dead564-serving-cert\") pod \"controller-manager-79d75cf94f-57ttf\" (UID: \"8803565a-daac-4f87-a518-813c6dead564\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.033830 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8803565a-daac-4f87-a518-813c6dead564-config\") pod \"controller-manager-79d75cf94f-57ttf\" (UID: \"8803565a-daac-4f87-a518-813c6dead564\") " 
pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.033889 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8803565a-daac-4f87-a518-813c6dead564-client-ca\") pod \"controller-manager-79d75cf94f-57ttf\" (UID: \"8803565a-daac-4f87-a518-813c6dead564\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.135200 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8803565a-daac-4f87-a518-813c6dead564-client-ca\") pod \"controller-manager-79d75cf94f-57ttf\" (UID: \"8803565a-daac-4f87-a518-813c6dead564\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.135294 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz9q2\" (UniqueName: \"kubernetes.io/projected/8803565a-daac-4f87-a518-813c6dead564-kube-api-access-tz9q2\") pod \"controller-manager-79d75cf94f-57ttf\" (UID: \"8803565a-daac-4f87-a518-813c6dead564\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.135331 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8803565a-daac-4f87-a518-813c6dead564-proxy-ca-bundles\") pod \"controller-manager-79d75cf94f-57ttf\" (UID: \"8803565a-daac-4f87-a518-813c6dead564\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.135391 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8803565a-daac-4f87-a518-813c6dead564-serving-cert\") pod \"controller-manager-79d75cf94f-57ttf\" (UID: \"8803565a-daac-4f87-a518-813c6dead564\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.135437 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8803565a-daac-4f87-a518-813c6dead564-config\") pod \"controller-manager-79d75cf94f-57ttf\" (UID: \"8803565a-daac-4f87-a518-813c6dead564\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.136446 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8803565a-daac-4f87-a518-813c6dead564-client-ca\") pod \"controller-manager-79d75cf94f-57ttf\" (UID: \"8803565a-daac-4f87-a518-813c6dead564\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.137072 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8803565a-daac-4f87-a518-813c6dead564-proxy-ca-bundles\") pod \"controller-manager-79d75cf94f-57ttf\" (UID: \"8803565a-daac-4f87-a518-813c6dead564\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.137513 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/8803565a-daac-4f87-a518-813c6dead564-config\") pod \"controller-manager-79d75cf94f-57ttf\" (UID: \"8803565a-daac-4f87-a518-813c6dead564\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.143139 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8803565a-daac-4f87-a518-813c6dead564-serving-cert\") pod \"controller-manager-79d75cf94f-57ttf\" (UID: \"8803565a-daac-4f87-a518-813c6dead564\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.156844 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz9q2\" (UniqueName: \"kubernetes.io/projected/8803565a-daac-4f87-a518-813c6dead564-kube-api-access-tz9q2\") pod \"controller-manager-79d75cf94f-57ttf\" (UID: \"8803565a-daac-4f87-a518-813c6dead564\") " pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.209216 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:28 crc kubenswrapper[4945]: I0108 23:21:28.460043 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79d75cf94f-57ttf"] Jan 08 23:21:28 crc kubenswrapper[4945]: W0108 23:21:28.468857 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8803565a_daac_4f87_a518_813c6dead564.slice/crio-73001cd8e3c896b0616c2a81ddbb66a54dd2242dcebcd35da61d91bfd89c801a WatchSource:0}: Error finding container 73001cd8e3c896b0616c2a81ddbb66a54dd2242dcebcd35da61d91bfd89c801a: Status 404 returned error can't find the container with id 73001cd8e3c896b0616c2a81ddbb66a54dd2242dcebcd35da61d91bfd89c801a Jan 08 23:21:29 crc kubenswrapper[4945]: I0108 23:21:29.278771 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" event={"ID":"8803565a-daac-4f87-a518-813c6dead564","Type":"ContainerStarted","Data":"e1bda43cab21f84deb06cb2c8f88685bb6ff7433827be70060a015f86598b676"} Jan 08 23:21:29 crc kubenswrapper[4945]: I0108 23:21:29.279122 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:29 crc kubenswrapper[4945]: I0108 23:21:29.279138 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" event={"ID":"8803565a-daac-4f87-a518-813c6dead564","Type":"ContainerStarted","Data":"73001cd8e3c896b0616c2a81ddbb66a54dd2242dcebcd35da61d91bfd89c801a"} Jan 08 23:21:29 crc kubenswrapper[4945]: I0108 23:21:29.283045 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" Jan 08 23:21:29 crc kubenswrapper[4945]: I0108 23:21:29.293635 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-79d75cf94f-57ttf" podStartSLOduration=3.293617471 podStartE2EDuration="3.293617471s" podCreationTimestamp="2026-01-08 23:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-08 23:21:29.292100424 +0000 UTC m=+359.603259380" watchObservedRunningTime="2026-01-08 23:21:29.293617471 +0000 UTC m=+359.604776417" Jan 08 23:21:40 crc kubenswrapper[4945]: I0108 23:21:40.817346 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-s5tr5" Jan 08 23:21:40 crc kubenswrapper[4945]: I0108 23:21:40.885960 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j8vl9"] Jan 08 23:21:43 crc kubenswrapper[4945]: I0108 23:21:43.578120 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:21:43 crc kubenswrapper[4945]: I0108 23:21:43.578183 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:21:50 crc kubenswrapper[4945]: I0108 23:21:50.673567 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vgr9g"] Jan 08 23:21:50 crc kubenswrapper[4945]: I0108 23:21:50.674397 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vgr9g" podUID="dce1a9c0-149a-4062-a166-84829a9dc2ec" containerName="registry-server" containerID="cri-o://aca650a5f3e1be3992101c04038cf920fd00fa8478c5cc4a3d9f693fff506980" gracePeriod=30 Jan 08 23:21:50 crc kubenswrapper[4945]: I0108 23:21:50.684277 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w8ptz"] Jan 08 23:21:50 crc kubenswrapper[4945]: I0108 23:21:50.684510 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w8ptz" podUID="4befd52c-d042-4259-8998-533f8a61dddd" containerName="registry-server" containerID="cri-o://ef425d09ea0e1cc1ae0a5efb6a868d98ab8459d1df1e30bbf3d01c16760cd890" gracePeriod=30 Jan 08 23:21:50 crc kubenswrapper[4945]: I0108 23:21:50.692648 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vxpmj"] Jan 08 23:21:50 crc kubenswrapper[4945]: I0108 23:21:50.693056 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" podUID="93e10fcb-3cb5-454a-bcd1-1eae918e0601" containerName="marketplace-operator" containerID="cri-o://3419af9f165f4e3c3b2ec449f2bdf024cf5dac115c275078d6b033dae72b375b" gracePeriod=30 Jan 08 23:21:50 crc kubenswrapper[4945]: I0108 23:21:50.715504 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tl6cr"] Jan 08 23:21:50 crc kubenswrapper[4945]: I0108 23:21:50.717421 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tl6cr" Jan 08 23:21:50 crc kubenswrapper[4945]: I0108 23:21:50.730647 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2hvk"] Jan 08 23:21:50 crc kubenswrapper[4945]: I0108 23:21:50.731250 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r2hvk" podUID="b3238595-843e-4c3a-9e67-538483ac4b20" containerName="registry-server" containerID="cri-o://078ff84b0a35b604a2519bc7d0359013ef6bb745cb928a8a6a3e4c9397026f53" gracePeriod=30 Jan 08 23:21:50 crc kubenswrapper[4945]: I0108 23:21:50.739509 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dvcsg"] Jan 08 23:21:50 crc kubenswrapper[4945]: I0108 23:21:50.739972 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dvcsg" podUID="08cb759a-5b37-46e6-9b1f-5e84fabc66cd" containerName="registry-server" containerID="cri-o://fa6e4adbda65f6a1e74db976b974abc062dd08976a439631a705bdeb9d324021" gracePeriod=30 Jan 08 23:21:50 crc kubenswrapper[4945]: I0108 23:21:50.746422 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tl6cr"] Jan 08 23:21:50 crc kubenswrapper[4945]: I0108 23:21:50.903550 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/776b72cb-be81-499c-9a26-09ce115d3b8e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tl6cr\" (UID: \"776b72cb-be81-499c-9a26-09ce115d3b8e\") " pod="openshift-marketplace/marketplace-operator-79b997595-tl6cr" Jan 08 23:21:50 crc kubenswrapper[4945]: I0108 23:21:50.903601 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/776b72cb-be81-499c-9a26-09ce115d3b8e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tl6cr\" (UID: \"776b72cb-be81-499c-9a26-09ce115d3b8e\") " pod="openshift-marketplace/marketplace-operator-79b997595-tl6cr" Jan 08 23:21:50 crc kubenswrapper[4945]: I0108 23:21:50.903661 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tbw8\" (UniqueName: \"kubernetes.io/projected/776b72cb-be81-499c-9a26-09ce115d3b8e-kube-api-access-9tbw8\") pod \"marketplace-operator-79b997595-tl6cr\" (UID: \"776b72cb-be81-499c-9a26-09ce115d3b8e\") " pod="openshift-marketplace/marketplace-operator-79b997595-tl6cr" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.004607 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/776b72cb-be81-499c-9a26-09ce115d3b8e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tl6cr\" (UID: \"776b72cb-be81-499c-9a26-09ce115d3b8e\") " pod="openshift-marketplace/marketplace-operator-79b997595-tl6cr" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.005539 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/776b72cb-be81-499c-9a26-09ce115d3b8e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tl6cr\" (UID: \"776b72cb-be81-499c-9a26-09ce115d3b8e\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-tl6cr" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.005639 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tbw8\" (UniqueName: \"kubernetes.io/projected/776b72cb-be81-499c-9a26-09ce115d3b8e-kube-api-access-9tbw8\") pod \"marketplace-operator-79b997595-tl6cr\" (UID: \"776b72cb-be81-499c-9a26-09ce115d3b8e\") " pod="openshift-marketplace/marketplace-operator-79b997595-tl6cr" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.006661 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/776b72cb-be81-499c-9a26-09ce115d3b8e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tl6cr\" (UID: \"776b72cb-be81-499c-9a26-09ce115d3b8e\") " pod="openshift-marketplace/marketplace-operator-79b997595-tl6cr" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.011307 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/776b72cb-be81-499c-9a26-09ce115d3b8e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tl6cr\" (UID: \"776b72cb-be81-499c-9a26-09ce115d3b8e\") " pod="openshift-marketplace/marketplace-operator-79b997595-tl6cr" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.024737 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tbw8\" (UniqueName: \"kubernetes.io/projected/776b72cb-be81-499c-9a26-09ce115d3b8e-kube-api-access-9tbw8\") pod \"marketplace-operator-79b997595-tl6cr\" (UID: \"776b72cb-be81-499c-9a26-09ce115d3b8e\") " pod="openshift-marketplace/marketplace-operator-79b997595-tl6cr" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.158054 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tl6cr" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.207103 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vgr9g" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.283215 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w8ptz" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.311833 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce1a9c0-149a-4062-a166-84829a9dc2ec-catalog-content\") pod \"dce1a9c0-149a-4062-a166-84829a9dc2ec\" (UID: \"dce1a9c0-149a-4062-a166-84829a9dc2ec\") " Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.311926 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9l78\" (UniqueName: \"kubernetes.io/projected/dce1a9c0-149a-4062-a166-84829a9dc2ec-kube-api-access-w9l78\") pod \"dce1a9c0-149a-4062-a166-84829a9dc2ec\" (UID: \"dce1a9c0-149a-4062-a166-84829a9dc2ec\") " Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.311984 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce1a9c0-149a-4062-a166-84829a9dc2ec-utilities\") pod \"dce1a9c0-149a-4062-a166-84829a9dc2ec\" (UID: \"dce1a9c0-149a-4062-a166-84829a9dc2ec\") " Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.314481 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dce1a9c0-149a-4062-a166-84829a9dc2ec-utilities" (OuterVolumeSpecName: "utilities") pod "dce1a9c0-149a-4062-a166-84829a9dc2ec" (UID: "dce1a9c0-149a-4062-a166-84829a9dc2ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.317790 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce1a9c0-149a-4062-a166-84829a9dc2ec-kube-api-access-w9l78" (OuterVolumeSpecName: "kube-api-access-w9l78") pod "dce1a9c0-149a-4062-a166-84829a9dc2ec" (UID: "dce1a9c0-149a-4062-a166-84829a9dc2ec"). InnerVolumeSpecName "kube-api-access-w9l78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.368557 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dce1a9c0-149a-4062-a166-84829a9dc2ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dce1a9c0-149a-4062-a166-84829a9dc2ec" (UID: "dce1a9c0-149a-4062-a166-84829a9dc2ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.375467 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.383164 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2hvk" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.406736 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dvcsg" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.412799 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxsl6\" (UniqueName: \"kubernetes.io/projected/4befd52c-d042-4259-8998-533f8a61dddd-kube-api-access-qxsl6\") pod \"4befd52c-d042-4259-8998-533f8a61dddd\" (UID: \"4befd52c-d042-4259-8998-533f8a61dddd\") " Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.412923 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4befd52c-d042-4259-8998-533f8a61dddd-utilities\") pod \"4befd52c-d042-4259-8998-533f8a61dddd\" (UID: \"4befd52c-d042-4259-8998-533f8a61dddd\") " Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.412965 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4befd52c-d042-4259-8998-533f8a61dddd-catalog-content\") pod \"4befd52c-d042-4259-8998-533f8a61dddd\" (UID: \"4befd52c-d042-4259-8998-533f8a61dddd\") " Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.413267 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce1a9c0-149a-4062-a166-84829a9dc2ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.413284 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9l78\" (UniqueName: \"kubernetes.io/projected/dce1a9c0-149a-4062-a166-84829a9dc2ec-kube-api-access-w9l78\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.413298 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce1a9c0-149a-4062-a166-84829a9dc2ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.414308 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4befd52c-d042-4259-8998-533f8a61dddd-utilities" (OuterVolumeSpecName: "utilities") pod "4befd52c-d042-4259-8998-533f8a61dddd" (UID: "4befd52c-d042-4259-8998-533f8a61dddd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.416663 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4befd52c-d042-4259-8998-533f8a61dddd-kube-api-access-qxsl6" (OuterVolumeSpecName: "kube-api-access-qxsl6") pod "4befd52c-d042-4259-8998-533f8a61dddd" (UID: "4befd52c-d042-4259-8998-533f8a61dddd"). InnerVolumeSpecName "kube-api-access-qxsl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.427548 4945 generic.go:334] "Generic (PLEG): container finished" podID="4befd52c-d042-4259-8998-533f8a61dddd" containerID="ef425d09ea0e1cc1ae0a5efb6a868d98ab8459d1df1e30bbf3d01c16760cd890" exitCode=0 Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.427633 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ptz" event={"ID":"4befd52c-d042-4259-8998-533f8a61dddd","Type":"ContainerDied","Data":"ef425d09ea0e1cc1ae0a5efb6a868d98ab8459d1df1e30bbf3d01c16760cd890"} Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.427662 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w8ptz" event={"ID":"4befd52c-d042-4259-8998-533f8a61dddd","Type":"ContainerDied","Data":"7049bc4e75ed41544af77257d8f2b99ceddd65a320832fe99134e4a49ce8caae"} Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.427678 4945 scope.go:117] "RemoveContainer" containerID="ef425d09ea0e1cc1ae0a5efb6a868d98ab8459d1df1e30bbf3d01c16760cd890" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.427816 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w8ptz" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.433458 4945 generic.go:334] "Generic (PLEG): container finished" podID="93e10fcb-3cb5-454a-bcd1-1eae918e0601" containerID="3419af9f165f4e3c3b2ec449f2bdf024cf5dac115c275078d6b033dae72b375b" exitCode=0 Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.433549 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" event={"ID":"93e10fcb-3cb5-454a-bcd1-1eae918e0601","Type":"ContainerDied","Data":"3419af9f165f4e3c3b2ec449f2bdf024cf5dac115c275078d6b033dae72b375b"} Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.433576 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" event={"ID":"93e10fcb-3cb5-454a-bcd1-1eae918e0601","Type":"ContainerDied","Data":"2a52f99dacebd2b68164816ad6dfd5f886d4ab29ff4fa94ddbeb729a759c9832"} Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.433669 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vxpmj" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.444256 4945 generic.go:334] "Generic (PLEG): container finished" podID="dce1a9c0-149a-4062-a166-84829a9dc2ec" containerID="aca650a5f3e1be3992101c04038cf920fd00fa8478c5cc4a3d9f693fff506980" exitCode=0 Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.444324 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgr9g" event={"ID":"dce1a9c0-149a-4062-a166-84829a9dc2ec","Type":"ContainerDied","Data":"aca650a5f3e1be3992101c04038cf920fd00fa8478c5cc4a3d9f693fff506980"} Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.444352 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vgr9g" event={"ID":"dce1a9c0-149a-4062-a166-84829a9dc2ec","Type":"ContainerDied","Data":"28a2300619c10a41f003430aa291438e87f10c0db0c3c0372f8345e0911b4b59"} Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.444424 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vgr9g" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.447207 4945 generic.go:334] "Generic (PLEG): container finished" podID="08cb759a-5b37-46e6-9b1f-5e84fabc66cd" containerID="fa6e4adbda65f6a1e74db976b974abc062dd08976a439631a705bdeb9d324021" exitCode=0 Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.447279 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvcsg" event={"ID":"08cb759a-5b37-46e6-9b1f-5e84fabc66cd","Type":"ContainerDied","Data":"fa6e4adbda65f6a1e74db976b974abc062dd08976a439631a705bdeb9d324021"} Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.447311 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvcsg" event={"ID":"08cb759a-5b37-46e6-9b1f-5e84fabc66cd","Type":"ContainerDied","Data":"48f4b3308aee41f2cc112c99c80e408490f94a94b6704d2b3e5f7783c8516bb1"} Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.447496 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dvcsg" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.449158 4945 scope.go:117] "RemoveContainer" containerID="200c6d2dde2a36d314b103d486d2df52bab98bfd65b2a41c9f215f975ab7cc0a" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.452874 4945 generic.go:334] "Generic (PLEG): container finished" podID="b3238595-843e-4c3a-9e67-538483ac4b20" containerID="078ff84b0a35b604a2519bc7d0359013ef6bb745cb928a8a6a3e4c9397026f53" exitCode=0 Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.452906 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2hvk" event={"ID":"b3238595-843e-4c3a-9e67-538483ac4b20","Type":"ContainerDied","Data":"078ff84b0a35b604a2519bc7d0359013ef6bb745cb928a8a6a3e4c9397026f53"} Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.452933 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2hvk" event={"ID":"b3238595-843e-4c3a-9e67-538483ac4b20","Type":"ContainerDied","Data":"ce7b1a8050b485912488f6901dd66dbcd5d669d96aa89a7e7c5fe9e0ac733dd5"} Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.452966 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2hvk" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.465742 4945 scope.go:117] "RemoveContainer" containerID="ffc4710663e52e3d78252ae1715c2ffa1d20bd44d51d0d4485fdac7412041a05" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.483510 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4befd52c-d042-4259-8998-533f8a61dddd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4befd52c-d042-4259-8998-533f8a61dddd" (UID: "4befd52c-d042-4259-8998-533f8a61dddd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.495144 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vgr9g"] Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.498580 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vgr9g"] Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.501966 4945 scope.go:117] "RemoveContainer" containerID="ef425d09ea0e1cc1ae0a5efb6a868d98ab8459d1df1e30bbf3d01c16760cd890" Jan 08 23:21:51 crc kubenswrapper[4945]: E0108 23:21:51.502422 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef425d09ea0e1cc1ae0a5efb6a868d98ab8459d1df1e30bbf3d01c16760cd890\": container with ID starting with ef425d09ea0e1cc1ae0a5efb6a868d98ab8459d1df1e30bbf3d01c16760cd890 not found: ID does not exist" containerID="ef425d09ea0e1cc1ae0a5efb6a868d98ab8459d1df1e30bbf3d01c16760cd890" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.502458 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef425d09ea0e1cc1ae0a5efb6a868d98ab8459d1df1e30bbf3d01c16760cd890"} err="failed to get container status \"ef425d09ea0e1cc1ae0a5efb6a868d98ab8459d1df1e30bbf3d01c16760cd890\": rpc error: code = NotFound desc = could not find container \"ef425d09ea0e1cc1ae0a5efb6a868d98ab8459d1df1e30bbf3d01c16760cd890\": container with ID starting with ef425d09ea0e1cc1ae0a5efb6a868d98ab8459d1df1e30bbf3d01c16760cd890 not found: ID does not exist" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.502598 4945 scope.go:117] "RemoveContainer" containerID="200c6d2dde2a36d314b103d486d2df52bab98bfd65b2a41c9f215f975ab7cc0a" Jan 08 23:21:51 crc kubenswrapper[4945]: E0108 23:21:51.503019 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"200c6d2dde2a36d314b103d486d2df52bab98bfd65b2a41c9f215f975ab7cc0a\": container with ID starting with 200c6d2dde2a36d314b103d486d2df52bab98bfd65b2a41c9f215f975ab7cc0a not found: ID does not exist" containerID="200c6d2dde2a36d314b103d486d2df52bab98bfd65b2a41c9f215f975ab7cc0a" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.503056 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"200c6d2dde2a36d314b103d486d2df52bab98bfd65b2a41c9f215f975ab7cc0a"} err="failed to get container status \"200c6d2dde2a36d314b103d486d2df52bab98bfd65b2a41c9f215f975ab7cc0a\": rpc error: code = NotFound desc = could not find container \"200c6d2dde2a36d314b103d486d2df52bab98bfd65b2a41c9f215f975ab7cc0a\": container with ID starting with 200c6d2dde2a36d314b103d486d2df52bab98bfd65b2a41c9f215f975ab7cc0a not found: ID does not exist" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.503084 4945 scope.go:117] "RemoveContainer" containerID="ffc4710663e52e3d78252ae1715c2ffa1d20bd44d51d0d4485fdac7412041a05" Jan 08 23:21:51 crc kubenswrapper[4945]: E0108 23:21:51.503354 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffc4710663e52e3d78252ae1715c2ffa1d20bd44d51d0d4485fdac7412041a05\": container with ID starting with ffc4710663e52e3d78252ae1715c2ffa1d20bd44d51d0d4485fdac7412041a05 not found: ID does not exist" containerID="ffc4710663e52e3d78252ae1715c2ffa1d20bd44d51d0d4485fdac7412041a05" Jan 08 23:21:51 crc 
kubenswrapper[4945]: I0108 23:21:51.503374 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc4710663e52e3d78252ae1715c2ffa1d20bd44d51d0d4485fdac7412041a05"} err="failed to get container status \"ffc4710663e52e3d78252ae1715c2ffa1d20bd44d51d0d4485fdac7412041a05\": rpc error: code = NotFound desc = could not find container \"ffc4710663e52e3d78252ae1715c2ffa1d20bd44d51d0d4485fdac7412041a05\": container with ID starting with ffc4710663e52e3d78252ae1715c2ffa1d20bd44d51d0d4485fdac7412041a05 not found: ID does not exist" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.503388 4945 scope.go:117] "RemoveContainer" containerID="3419af9f165f4e3c3b2ec449f2bdf024cf5dac115c275078d6b033dae72b375b" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.516937 4945 scope.go:117] "RemoveContainer" containerID="9c96693f33518d8095c526bdc3e215f17acbc21b0de6cab3a6de37d37615faa8" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.517525 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93e10fcb-3cb5-454a-bcd1-1eae918e0601-marketplace-trusted-ca\") pod \"93e10fcb-3cb5-454a-bcd1-1eae918e0601\" (UID: \"93e10fcb-3cb5-454a-bcd1-1eae918e0601\") " Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.517584 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-catalog-content\") pod \"08cb759a-5b37-46e6-9b1f-5e84fabc66cd\" (UID: \"08cb759a-5b37-46e6-9b1f-5e84fabc66cd\") " Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.517636 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjlsb\" (UniqueName: \"kubernetes.io/projected/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-kube-api-access-zjlsb\") pod \"08cb759a-5b37-46e6-9b1f-5e84fabc66cd\" (UID: \"08cb759a-5b37-46e6-9b1f-5e84fabc66cd\") " Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.517673 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93e10fcb-3cb5-454a-bcd1-1eae918e0601-marketplace-operator-metrics\") pod \"93e10fcb-3cb5-454a-bcd1-1eae918e0601\" (UID: \"93e10fcb-3cb5-454a-bcd1-1eae918e0601\") " Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.517699 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t2pl\" (UniqueName: \"kubernetes.io/projected/b3238595-843e-4c3a-9e67-538483ac4b20-kube-api-access-4t2pl\") pod \"b3238595-843e-4c3a-9e67-538483ac4b20\" (UID: \"b3238595-843e-4c3a-9e67-538483ac4b20\") " Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.517737 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdgb9\" (UniqueName: \"kubernetes.io/projected/93e10fcb-3cb5-454a-bcd1-1eae918e0601-kube-api-access-hdgb9\") pod \"93e10fcb-3cb5-454a-bcd1-1eae918e0601\" (UID: \"93e10fcb-3cb5-454a-bcd1-1eae918e0601\") " Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.517760 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3238595-843e-4c3a-9e67-538483ac4b20-catalog-content\") pod \"b3238595-843e-4c3a-9e67-538483ac4b20\" (UID: \"b3238595-843e-4c3a-9e67-538483ac4b20\") " Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 
23:21:51.517788 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3238595-843e-4c3a-9e67-538483ac4b20-utilities\") pod \"b3238595-843e-4c3a-9e67-538483ac4b20\" (UID: \"b3238595-843e-4c3a-9e67-538483ac4b20\") " Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.517865 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-utilities\") pod \"08cb759a-5b37-46e6-9b1f-5e84fabc66cd\" (UID: \"08cb759a-5b37-46e6-9b1f-5e84fabc66cd\") " Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.518142 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93e10fcb-3cb5-454a-bcd1-1eae918e0601-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "93e10fcb-3cb5-454a-bcd1-1eae918e0601" (UID: "93e10fcb-3cb5-454a-bcd1-1eae918e0601"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.518259 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxsl6\" (UniqueName: \"kubernetes.io/projected/4befd52c-d042-4259-8998-533f8a61dddd-kube-api-access-qxsl6\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.518283 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4befd52c-d042-4259-8998-533f8a61dddd-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.518296 4945 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93e10fcb-3cb5-454a-bcd1-1eae918e0601-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.518304 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4befd52c-d042-4259-8998-533f8a61dddd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.520581 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-kube-api-access-zjlsb" (OuterVolumeSpecName: "kube-api-access-zjlsb") pod "08cb759a-5b37-46e6-9b1f-5e84fabc66cd" (UID: "08cb759a-5b37-46e6-9b1f-5e84fabc66cd"). InnerVolumeSpecName "kube-api-access-zjlsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.520744 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3238595-843e-4c3a-9e67-538483ac4b20-utilities" (OuterVolumeSpecName: "utilities") pod "b3238595-843e-4c3a-9e67-538483ac4b20" (UID: "b3238595-843e-4c3a-9e67-538483ac4b20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.520791 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-utilities" (OuterVolumeSpecName: "utilities") pod "08cb759a-5b37-46e6-9b1f-5e84fabc66cd" (UID: "08cb759a-5b37-46e6-9b1f-5e84fabc66cd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.523193 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e10fcb-3cb5-454a-bcd1-1eae918e0601-kube-api-access-hdgb9" (OuterVolumeSpecName: "kube-api-access-hdgb9") pod "93e10fcb-3cb5-454a-bcd1-1eae918e0601" (UID: "93e10fcb-3cb5-454a-bcd1-1eae918e0601"). InnerVolumeSpecName "kube-api-access-hdgb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.523396 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3238595-843e-4c3a-9e67-538483ac4b20-kube-api-access-4t2pl" (OuterVolumeSpecName: "kube-api-access-4t2pl") pod "b3238595-843e-4c3a-9e67-538483ac4b20" (UID: "b3238595-843e-4c3a-9e67-538483ac4b20"). InnerVolumeSpecName "kube-api-access-4t2pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.523423 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e10fcb-3cb5-454a-bcd1-1eae918e0601-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "93e10fcb-3cb5-454a-bcd1-1eae918e0601" (UID: "93e10fcb-3cb5-454a-bcd1-1eae918e0601"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.530240 4945 scope.go:117] "RemoveContainer" containerID="3419af9f165f4e3c3b2ec449f2bdf024cf5dac115c275078d6b033dae72b375b" Jan 08 23:21:51 crc kubenswrapper[4945]: E0108 23:21:51.530609 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3419af9f165f4e3c3b2ec449f2bdf024cf5dac115c275078d6b033dae72b375b\": container with ID starting with 3419af9f165f4e3c3b2ec449f2bdf024cf5dac115c275078d6b033dae72b375b not found: ID does not exist" containerID="3419af9f165f4e3c3b2ec449f2bdf024cf5dac115c275078d6b033dae72b375b" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.530650 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3419af9f165f4e3c3b2ec449f2bdf024cf5dac115c275078d6b033dae72b375b"} err="failed to get container status \"3419af9f165f4e3c3b2ec449f2bdf024cf5dac115c275078d6b033dae72b375b\": rpc error: code = NotFound desc = could not find container \"3419af9f165f4e3c3b2ec449f2bdf024cf5dac115c275078d6b033dae72b375b\": container with ID starting with 3419af9f165f4e3c3b2ec449f2bdf024cf5dac115c275078d6b033dae72b375b not found: ID does not exist" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.530676 4945 scope.go:117] "RemoveContainer" containerID="9c96693f33518d8095c526bdc3e215f17acbc21b0de6cab3a6de37d37615faa8" Jan 08 23:21:51 crc kubenswrapper[4945]: E0108 23:21:51.530893 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c96693f33518d8095c526bdc3e215f17acbc21b0de6cab3a6de37d37615faa8\": container with ID starting with 9c96693f33518d8095c526bdc3e215f17acbc21b0de6cab3a6de37d37615faa8 not found: ID does not exist" containerID="9c96693f33518d8095c526bdc3e215f17acbc21b0de6cab3a6de37d37615faa8" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.530920 4945 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9c96693f33518d8095c526bdc3e215f17acbc21b0de6cab3a6de37d37615faa8"} err="failed to get container status \"9c96693f33518d8095c526bdc3e215f17acbc21b0de6cab3a6de37d37615faa8\": rpc error: code = NotFound desc = could not find container \"9c96693f33518d8095c526bdc3e215f17acbc21b0de6cab3a6de37d37615faa8\": container with ID starting with 9c96693f33518d8095c526bdc3e215f17acbc21b0de6cab3a6de37d37615faa8 not found: ID does not exist" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.530938 4945 scope.go:117] "RemoveContainer" containerID="aca650a5f3e1be3992101c04038cf920fd00fa8478c5cc4a3d9f693fff506980" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.544070 4945 scope.go:117] "RemoveContainer" containerID="b976bbce83ffc93b3c7696d5f3304fd605d75f6a0f91a7bfe410982bb2723d33" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.553650 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3238595-843e-4c3a-9e67-538483ac4b20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3238595-843e-4c3a-9e67-538483ac4b20" (UID: "b3238595-843e-4c3a-9e67-538483ac4b20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.563083 4945 scope.go:117] "RemoveContainer" containerID="57ea6aa0feed40cd1806b79962c3d3a03cdd127ee1cf45603accf98408566356" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.577639 4945 scope.go:117] "RemoveContainer" containerID="aca650a5f3e1be3992101c04038cf920fd00fa8478c5cc4a3d9f693fff506980" Jan 08 23:21:51 crc kubenswrapper[4945]: E0108 23:21:51.578109 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aca650a5f3e1be3992101c04038cf920fd00fa8478c5cc4a3d9f693fff506980\": container with ID starting with aca650a5f3e1be3992101c04038cf920fd00fa8478c5cc4a3d9f693fff506980 not found: ID does not exist" containerID="aca650a5f3e1be3992101c04038cf920fd00fa8478c5cc4a3d9f693fff506980" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.578146 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aca650a5f3e1be3992101c04038cf920fd00fa8478c5cc4a3d9f693fff506980"} err="failed to get container status \"aca650a5f3e1be3992101c04038cf920fd00fa8478c5cc4a3d9f693fff506980\": rpc error: code = NotFound desc = could not find container \"aca650a5f3e1be3992101c04038cf920fd00fa8478c5cc4a3d9f693fff506980\": container with ID starting with aca650a5f3e1be3992101c04038cf920fd00fa8478c5cc4a3d9f693fff506980 not found: ID does not exist" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.578173 4945 scope.go:117] "RemoveContainer" containerID="b976bbce83ffc93b3c7696d5f3304fd605d75f6a0f91a7bfe410982bb2723d33" Jan 08 23:21:51 crc kubenswrapper[4945]: E0108 23:21:51.578510 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b976bbce83ffc93b3c7696d5f3304fd605d75f6a0f91a7bfe410982bb2723d33\": container with ID starting with b976bbce83ffc93b3c7696d5f3304fd605d75f6a0f91a7bfe410982bb2723d33 not found: ID does not exist" containerID="b976bbce83ffc93b3c7696d5f3304fd605d75f6a0f91a7bfe410982bb2723d33" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.578540 4945 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b976bbce83ffc93b3c7696d5f3304fd605d75f6a0f91a7bfe410982bb2723d33"} err="failed to get container status \"b976bbce83ffc93b3c7696d5f3304fd605d75f6a0f91a7bfe410982bb2723d33\": rpc error: code = NotFound desc = could not find container \"b976bbce83ffc93b3c7696d5f3304fd605d75f6a0f91a7bfe410982bb2723d33\": container with ID starting with b976bbce83ffc93b3c7696d5f3304fd605d75f6a0f91a7bfe410982bb2723d33 not found: ID does not exist" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.578561 4945 scope.go:117] "RemoveContainer" containerID="57ea6aa0feed40cd1806b79962c3d3a03cdd127ee1cf45603accf98408566356" Jan 08 23:21:51 crc kubenswrapper[4945]: E0108 23:21:51.578917 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57ea6aa0feed40cd1806b79962c3d3a03cdd127ee1cf45603accf98408566356\": container with ID starting with 57ea6aa0feed40cd1806b79962c3d3a03cdd127ee1cf45603accf98408566356 not found: ID does not exist" containerID="57ea6aa0feed40cd1806b79962c3d3a03cdd127ee1cf45603accf98408566356" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.578948 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57ea6aa0feed40cd1806b79962c3d3a03cdd127ee1cf45603accf98408566356"} err="failed to get container status \"57ea6aa0feed40cd1806b79962c3d3a03cdd127ee1cf45603accf98408566356\": rpc error: code = NotFound desc = could not find container \"57ea6aa0feed40cd1806b79962c3d3a03cdd127ee1cf45603accf98408566356\": container with ID starting with 57ea6aa0feed40cd1806b79962c3d3a03cdd127ee1cf45603accf98408566356 not found: ID does not exist" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.578968 4945 scope.go:117] "RemoveContainer" containerID="fa6e4adbda65f6a1e74db976b974abc062dd08976a439631a705bdeb9d324021" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.591168 4945 scope.go:117] "RemoveContainer" containerID="2f4efd5eff99c107bc96bd4204d03acef1bb48b871916a8900c1e705203cef5c" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.615320 4945 scope.go:117] "RemoveContainer" containerID="966326057c933cdfb79140ea43d38e431f3838b1fb34e95945376b00d53ce9bb" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.618930 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjlsb\" (UniqueName: \"kubernetes.io/projected/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-kube-api-access-zjlsb\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.618949 4945 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93e10fcb-3cb5-454a-bcd1-1eae918e0601-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.618958 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t2pl\" (UniqueName: \"kubernetes.io/projected/b3238595-843e-4c3a-9e67-538483ac4b20-kube-api-access-4t2pl\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.618966 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdgb9\" (UniqueName: \"kubernetes.io/projected/93e10fcb-3cb5-454a-bcd1-1eae918e0601-kube-api-access-hdgb9\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.618975 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b3238595-843e-4c3a-9e67-538483ac4b20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.618983 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3238595-843e-4c3a-9e67-538483ac4b20-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.619008 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.629286 4945 scope.go:117] "RemoveContainer" containerID="fa6e4adbda65f6a1e74db976b974abc062dd08976a439631a705bdeb9d324021" Jan 08 23:21:51 crc kubenswrapper[4945]: E0108 23:21:51.629724 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa6e4adbda65f6a1e74db976b974abc062dd08976a439631a705bdeb9d324021\": container with ID starting with fa6e4adbda65f6a1e74db976b974abc062dd08976a439631a705bdeb9d324021 not found: ID does not exist" containerID="fa6e4adbda65f6a1e74db976b974abc062dd08976a439631a705bdeb9d324021" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.629756 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa6e4adbda65f6a1e74db976b974abc062dd08976a439631a705bdeb9d324021"} err="failed to get container status \"fa6e4adbda65f6a1e74db976b974abc062dd08976a439631a705bdeb9d324021\": rpc error: code = NotFound desc = could not find container \"fa6e4adbda65f6a1e74db976b974abc062dd08976a439631a705bdeb9d324021\": container with ID starting with fa6e4adbda65f6a1e74db976b974abc062dd08976a439631a705bdeb9d324021 not found: ID does not exist" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.629783 4945 scope.go:117] "RemoveContainer" containerID="2f4efd5eff99c107bc96bd4204d03acef1bb48b871916a8900c1e705203cef5c" Jan 08 23:21:51 crc kubenswrapper[4945]: E0108 23:21:51.630415 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f4efd5eff99c107bc96bd4204d03acef1bb48b871916a8900c1e705203cef5c\": container with ID starting with 2f4efd5eff99c107bc96bd4204d03acef1bb48b871916a8900c1e705203cef5c not found: ID does not exist" containerID="2f4efd5eff99c107bc96bd4204d03acef1bb48b871916a8900c1e705203cef5c" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.630432 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f4efd5eff99c107bc96bd4204d03acef1bb48b871916a8900c1e705203cef5c"} err="failed to get container status \"2f4efd5eff99c107bc96bd4204d03acef1bb48b871916a8900c1e705203cef5c\": rpc error: code = NotFound desc = could not find container \"2f4efd5eff99c107bc96bd4204d03acef1bb48b871916a8900c1e705203cef5c\": container with ID starting with 2f4efd5eff99c107bc96bd4204d03acef1bb48b871916a8900c1e705203cef5c not found: ID does not exist" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.630444 4945 scope.go:117] "RemoveContainer" containerID="966326057c933cdfb79140ea43d38e431f3838b1fb34e95945376b00d53ce9bb" Jan 08 23:21:51 crc kubenswrapper[4945]: E0108 23:21:51.630677 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"966326057c933cdfb79140ea43d38e431f3838b1fb34e95945376b00d53ce9bb\": container 
with ID starting with 966326057c933cdfb79140ea43d38e431f3838b1fb34e95945376b00d53ce9bb not found: ID does not exist" containerID="966326057c933cdfb79140ea43d38e431f3838b1fb34e95945376b00d53ce9bb" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.630695 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"966326057c933cdfb79140ea43d38e431f3838b1fb34e95945376b00d53ce9bb"} err="failed to get container status \"966326057c933cdfb79140ea43d38e431f3838b1fb34e95945376b00d53ce9bb\": rpc error: code = NotFound desc = could not find container \"966326057c933cdfb79140ea43d38e431f3838b1fb34e95945376b00d53ce9bb\": container with ID starting with 966326057c933cdfb79140ea43d38e431f3838b1fb34e95945376b00d53ce9bb not found: ID does not exist" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.630711 4945 scope.go:117] "RemoveContainer" containerID="078ff84b0a35b604a2519bc7d0359013ef6bb745cb928a8a6a3e4c9397026f53" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.639933 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08cb759a-5b37-46e6-9b1f-5e84fabc66cd" (UID: "08cb759a-5b37-46e6-9b1f-5e84fabc66cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.643118 4945 scope.go:117] "RemoveContainer" containerID="d182b6e21adb9b83b1fca3afe9e2ec31454e15a8de70b432a5dd048dee86b5db" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.654160 4945 scope.go:117] "RemoveContainer" containerID="34afe8e120c7ea975546bfb4aeb6033f671a28fcbe47f45ee3e1e4b7a08cfb4b" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.672970 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tl6cr"] Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.673474 4945 scope.go:117] "RemoveContainer" containerID="078ff84b0a35b604a2519bc7d0359013ef6bb745cb928a8a6a3e4c9397026f53" Jan 08 23:21:51 crc kubenswrapper[4945]: E0108 23:21:51.674036 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"078ff84b0a35b604a2519bc7d0359013ef6bb745cb928a8a6a3e4c9397026f53\": container with ID starting with 078ff84b0a35b604a2519bc7d0359013ef6bb745cb928a8a6a3e4c9397026f53 not found: ID does not exist" containerID="078ff84b0a35b604a2519bc7d0359013ef6bb745cb928a8a6a3e4c9397026f53" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.674068 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"078ff84b0a35b604a2519bc7d0359013ef6bb745cb928a8a6a3e4c9397026f53"} err="failed to get container status \"078ff84b0a35b604a2519bc7d0359013ef6bb745cb928a8a6a3e4c9397026f53\": rpc error: code = NotFound desc = could not find container \"078ff84b0a35b604a2519bc7d0359013ef6bb745cb928a8a6a3e4c9397026f53\": container with ID starting with 078ff84b0a35b604a2519bc7d0359013ef6bb745cb928a8a6a3e4c9397026f53 not found: ID does not exist" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.674114 4945 scope.go:117] "RemoveContainer" containerID="d182b6e21adb9b83b1fca3afe9e2ec31454e15a8de70b432a5dd048dee86b5db" Jan 08 23:21:51 crc kubenswrapper[4945]: E0108 23:21:51.674544 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"d182b6e21adb9b83b1fca3afe9e2ec31454e15a8de70b432a5dd048dee86b5db\": container with ID starting with d182b6e21adb9b83b1fca3afe9e2ec31454e15a8de70b432a5dd048dee86b5db not found: ID does not exist" containerID="d182b6e21adb9b83b1fca3afe9e2ec31454e15a8de70b432a5dd048dee86b5db" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.674569 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d182b6e21adb9b83b1fca3afe9e2ec31454e15a8de70b432a5dd048dee86b5db"} err="failed to get container status \"d182b6e21adb9b83b1fca3afe9e2ec31454e15a8de70b432a5dd048dee86b5db\": rpc error: code = NotFound desc = could not find container \"d182b6e21adb9b83b1fca3afe9e2ec31454e15a8de70b432a5dd048dee86b5db\": container with ID starting with d182b6e21adb9b83b1fca3afe9e2ec31454e15a8de70b432a5dd048dee86b5db not found: ID does not exist" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.674601 4945 scope.go:117] "RemoveContainer" containerID="34afe8e120c7ea975546bfb4aeb6033f671a28fcbe47f45ee3e1e4b7a08cfb4b" Jan 08 23:21:51 crc kubenswrapper[4945]: E0108 23:21:51.674886 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34afe8e120c7ea975546bfb4aeb6033f671a28fcbe47f45ee3e1e4b7a08cfb4b\": container with ID starting with 34afe8e120c7ea975546bfb4aeb6033f671a28fcbe47f45ee3e1e4b7a08cfb4b not found: ID does not exist" containerID="34afe8e120c7ea975546bfb4aeb6033f671a28fcbe47f45ee3e1e4b7a08cfb4b" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.674919 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34afe8e120c7ea975546bfb4aeb6033f671a28fcbe47f45ee3e1e4b7a08cfb4b"} err="failed to get container status \"34afe8e120c7ea975546bfb4aeb6033f671a28fcbe47f45ee3e1e4b7a08cfb4b\": rpc error: code = NotFound desc = could not find container \"34afe8e120c7ea975546bfb4aeb6033f671a28fcbe47f45ee3e1e4b7a08cfb4b\": container with ID starting with 34afe8e120c7ea975546bfb4aeb6033f671a28fcbe47f45ee3e1e4b7a08cfb4b not found: ID does not exist" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.719818 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cb759a-5b37-46e6-9b1f-5e84fabc66cd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.777288 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vxpmj"] Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.783783 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vxpmj"] Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.786856 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w8ptz"] Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.793066 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w8ptz"] Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.798959 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dvcsg"] Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.805498 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dvcsg"] Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.808482 4945 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-r2hvk"] Jan 08 23:21:51 crc kubenswrapper[4945]: I0108 23:21:51.810916 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2hvk"] Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.006401 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08cb759a-5b37-46e6-9b1f-5e84fabc66cd" path="/var/lib/kubelet/pods/08cb759a-5b37-46e6-9b1f-5e84fabc66cd/volumes" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.007195 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4befd52c-d042-4259-8998-533f8a61dddd" path="/var/lib/kubelet/pods/4befd52c-d042-4259-8998-533f8a61dddd/volumes" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.007896 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93e10fcb-3cb5-454a-bcd1-1eae918e0601" path="/var/lib/kubelet/pods/93e10fcb-3cb5-454a-bcd1-1eae918e0601/volumes" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.008914 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3238595-843e-4c3a-9e67-538483ac4b20" path="/var/lib/kubelet/pods/b3238595-843e-4c3a-9e67-538483ac4b20/volumes" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.009557 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dce1a9c0-149a-4062-a166-84829a9dc2ec" path="/var/lib/kubelet/pods/dce1a9c0-149a-4062-a166-84829a9dc2ec/volumes" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.459408 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tl6cr" event={"ID":"776b72cb-be81-499c-9a26-09ce115d3b8e","Type":"ContainerStarted","Data":"9e6d0a12b95edffb7cd8ba2e2760e1149683f6dbbf324a783f45313e39c13aac"} Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.459457 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tl6cr" event={"ID":"776b72cb-be81-499c-9a26-09ce115d3b8e","Type":"ContainerStarted","Data":"55964ee575a7a82595543e85d87049e6858bec68f388130e96b734f2bc19a264"} Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.461244 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tl6cr" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.463069 4945 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tl6cr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body= Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.463105 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tl6cr" podUID="776b72cb-be81-499c-9a26-09ce115d3b8e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.481195 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tl6cr" podStartSLOduration=2.481177887 podStartE2EDuration="2.481177887s" podCreationTimestamp="2026-01-08 23:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-08 23:21:52.481131366 +0000 UTC m=+382.792290332" watchObservedRunningTime="2026-01-08 23:21:52.481177887 +0000 UTC m=+382.792336833" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686084 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tv4b6"] Jan 08 23:21:52 crc kubenswrapper[4945]: E0108 23:21:52.686276 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3238595-843e-4c3a-9e67-538483ac4b20" containerName="extract-content" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686289 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3238595-843e-4c3a-9e67-538483ac4b20" containerName="extract-content" Jan 08 23:21:52 crc kubenswrapper[4945]: E0108 23:21:52.686299 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce1a9c0-149a-4062-a166-84829a9dc2ec" containerName="registry-server" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686305 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce1a9c0-149a-4062-a166-84829a9dc2ec" containerName="registry-server" Jan 08 23:21:52 crc kubenswrapper[4945]: E0108 23:21:52.686317 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e10fcb-3cb5-454a-bcd1-1eae918e0601" containerName="marketplace-operator" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686324 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e10fcb-3cb5-454a-bcd1-1eae918e0601" containerName="marketplace-operator" Jan 08 23:21:52 crc kubenswrapper[4945]: E0108 23:21:52.686332 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cb759a-5b37-46e6-9b1f-5e84fabc66cd" containerName="registry-server" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686337 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cb759a-5b37-46e6-9b1f-5e84fabc66cd" containerName="registry-server" Jan 08 23:21:52 crc kubenswrapper[4945]: E0108 23:21:52.686347 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4befd52c-d042-4259-8998-533f8a61dddd" containerName="extract-utilities" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686352 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="4befd52c-d042-4259-8998-533f8a61dddd" containerName="extract-utilities" Jan 08 23:21:52 crc kubenswrapper[4945]: E0108 23:21:52.686362 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cb759a-5b37-46e6-9b1f-5e84fabc66cd" containerName="extract-utilities" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686368 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cb759a-5b37-46e6-9b1f-5e84fabc66cd" containerName="extract-utilities" Jan 08 23:21:52 crc kubenswrapper[4945]: E0108 23:21:52.686375 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cb759a-5b37-46e6-9b1f-5e84fabc66cd" containerName="extract-content" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686381 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cb759a-5b37-46e6-9b1f-5e84fabc66cd" containerName="extract-content" Jan 08 23:21:52 crc kubenswrapper[4945]: E0108 23:21:52.686389 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4befd52c-d042-4259-8998-533f8a61dddd" containerName="extract-content" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686394 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="4befd52c-d042-4259-8998-533f8a61dddd" containerName="extract-content" Jan 08 23:21:52 crc 
kubenswrapper[4945]: E0108 23:21:52.686403 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce1a9c0-149a-4062-a166-84829a9dc2ec" containerName="extract-utilities" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686409 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce1a9c0-149a-4062-a166-84829a9dc2ec" containerName="extract-utilities" Jan 08 23:21:52 crc kubenswrapper[4945]: E0108 23:21:52.686419 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce1a9c0-149a-4062-a166-84829a9dc2ec" containerName="extract-content" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686425 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce1a9c0-149a-4062-a166-84829a9dc2ec" containerName="extract-content" Jan 08 23:21:52 crc kubenswrapper[4945]: E0108 23:21:52.686433 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3238595-843e-4c3a-9e67-538483ac4b20" containerName="extract-utilities" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686438 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3238595-843e-4c3a-9e67-538483ac4b20" containerName="extract-utilities" Jan 08 23:21:52 crc kubenswrapper[4945]: E0108 23:21:52.686447 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4befd52c-d042-4259-8998-533f8a61dddd" containerName="registry-server" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686452 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="4befd52c-d042-4259-8998-533f8a61dddd" containerName="registry-server" Jan 08 23:21:52 crc kubenswrapper[4945]: E0108 23:21:52.686460 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3238595-843e-4c3a-9e67-538483ac4b20" containerName="registry-server" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686466 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3238595-843e-4c3a-9e67-538483ac4b20" containerName="registry-server" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686543 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e10fcb-3cb5-454a-bcd1-1eae918e0601" containerName="marketplace-operator" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686554 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce1a9c0-149a-4062-a166-84829a9dc2ec" containerName="registry-server" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686564 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="4befd52c-d042-4259-8998-533f8a61dddd" containerName="registry-server" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686573 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3238595-843e-4c3a-9e67-538483ac4b20" containerName="registry-server" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686581 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="08cb759a-5b37-46e6-9b1f-5e84fabc66cd" containerName="registry-server" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686589 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e10fcb-3cb5-454a-bcd1-1eae918e0601" containerName="marketplace-operator" Jan 08 23:21:52 crc kubenswrapper[4945]: E0108 23:21:52.686672 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e10fcb-3cb5-454a-bcd1-1eae918e0601" containerName="marketplace-operator" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.686679 4945 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="93e10fcb-3cb5-454a-bcd1-1eae918e0601" containerName="marketplace-operator" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.687272 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tv4b6" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.689859 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.700022 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tv4b6"] Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.837174 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-catalog-content\") pod \"certified-operators-tv4b6\" (UID: \"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f\") " pod="openshift-marketplace/certified-operators-tv4b6" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.837235 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-utilities\") pod \"certified-operators-tv4b6\" (UID: \"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f\") " pod="openshift-marketplace/certified-operators-tv4b6" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.837274 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gklb\" (UniqueName: \"kubernetes.io/projected/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-kube-api-access-7gklb\") pod \"certified-operators-tv4b6\" (UID: \"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f\") " pod="openshift-marketplace/certified-operators-tv4b6" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.938251 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-utilities\") pod \"certified-operators-tv4b6\" (UID: \"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f\") " pod="openshift-marketplace/certified-operators-tv4b6" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.938324 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gklb\" (UniqueName: \"kubernetes.io/projected/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-kube-api-access-7gklb\") pod \"certified-operators-tv4b6\" (UID: \"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f\") " pod="openshift-marketplace/certified-operators-tv4b6" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.938377 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-catalog-content\") pod \"certified-operators-tv4b6\" (UID: \"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f\") " pod="openshift-marketplace/certified-operators-tv4b6" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.938848 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-utilities\") pod \"certified-operators-tv4b6\" (UID: \"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f\") " pod="openshift-marketplace/certified-operators-tv4b6" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.938883 4945 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-catalog-content\") pod \"certified-operators-tv4b6\" (UID: \"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f\") " pod="openshift-marketplace/certified-operators-tv4b6" Jan 08 23:21:52 crc kubenswrapper[4945]: I0108 23:21:52.957120 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gklb\" (UniqueName: \"kubernetes.io/projected/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-kube-api-access-7gklb\") pod \"certified-operators-tv4b6\" (UID: \"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f\") " pod="openshift-marketplace/certified-operators-tv4b6" Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.005622 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tv4b6" Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.286583 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xsf4k"] Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.287592 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xsf4k" Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.290063 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.295881 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xsf4k"] Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.389040 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tv4b6"] Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.445506 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7qdl\" (UniqueName: \"kubernetes.io/projected/fe4f2df8-e361-4814-bc78-16d82dd1cb84-kube-api-access-h7qdl\") pod \"redhat-marketplace-xsf4k\" (UID: \"fe4f2df8-e361-4814-bc78-16d82dd1cb84\") " pod="openshift-marketplace/redhat-marketplace-xsf4k" Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.445797 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe4f2df8-e361-4814-bc78-16d82dd1cb84-utilities\") pod \"redhat-marketplace-xsf4k\" (UID: \"fe4f2df8-e361-4814-bc78-16d82dd1cb84\") " pod="openshift-marketplace/redhat-marketplace-xsf4k" Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.445846 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe4f2df8-e361-4814-bc78-16d82dd1cb84-catalog-content\") pod \"redhat-marketplace-xsf4k\" (UID: \"fe4f2df8-e361-4814-bc78-16d82dd1cb84\") " pod="openshift-marketplace/redhat-marketplace-xsf4k" Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.470280 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv4b6" event={"ID":"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f","Type":"ContainerStarted","Data":"a7ff6884284f2cbbba84cfe22e333869ae93c94cdd8e0b50a0c294c0b8085774"} Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.474042 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tl6cr" 
Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.547041 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7qdl\" (UniqueName: \"kubernetes.io/projected/fe4f2df8-e361-4814-bc78-16d82dd1cb84-kube-api-access-h7qdl\") pod \"redhat-marketplace-xsf4k\" (UID: \"fe4f2df8-e361-4814-bc78-16d82dd1cb84\") " pod="openshift-marketplace/redhat-marketplace-xsf4k" Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.547082 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe4f2df8-e361-4814-bc78-16d82dd1cb84-utilities\") pod \"redhat-marketplace-xsf4k\" (UID: \"fe4f2df8-e361-4814-bc78-16d82dd1cb84\") " pod="openshift-marketplace/redhat-marketplace-xsf4k" Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.547133 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe4f2df8-e361-4814-bc78-16d82dd1cb84-catalog-content\") pod \"redhat-marketplace-xsf4k\" (UID: \"fe4f2df8-e361-4814-bc78-16d82dd1cb84\") " pod="openshift-marketplace/redhat-marketplace-xsf4k" Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.547482 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe4f2df8-e361-4814-bc78-16d82dd1cb84-utilities\") pod \"redhat-marketplace-xsf4k\" (UID: \"fe4f2df8-e361-4814-bc78-16d82dd1cb84\") " pod="openshift-marketplace/redhat-marketplace-xsf4k" Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.547499 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe4f2df8-e361-4814-bc78-16d82dd1cb84-catalog-content\") pod \"redhat-marketplace-xsf4k\" (UID: \"fe4f2df8-e361-4814-bc78-16d82dd1cb84\") " pod="openshift-marketplace/redhat-marketplace-xsf4k" Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.567478 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7qdl\" (UniqueName: \"kubernetes.io/projected/fe4f2df8-e361-4814-bc78-16d82dd1cb84-kube-api-access-h7qdl\") pod \"redhat-marketplace-xsf4k\" (UID: \"fe4f2df8-e361-4814-bc78-16d82dd1cb84\") " pod="openshift-marketplace/redhat-marketplace-xsf4k" Jan 08 23:21:53 crc kubenswrapper[4945]: I0108 23:21:53.660026 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xsf4k" Jan 08 23:21:54 crc kubenswrapper[4945]: I0108 23:21:54.032496 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xsf4k"] Jan 08 23:21:54 crc kubenswrapper[4945]: W0108 23:21:54.041519 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe4f2df8_e361_4814_bc78_16d82dd1cb84.slice/crio-4a9393e7361b012cef120138b0153944489573e0bd35ce4a4d46bfcef40da101 WatchSource:0}: Error finding container 4a9393e7361b012cef120138b0153944489573e0bd35ce4a4d46bfcef40da101: Status 404 returned error can't find the container with id 4a9393e7361b012cef120138b0153944489573e0bd35ce4a4d46bfcef40da101 Jan 08 23:21:54 crc kubenswrapper[4945]: I0108 23:21:54.477124 4945 generic.go:334] "Generic (PLEG): container finished" podID="fe4f2df8-e361-4814-bc78-16d82dd1cb84" containerID="d9bc78e4fddd85f7c70ed870625269084eff49f6f20b0a79a59e0b58d7b7606e" exitCode=0 Jan 08 23:21:54 crc kubenswrapper[4945]: I0108 23:21:54.477206 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsf4k" event={"ID":"fe4f2df8-e361-4814-bc78-16d82dd1cb84","Type":"ContainerDied","Data":"d9bc78e4fddd85f7c70ed870625269084eff49f6f20b0a79a59e0b58d7b7606e"} Jan 08 23:21:54 crc kubenswrapper[4945]: I0108 23:21:54.477395 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsf4k" event={"ID":"fe4f2df8-e361-4814-bc78-16d82dd1cb84","Type":"ContainerStarted","Data":"4a9393e7361b012cef120138b0153944489573e0bd35ce4a4d46bfcef40da101"} Jan 08 23:21:54 crc kubenswrapper[4945]: I0108 23:21:54.479138 4945 generic.go:334] "Generic (PLEG): container finished" podID="1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f" containerID="6967539e31b7e0b6cb6537a47b42747e576e7127798585eff52c368aeb9fe36f" exitCode=0 Jan 08 23:21:54 crc kubenswrapper[4945]: I0108 23:21:54.479183 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv4b6" event={"ID":"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f","Type":"ContainerDied","Data":"6967539e31b7e0b6cb6537a47b42747e576e7127798585eff52c368aeb9fe36f"} Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.089726 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mpr9p"] Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.090921 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mpr9p" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.092658 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.108015 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mpr9p"] Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.271896 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4gd7\" (UniqueName: \"kubernetes.io/projected/ce75ef30-8da2-4993-b5d6-6db6250cb3ac-kube-api-access-x4gd7\") pod \"redhat-operators-mpr9p\" (UID: \"ce75ef30-8da2-4993-b5d6-6db6250cb3ac\") " pod="openshift-marketplace/redhat-operators-mpr9p" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.272003 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce75ef30-8da2-4993-b5d6-6db6250cb3ac-utilities\") pod \"redhat-operators-mpr9p\" (UID: \"ce75ef30-8da2-4993-b5d6-6db6250cb3ac\") " pod="openshift-marketplace/redhat-operators-mpr9p" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.272026 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce75ef30-8da2-4993-b5d6-6db6250cb3ac-catalog-content\") pod \"redhat-operators-mpr9p\" (UID: \"ce75ef30-8da2-4993-b5d6-6db6250cb3ac\") " pod="openshift-marketplace/redhat-operators-mpr9p" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.373384 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce75ef30-8da2-4993-b5d6-6db6250cb3ac-utilities\") pod \"redhat-operators-mpr9p\" (UID: \"ce75ef30-8da2-4993-b5d6-6db6250cb3ac\") " pod="openshift-marketplace/redhat-operators-mpr9p" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.373455 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce75ef30-8da2-4993-b5d6-6db6250cb3ac-catalog-content\") pod \"redhat-operators-mpr9p\" (UID: \"ce75ef30-8da2-4993-b5d6-6db6250cb3ac\") " pod="openshift-marketplace/redhat-operators-mpr9p" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.373511 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4gd7\" (UniqueName: \"kubernetes.io/projected/ce75ef30-8da2-4993-b5d6-6db6250cb3ac-kube-api-access-x4gd7\") pod \"redhat-operators-mpr9p\" (UID: \"ce75ef30-8da2-4993-b5d6-6db6250cb3ac\") " pod="openshift-marketplace/redhat-operators-mpr9p" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.373829 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce75ef30-8da2-4993-b5d6-6db6250cb3ac-utilities\") pod \"redhat-operators-mpr9p\" (UID: \"ce75ef30-8da2-4993-b5d6-6db6250cb3ac\") " pod="openshift-marketplace/redhat-operators-mpr9p" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.373937 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce75ef30-8da2-4993-b5d6-6db6250cb3ac-catalog-content\") pod \"redhat-operators-mpr9p\" (UID: \"ce75ef30-8da2-4993-b5d6-6db6250cb3ac\") " 
pod="openshift-marketplace/redhat-operators-mpr9p" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.403209 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4gd7\" (UniqueName: \"kubernetes.io/projected/ce75ef30-8da2-4993-b5d6-6db6250cb3ac-kube-api-access-x4gd7\") pod \"redhat-operators-mpr9p\" (UID: \"ce75ef30-8da2-4993-b5d6-6db6250cb3ac\") " pod="openshift-marketplace/redhat-operators-mpr9p" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.412155 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mpr9p" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.490704 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv4b6" event={"ID":"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f","Type":"ContainerStarted","Data":"b0be8d6b7d24156ee0298b713b4515b6ed2bea4cf5ffd924106d5d7cf79301a1"} Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.496971 4945 generic.go:334] "Generic (PLEG): container finished" podID="fe4f2df8-e361-4814-bc78-16d82dd1cb84" containerID="b3209df150ab73805399c0df68989a03d1bdee3358efde16136e8f5d2553ca9c" exitCode=0 Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.497027 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsf4k" event={"ID":"fe4f2df8-e361-4814-bc78-16d82dd1cb84","Type":"ContainerDied","Data":"b3209df150ab73805399c0df68989a03d1bdee3358efde16136e8f5d2553ca9c"} Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.690266 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q99xk"] Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.691544 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q99xk" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.695276 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.699223 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q99xk"] Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.780278 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/035037f1-e099-4416-a125-177d9aeef29f-catalog-content\") pod \"community-operators-q99xk\" (UID: \"035037f1-e099-4416-a125-177d9aeef29f\") " pod="openshift-marketplace/community-operators-q99xk" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.780324 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/035037f1-e099-4416-a125-177d9aeef29f-utilities\") pod \"community-operators-q99xk\" (UID: \"035037f1-e099-4416-a125-177d9aeef29f\") " pod="openshift-marketplace/community-operators-q99xk" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.780352 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mhhk\" (UniqueName: \"kubernetes.io/projected/035037f1-e099-4416-a125-177d9aeef29f-kube-api-access-9mhhk\") pod \"community-operators-q99xk\" (UID: \"035037f1-e099-4416-a125-177d9aeef29f\") " pod="openshift-marketplace/community-operators-q99xk" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.792087 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mpr9p"] Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.881108 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/035037f1-e099-4416-a125-177d9aeef29f-catalog-content\") pod \"community-operators-q99xk\" (UID: \"035037f1-e099-4416-a125-177d9aeef29f\") " pod="openshift-marketplace/community-operators-q99xk" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.881166 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/035037f1-e099-4416-a125-177d9aeef29f-utilities\") pod \"community-operators-q99xk\" (UID: \"035037f1-e099-4416-a125-177d9aeef29f\") " pod="openshift-marketplace/community-operators-q99xk" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.881205 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mhhk\" (UniqueName: \"kubernetes.io/projected/035037f1-e099-4416-a125-177d9aeef29f-kube-api-access-9mhhk\") pod \"community-operators-q99xk\" (UID: \"035037f1-e099-4416-a125-177d9aeef29f\") " pod="openshift-marketplace/community-operators-q99xk" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.882053 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/035037f1-e099-4416-a125-177d9aeef29f-catalog-content\") pod \"community-operators-q99xk\" (UID: \"035037f1-e099-4416-a125-177d9aeef29f\") " pod="openshift-marketplace/community-operators-q99xk" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.882151 4945 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/035037f1-e099-4416-a125-177d9aeef29f-utilities\") pod \"community-operators-q99xk\" (UID: \"035037f1-e099-4416-a125-177d9aeef29f\") " pod="openshift-marketplace/community-operators-q99xk" Jan 08 23:21:55 crc kubenswrapper[4945]: I0108 23:21:55.898593 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mhhk\" (UniqueName: \"kubernetes.io/projected/035037f1-e099-4416-a125-177d9aeef29f-kube-api-access-9mhhk\") pod \"community-operators-q99xk\" (UID: \"035037f1-e099-4416-a125-177d9aeef29f\") " pod="openshift-marketplace/community-operators-q99xk" Jan 08 23:21:56 crc kubenswrapper[4945]: I0108 23:21:56.013746 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q99xk" Jan 08 23:21:56 crc kubenswrapper[4945]: I0108 23:21:56.432513 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q99xk"] Jan 08 23:21:56 crc kubenswrapper[4945]: W0108 23:21:56.441895 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod035037f1_e099_4416_a125_177d9aeef29f.slice/crio-b6de9ec0e07072b014171a1a4415fb63682fa5a60f99a345d9f60f5bf8464fba WatchSource:0}: Error finding container b6de9ec0e07072b014171a1a4415fb63682fa5a60f99a345d9f60f5bf8464fba: Status 404 returned error can't find the container with id b6de9ec0e07072b014171a1a4415fb63682fa5a60f99a345d9f60f5bf8464fba Jan 08 23:21:56 crc kubenswrapper[4945]: I0108 23:21:56.502626 4945 generic.go:334] "Generic (PLEG): container finished" podID="1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f" containerID="b0be8d6b7d24156ee0298b713b4515b6ed2bea4cf5ffd924106d5d7cf79301a1" exitCode=0 Jan 08 23:21:56 crc kubenswrapper[4945]: I0108 23:21:56.502678 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv4b6" event={"ID":"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f","Type":"ContainerDied","Data":"b0be8d6b7d24156ee0298b713b4515b6ed2bea4cf5ffd924106d5d7cf79301a1"} Jan 08 23:21:56 crc kubenswrapper[4945]: I0108 23:21:56.511876 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsf4k" event={"ID":"fe4f2df8-e361-4814-bc78-16d82dd1cb84","Type":"ContainerStarted","Data":"4b394052163306e2ae552b3b19e5d8963ad3e5cf963ddc245c904880338326cb"} Jan 08 23:21:56 crc kubenswrapper[4945]: I0108 23:21:56.514201 4945 generic.go:334] "Generic (PLEG): container finished" podID="ce75ef30-8da2-4993-b5d6-6db6250cb3ac" containerID="61593ec3c19d9e9696cffc2a651c49e343c9b0bcad967acc1b68cfc2fedd6eff" exitCode=0 Jan 08 23:21:56 crc kubenswrapper[4945]: I0108 23:21:56.514250 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpr9p" event={"ID":"ce75ef30-8da2-4993-b5d6-6db6250cb3ac","Type":"ContainerDied","Data":"61593ec3c19d9e9696cffc2a651c49e343c9b0bcad967acc1b68cfc2fedd6eff"} Jan 08 23:21:56 crc kubenswrapper[4945]: I0108 23:21:56.514269 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpr9p" event={"ID":"ce75ef30-8da2-4993-b5d6-6db6250cb3ac","Type":"ContainerStarted","Data":"8d2cadf76163b56fad2c4beaac181be971feec77fdfa4164cb1aff0064ebfeab"} Jan 08 23:21:56 crc kubenswrapper[4945]: I0108 23:21:56.516365 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q99xk" 
event={"ID":"035037f1-e099-4416-a125-177d9aeef29f","Type":"ContainerStarted","Data":"b6de9ec0e07072b014171a1a4415fb63682fa5a60f99a345d9f60f5bf8464fba"} Jan 08 23:21:56 crc kubenswrapper[4945]: I0108 23:21:56.556744 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xsf4k" podStartSLOduration=1.890826839 podStartE2EDuration="3.556726397s" podCreationTimestamp="2026-01-08 23:21:53 +0000 UTC" firstStartedPulling="2026-01-08 23:21:54.479456002 +0000 UTC m=+384.790614948" lastFinishedPulling="2026-01-08 23:21:56.14535556 +0000 UTC m=+386.456514506" observedRunningTime="2026-01-08 23:21:56.553233192 +0000 UTC m=+386.864392138" watchObservedRunningTime="2026-01-08 23:21:56.556726397 +0000 UTC m=+386.867885333" Jan 08 23:21:57 crc kubenswrapper[4945]: I0108 23:21:57.525472 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv4b6" event={"ID":"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f","Type":"ContainerStarted","Data":"f51c7c39d2c6b6e4372c8857de35f4abe4229e796dfc3a6d9324d7b59017cf6d"} Jan 08 23:21:57 crc kubenswrapper[4945]: I0108 23:21:57.527828 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpr9p" event={"ID":"ce75ef30-8da2-4993-b5d6-6db6250cb3ac","Type":"ContainerStarted","Data":"0b03c4f27cdacc9c09e596fb5c355ae45638444dbd04c4f5817941fc5d92af96"} Jan 08 23:21:57 crc kubenswrapper[4945]: I0108 23:21:57.529659 4945 generic.go:334] "Generic (PLEG): container finished" podID="035037f1-e099-4416-a125-177d9aeef29f" containerID="0dd1d08d974b90e2be829c9061dfaa074e7a175f5a4827fdf64858fd9f7fd718" exitCode=0 Jan 08 23:21:57 crc kubenswrapper[4945]: I0108 23:21:57.530613 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q99xk" event={"ID":"035037f1-e099-4416-a125-177d9aeef29f","Type":"ContainerDied","Data":"0dd1d08d974b90e2be829c9061dfaa074e7a175f5a4827fdf64858fd9f7fd718"} Jan 08 23:21:57 crc kubenswrapper[4945]: I0108 23:21:57.543874 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tv4b6" podStartSLOduration=3.141893188 podStartE2EDuration="5.543856141s" podCreationTimestamp="2026-01-08 23:21:52 +0000 UTC" firstStartedPulling="2026-01-08 23:21:54.481165454 +0000 UTC m=+384.792324400" lastFinishedPulling="2026-01-08 23:21:56.883128407 +0000 UTC m=+387.194287353" observedRunningTime="2026-01-08 23:21:57.543179275 +0000 UTC m=+387.854338221" watchObservedRunningTime="2026-01-08 23:21:57.543856141 +0000 UTC m=+387.855015087" Jan 08 23:21:58 crc kubenswrapper[4945]: I0108 23:21:58.537677 4945 generic.go:334] "Generic (PLEG): container finished" podID="ce75ef30-8da2-4993-b5d6-6db6250cb3ac" containerID="0b03c4f27cdacc9c09e596fb5c355ae45638444dbd04c4f5817941fc5d92af96" exitCode=0 Jan 08 23:21:58 crc kubenswrapper[4945]: I0108 23:21:58.539728 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpr9p" event={"ID":"ce75ef30-8da2-4993-b5d6-6db6250cb3ac","Type":"ContainerDied","Data":"0b03c4f27cdacc9c09e596fb5c355ae45638444dbd04c4f5817941fc5d92af96"} Jan 08 23:22:02 crc kubenswrapper[4945]: I0108 23:22:02.561155 4945 generic.go:334] "Generic (PLEG): container finished" podID="035037f1-e099-4416-a125-177d9aeef29f" containerID="d9a1cc05795b2ce2b04a6d19fd727cda07b7e9254fd6c42fdb1fe8a913299836" exitCode=0 Jan 08 23:22:02 crc kubenswrapper[4945]: I0108 23:22:02.561717 4945 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q99xk" event={"ID":"035037f1-e099-4416-a125-177d9aeef29f","Type":"ContainerDied","Data":"d9a1cc05795b2ce2b04a6d19fd727cda07b7e9254fd6c42fdb1fe8a913299836"} Jan 08 23:22:02 crc kubenswrapper[4945]: I0108 23:22:02.564965 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpr9p" event={"ID":"ce75ef30-8da2-4993-b5d6-6db6250cb3ac","Type":"ContainerStarted","Data":"d327ce9ec1e5d522b61d1033b58e9a3842e9beb2807cec185a3a4408c3d16e76"} Jan 08 23:22:02 crc kubenswrapper[4945]: I0108 23:22:02.607849 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mpr9p" podStartSLOduration=2.533791332 podStartE2EDuration="7.607830444s" podCreationTimestamp="2026-01-08 23:21:55 +0000 UTC" firstStartedPulling="2026-01-08 23:21:56.515262834 +0000 UTC m=+386.826421780" lastFinishedPulling="2026-01-08 23:22:01.589301946 +0000 UTC m=+391.900460892" observedRunningTime="2026-01-08 23:22:02.607043645 +0000 UTC m=+392.918202611" watchObservedRunningTime="2026-01-08 23:22:02.607830444 +0000 UTC m=+392.918989390" Jan 08 23:22:03 crc kubenswrapper[4945]: I0108 23:22:03.006817 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tv4b6" Jan 08 23:22:03 crc kubenswrapper[4945]: I0108 23:22:03.007211 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tv4b6" Jan 08 23:22:03 crc kubenswrapper[4945]: I0108 23:22:03.049027 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tv4b6" Jan 08 23:22:03 crc kubenswrapper[4945]: I0108 23:22:03.627368 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tv4b6" Jan 08 23:22:03 crc kubenswrapper[4945]: I0108 23:22:03.660960 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xsf4k" Jan 08 23:22:03 crc kubenswrapper[4945]: I0108 23:22:03.661017 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xsf4k" Jan 08 23:22:03 crc kubenswrapper[4945]: I0108 23:22:03.716865 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xsf4k" Jan 08 23:22:04 crc kubenswrapper[4945]: I0108 23:22:04.585723 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q99xk" event={"ID":"035037f1-e099-4416-a125-177d9aeef29f","Type":"ContainerStarted","Data":"a2103614b7d5e1394ff4423fad6e43d92a213554a8562c93e1d33516510a0f3b"} Jan 08 23:22:04 crc kubenswrapper[4945]: I0108 23:22:04.610369 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q99xk" podStartSLOduration=3.620075493 podStartE2EDuration="9.610342713s" podCreationTimestamp="2026-01-08 23:21:55 +0000 UTC" firstStartedPulling="2026-01-08 23:21:57.531097069 +0000 UTC m=+387.842256015" lastFinishedPulling="2026-01-08 23:22:03.521364279 +0000 UTC m=+393.832523235" observedRunningTime="2026-01-08 23:22:04.605778673 +0000 UTC m=+394.916937619" watchObservedRunningTime="2026-01-08 23:22:04.610342713 +0000 UTC m=+394.921501659" Jan 08 23:22:04 crc kubenswrapper[4945]: I0108 23:22:04.635206 4945 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xsf4k" Jan 08 23:22:05 crc kubenswrapper[4945]: I0108 23:22:05.413381 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mpr9p" Jan 08 23:22:05 crc kubenswrapper[4945]: I0108 23:22:05.413437 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mpr9p" Jan 08 23:22:05 crc kubenswrapper[4945]: I0108 23:22:05.924512 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" podUID="884cb1ac-efad-4ead-b31f-7301081aa310" containerName="registry" containerID="cri-o://a305b12b487c2666baf090e6f682730e55375f6ac7b81f5aacdf74abc5b2dd25" gracePeriod=30 Jan 08 23:22:06 crc kubenswrapper[4945]: I0108 23:22:06.015090 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q99xk" Jan 08 23:22:06 crc kubenswrapper[4945]: I0108 23:22:06.015374 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q99xk" Jan 08 23:22:06 crc kubenswrapper[4945]: I0108 23:22:06.058467 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q99xk" Jan 08 23:22:06 crc kubenswrapper[4945]: I0108 23:22:06.451858 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mpr9p" podUID="ce75ef30-8da2-4993-b5d6-6db6250cb3ac" containerName="registry-server" probeResult="failure" output=< Jan 08 23:22:06 crc kubenswrapper[4945]: timeout: failed to connect service ":50051" within 1s Jan 08 23:22:06 crc kubenswrapper[4945]: > Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.604082 4945 generic.go:334] "Generic (PLEG): container finished" podID="884cb1ac-efad-4ead-b31f-7301081aa310" containerID="a305b12b487c2666baf090e6f682730e55375f6ac7b81f5aacdf74abc5b2dd25" exitCode=0 Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.604181 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" event={"ID":"884cb1ac-efad-4ead-b31f-7301081aa310","Type":"ContainerDied","Data":"a305b12b487c2666baf090e6f682730e55375f6ac7b81f5aacdf74abc5b2dd25"} Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.656477 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.745337 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-bound-sa-token\") pod \"884cb1ac-efad-4ead-b31f-7301081aa310\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.745419 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkfm2\" (UniqueName: \"kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-kube-api-access-tkfm2\") pod \"884cb1ac-efad-4ead-b31f-7301081aa310\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.745465 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/884cb1ac-efad-4ead-b31f-7301081aa310-installation-pull-secrets\") pod \"884cb1ac-efad-4ead-b31f-7301081aa310\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.745498 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-registry-tls\") pod \"884cb1ac-efad-4ead-b31f-7301081aa310\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.745527 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/884cb1ac-efad-4ead-b31f-7301081aa310-trusted-ca\") pod \"884cb1ac-efad-4ead-b31f-7301081aa310\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.745545 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/884cb1ac-efad-4ead-b31f-7301081aa310-registry-certificates\") pod \"884cb1ac-efad-4ead-b31f-7301081aa310\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.745684 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"884cb1ac-efad-4ead-b31f-7301081aa310\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.745708 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/884cb1ac-efad-4ead-b31f-7301081aa310-ca-trust-extracted\") pod \"884cb1ac-efad-4ead-b31f-7301081aa310\" (UID: \"884cb1ac-efad-4ead-b31f-7301081aa310\") " Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.747161 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/884cb1ac-efad-4ead-b31f-7301081aa310-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "884cb1ac-efad-4ead-b31f-7301081aa310" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.747172 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/884cb1ac-efad-4ead-b31f-7301081aa310-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "884cb1ac-efad-4ead-b31f-7301081aa310" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.757771 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "884cb1ac-efad-4ead-b31f-7301081aa310" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.766085 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-kube-api-access-tkfm2" (OuterVolumeSpecName: "kube-api-access-tkfm2") pod "884cb1ac-efad-4ead-b31f-7301081aa310" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310"). InnerVolumeSpecName "kube-api-access-tkfm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.767041 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/884cb1ac-efad-4ead-b31f-7301081aa310-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "884cb1ac-efad-4ead-b31f-7301081aa310" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.769778 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/884cb1ac-efad-4ead-b31f-7301081aa310-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "884cb1ac-efad-4ead-b31f-7301081aa310" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.775533 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "884cb1ac-efad-4ead-b31f-7301081aa310" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.778784 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "884cb1ac-efad-4ead-b31f-7301081aa310" (UID: "884cb1ac-efad-4ead-b31f-7301081aa310"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.847226 4945 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/884cb1ac-efad-4ead-b31f-7301081aa310-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.847299 4945 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/884cb1ac-efad-4ead-b31f-7301081aa310-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.847318 4945 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/884cb1ac-efad-4ead-b31f-7301081aa310-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.847331 4945 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.847343 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkfm2\" (UniqueName: \"kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-kube-api-access-tkfm2\") on node \"crc\" DevicePath \"\"" Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.847357 4945 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/884cb1ac-efad-4ead-b31f-7301081aa310-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 08 23:22:07 crc kubenswrapper[4945]: I0108 23:22:07.847369 4945 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/884cb1ac-efad-4ead-b31f-7301081aa310-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 08 23:22:08 crc kubenswrapper[4945]: I0108 23:22:08.610377 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" event={"ID":"884cb1ac-efad-4ead-b31f-7301081aa310","Type":"ContainerDied","Data":"87858d27b246132b256bf6c6fbcb0dee2cc0bd79a4ec680e2b5526a8f4a1f6fc"} Jan 08 23:22:08 crc kubenswrapper[4945]: I0108 23:22:08.610442 4945 scope.go:117] "RemoveContainer" containerID="a305b12b487c2666baf090e6f682730e55375f6ac7b81f5aacdf74abc5b2dd25" Jan 08 23:22:08 crc kubenswrapper[4945]: I0108 23:22:08.610450 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j8vl9" Jan 08 23:22:08 crc kubenswrapper[4945]: I0108 23:22:08.636215 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j8vl9"] Jan 08 23:22:08 crc kubenswrapper[4945]: I0108 23:22:08.640891 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j8vl9"] Jan 08 23:22:10 crc kubenswrapper[4945]: I0108 23:22:10.012049 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="884cb1ac-efad-4ead-b31f-7301081aa310" path="/var/lib/kubelet/pods/884cb1ac-efad-4ead-b31f-7301081aa310/volumes" Jan 08 23:22:13 crc kubenswrapper[4945]: I0108 23:22:13.577962 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:22:13 crc kubenswrapper[4945]: I0108 23:22:13.578371 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:22:13 crc kubenswrapper[4945]: I0108 23:22:13.578444 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:22:13 crc kubenswrapper[4945]: I0108 23:22:13.579097 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"03b243e540c86d992ce6bfde8b79c5371746158349f3c2a49e7cec342fe0ef67"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 08 23:22:13 crc kubenswrapper[4945]: I0108 23:22:13.579163 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://03b243e540c86d992ce6bfde8b79c5371746158349f3c2a49e7cec342fe0ef67" gracePeriod=600 Jan 08 23:22:15 crc kubenswrapper[4945]: I0108 23:22:15.463218 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mpr9p" Jan 08 23:22:15 crc kubenswrapper[4945]: I0108 23:22:15.523712 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mpr9p" Jan 08 23:22:15 crc kubenswrapper[4945]: I0108 23:22:15.662935 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="03b243e540c86d992ce6bfde8b79c5371746158349f3c2a49e7cec342fe0ef67" exitCode=0 Jan 08 23:22:15 crc kubenswrapper[4945]: I0108 23:22:15.663783 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"03b243e540c86d992ce6bfde8b79c5371746158349f3c2a49e7cec342fe0ef67"} Jan 08 23:22:15 crc kubenswrapper[4945]: I0108 23:22:15.663819 4945 scope.go:117] "RemoveContainer" 
containerID="5a21b4ebd882b3fbec8c61c1601c4a79f2e6e761921d8115e6dc39d6085e62fc" Jan 08 23:22:16 crc kubenswrapper[4945]: I0108 23:22:16.080705 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q99xk" Jan 08 23:22:16 crc kubenswrapper[4945]: I0108 23:22:16.683573 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"e98a6551c147bc5179d0050676356620bbb4e5029dcbf763406bae9cd08060cb"} Jan 08 23:24:43 crc kubenswrapper[4945]: I0108 23:24:43.578827 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:24:43 crc kubenswrapper[4945]: I0108 23:24:43.579468 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:25:13 crc kubenswrapper[4945]: I0108 23:25:13.578115 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:25:13 crc kubenswrapper[4945]: I0108 23:25:13.581151 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:25:43 crc kubenswrapper[4945]: I0108 23:25:43.578681 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:25:43 crc kubenswrapper[4945]: I0108 23:25:43.579658 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:25:43 crc kubenswrapper[4945]: I0108 23:25:43.579758 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:25:43 crc kubenswrapper[4945]: I0108 23:25:43.580716 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e98a6551c147bc5179d0050676356620bbb4e5029dcbf763406bae9cd08060cb"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 08 23:25:43 crc kubenswrapper[4945]: I0108 23:25:43.580824 
4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://e98a6551c147bc5179d0050676356620bbb4e5029dcbf763406bae9cd08060cb" gracePeriod=600 Jan 08 23:25:44 crc kubenswrapper[4945]: I0108 23:25:44.077100 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="e98a6551c147bc5179d0050676356620bbb4e5029dcbf763406bae9cd08060cb" exitCode=0 Jan 08 23:25:44 crc kubenswrapper[4945]: I0108 23:25:44.077172 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"e98a6551c147bc5179d0050676356620bbb4e5029dcbf763406bae9cd08060cb"} Jan 08 23:25:44 crc kubenswrapper[4945]: I0108 23:25:44.077501 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"e226cbec52724f8156027ca6ca9d14289ac828814d24940b98909ac9a557fa94"} Jan 08 23:25:44 crc kubenswrapper[4945]: I0108 23:25:44.077529 4945 scope.go:117] "RemoveContainer" containerID="03b243e540c86d992ce6bfde8b79c5371746158349f3c2a49e7cec342fe0ef67" Jan 08 23:27:43 crc kubenswrapper[4945]: I0108 23:27:43.578641 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:27:43 crc kubenswrapper[4945]: I0108 23:27:43.579428 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:27:56 crc kubenswrapper[4945]: I0108 23:27:56.850951 4945 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 08 23:28:13 crc kubenswrapper[4945]: I0108 23:28:13.578447 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:28:13 crc kubenswrapper[4945]: I0108 23:28:13.580269 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:28:19 crc kubenswrapper[4945]: I0108 23:28:19.243333 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qftfp"] Jan 08 23:28:19 crc kubenswrapper[4945]: E0108 23:28:19.243939 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="884cb1ac-efad-4ead-b31f-7301081aa310" containerName="registry" Jan 08 23:28:19 crc kubenswrapper[4945]: I0108 
Jan 08 23:28:19 crc kubenswrapper[4945]: I0108 23:28:19.244107 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="884cb1ac-efad-4ead-b31f-7301081aa310" containerName="registry"
Jan 08 23:28:19 crc kubenswrapper[4945]: I0108 23:28:19.245446 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qftfp"
Jan 08 23:28:19 crc kubenswrapper[4945]: I0108 23:28:19.254044 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qftfp"]
Jan 08 23:28:19 crc kubenswrapper[4945]: I0108 23:28:19.311945 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfhqp\" (UniqueName: \"kubernetes.io/projected/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-kube-api-access-gfhqp\") pod \"community-operators-qftfp\" (UID: \"a44e9e6a-740a-4eae-9e09-bc9029a6bd41\") " pod="openshift-marketplace/community-operators-qftfp"
Jan 08 23:28:19 crc kubenswrapper[4945]: I0108 23:28:19.312459 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-utilities\") pod \"community-operators-qftfp\" (UID: \"a44e9e6a-740a-4eae-9e09-bc9029a6bd41\") " pod="openshift-marketplace/community-operators-qftfp"
Jan 08 23:28:19 crc kubenswrapper[4945]: I0108 23:28:19.312521 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-catalog-content\") pod \"community-operators-qftfp\" (UID: \"a44e9e6a-740a-4eae-9e09-bc9029a6bd41\") " pod="openshift-marketplace/community-operators-qftfp"
Jan 08 23:28:19 crc kubenswrapper[4945]: I0108 23:28:19.413253 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfhqp\" (UniqueName: \"kubernetes.io/projected/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-kube-api-access-gfhqp\") pod \"community-operators-qftfp\" (UID: \"a44e9e6a-740a-4eae-9e09-bc9029a6bd41\") " pod="openshift-marketplace/community-operators-qftfp"
Jan 08 23:28:19 crc kubenswrapper[4945]: I0108 23:28:19.413307 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-utilities\") pod \"community-operators-qftfp\" (UID: \"a44e9e6a-740a-4eae-9e09-bc9029a6bd41\") " pod="openshift-marketplace/community-operators-qftfp"
Jan 08 23:28:19 crc kubenswrapper[4945]: I0108 23:28:19.413371 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-catalog-content\") pod \"community-operators-qftfp\" (UID: \"a44e9e6a-740a-4eae-9e09-bc9029a6bd41\") " pod="openshift-marketplace/community-operators-qftfp"
Jan 08 23:28:19 crc kubenswrapper[4945]: I0108 23:28:19.413811 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-catalog-content\") pod \"community-operators-qftfp\" (UID: \"a44e9e6a-740a-4eae-9e09-bc9029a6bd41\") " pod="openshift-marketplace/community-operators-qftfp"
Jan 08 23:28:19 crc kubenswrapper[4945]: I0108 23:28:19.413923 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-utilities\") pod \"community-operators-qftfp\" (UID: \"a44e9e6a-740a-4eae-9e09-bc9029a6bd41\") " pod="openshift-marketplace/community-operators-qftfp"
Jan 08 23:28:19 crc kubenswrapper[4945]: I0108 23:28:19.430539 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfhqp\" (UniqueName: \"kubernetes.io/projected/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-kube-api-access-gfhqp\") pod \"community-operators-qftfp\" (UID: \"a44e9e6a-740a-4eae-9e09-bc9029a6bd41\") " pod="openshift-marketplace/community-operators-qftfp"
Jan 08 23:28:19 crc kubenswrapper[4945]: I0108 23:28:19.582235 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qftfp"
Jan 08 23:28:19 crc kubenswrapper[4945]: I0108 23:28:19.815722 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qftfp"]
Jan 08 23:28:20 crc kubenswrapper[4945]: I0108 23:28:20.056840 4945 generic.go:334] "Generic (PLEG): container finished" podID="a44e9e6a-740a-4eae-9e09-bc9029a6bd41" containerID="aa58ae19fa33d567bb83f60e0425b2b73582920de40d5706395d3a98b6ebe858" exitCode=0
Jan 08 23:28:20 crc kubenswrapper[4945]: I0108 23:28:20.056878 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qftfp" event={"ID":"a44e9e6a-740a-4eae-9e09-bc9029a6bd41","Type":"ContainerDied","Data":"aa58ae19fa33d567bb83f60e0425b2b73582920de40d5706395d3a98b6ebe858"}
Jan 08 23:28:20 crc kubenswrapper[4945]: I0108 23:28:20.056900 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qftfp" event={"ID":"a44e9e6a-740a-4eae-9e09-bc9029a6bd41","Type":"ContainerStarted","Data":"dca73469c7dc2a26cac371c8a70fbceeb714f7d54e25e17ec44f6a65e8211fc0"}
Jan 08 23:28:20 crc kubenswrapper[4945]: I0108 23:28:20.058749 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 08 23:28:21 crc kubenswrapper[4945]: I0108 23:28:21.066406 4945 generic.go:334] "Generic (PLEG): container finished" podID="a44e9e6a-740a-4eae-9e09-bc9029a6bd41" containerID="ed1e0d118b39e114d373bfe1b835518da882675c20e607b3fb3dccf89f65dd24" exitCode=0
Jan 08 23:28:21 crc kubenswrapper[4945]: I0108 23:28:21.066515 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qftfp" event={"ID":"a44e9e6a-740a-4eae-9e09-bc9029a6bd41","Type":"ContainerDied","Data":"ed1e0d118b39e114d373bfe1b835518da882675c20e607b3fb3dccf89f65dd24"}
Jan 08 23:28:22 crc kubenswrapper[4945]: I0108 23:28:22.074242 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qftfp" event={"ID":"a44e9e6a-740a-4eae-9e09-bc9029a6bd41","Type":"ContainerStarted","Data":"93d3e2cce8b655987d521bc65d77266b2ea427bce99f249ed624b408964adf52"}
Jan 08 23:28:22 crc kubenswrapper[4945]: I0108 23:28:22.090204 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qftfp" podStartSLOduration=1.412695678 podStartE2EDuration="3.090186303s" podCreationTimestamp="2026-01-08 23:28:19 +0000 UTC" firstStartedPulling="2026-01-08 23:28:20.058542251 +0000 UTC m=+770.369701197" lastFinishedPulling="2026-01-08 23:28:21.736032876 +0000 UTC m=+772.047191822" observedRunningTime="2026-01-08 23:28:22.088384828 +0000 UTC m=+772.399543814" watchObservedRunningTime="2026-01-08 23:28:22.090186303 +0000 UTC m=+772.401345249"
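[Editor's note] The startup-latency line above reports podStartSLOduration=1.41s against a 3.09s end-to-end duration; the SLO figure excludes image-pull time, and the gap matches lastFinishedPulling minus firstStartedPulling. A minimal sketch reproducing that arithmetic from the logged timestamps:

```go
// Sketch: the SLO/E2E gap in the pod_startup_latency_tracker entry is the
// image-pull window. Timestamps are copied verbatim from the log line.
package sloduration

import "time"

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func pullWindow() time.Duration {
	first, _ := time.Parse(layout, "2026-01-08 23:28:20.058542251 +0000 UTC")
	last, _ := time.Parse(layout, "2026-01-08 23:28:21.736032876 +0000 UTC")
	return last.Sub(first) // ~1.677s, i.e. 3.090s E2E minus 1.413s SLO
}
```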
Jan 08 23:28:29 crc kubenswrapper[4945]: I0108 23:28:29.582654 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qftfp"
Jan 08 23:28:29 crc kubenswrapper[4945]: I0108 23:28:29.584039 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qftfp"
Jan 08 23:28:29 crc kubenswrapper[4945]: I0108 23:28:29.627940 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qftfp"
Jan 08 23:28:30 crc kubenswrapper[4945]: I0108 23:28:30.186977 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qftfp"
Jan 08 23:28:30 crc kubenswrapper[4945]: I0108 23:28:30.239620 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qftfp"]
Jan 08 23:28:32 crc kubenswrapper[4945]: I0108 23:28:32.147395 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qftfp" podUID="a44e9e6a-740a-4eae-9e09-bc9029a6bd41" containerName="registry-server" containerID="cri-o://93d3e2cce8b655987d521bc65d77266b2ea427bce99f249ed624b408964adf52" gracePeriod=2
Jan 08 23:28:32 crc kubenswrapper[4945]: I0108 23:28:32.964423 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qftfp"
Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.107858 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfhqp\" (UniqueName: \"kubernetes.io/projected/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-kube-api-access-gfhqp\") pod \"a44e9e6a-740a-4eae-9e09-bc9029a6bd41\" (UID: \"a44e9e6a-740a-4eae-9e09-bc9029a6bd41\") "
Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.108081 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-utilities\") pod \"a44e9e6a-740a-4eae-9e09-bc9029a6bd41\" (UID: \"a44e9e6a-740a-4eae-9e09-bc9029a6bd41\") "
Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.108114 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-catalog-content\") pod \"a44e9e6a-740a-4eae-9e09-bc9029a6bd41\" (UID: \"a44e9e6a-740a-4eae-9e09-bc9029a6bd41\") "
Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.109175 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-utilities" (OuterVolumeSpecName: "utilities") pod "a44e9e6a-740a-4eae-9e09-bc9029a6bd41" (UID: "a44e9e6a-740a-4eae-9e09-bc9029a6bd41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.112382 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-kube-api-access-gfhqp" (OuterVolumeSpecName: "kube-api-access-gfhqp") pod "a44e9e6a-740a-4eae-9e09-bc9029a6bd41" (UID: "a44e9e6a-740a-4eae-9e09-bc9029a6bd41"). InnerVolumeSpecName "kube-api-access-gfhqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
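[Editor's note] The gracePeriod=2 above reflects the short terminationGracePeriodSeconds the marketplace catalog pods run with, versus the 600s used for the machine-config-daemon earlier. A sketch, under the assumption that the grace period comes from the pod spec rather than an override, of how a deletion could force an explicit value with client-go (clientset wiring omitted; namespace and pod name are from the log):

```go
// Sketch: deleting a catalog pod with an explicit grace-period override.
// The value 2 matches the gracePeriod the kubelet logged for this pod.
package podlifecycle

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteCatalogPod(ctx context.Context, cs kubernetes.Interface) error {
	grace := int64(2) // seconds; matches the logged gracePeriod
	return cs.CoreV1().Pods("openshift-marketplace").Delete(ctx,
		"community-operators-qftfp",
		metav1.DeleteOptions{GracePeriodSeconds: &grace})
}
```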
Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.154825 4945 generic.go:334] "Generic (PLEG): container finished" podID="a44e9e6a-740a-4eae-9e09-bc9029a6bd41" containerID="93d3e2cce8b655987d521bc65d77266b2ea427bce99f249ed624b408964adf52" exitCode=0
Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.154877 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qftfp" event={"ID":"a44e9e6a-740a-4eae-9e09-bc9029a6bd41","Type":"ContainerDied","Data":"93d3e2cce8b655987d521bc65d77266b2ea427bce99f249ed624b408964adf52"}
Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.154900 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qftfp"
Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.154924 4945 scope.go:117] "RemoveContainer" containerID="93d3e2cce8b655987d521bc65d77266b2ea427bce99f249ed624b408964adf52"
Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.154911 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qftfp" event={"ID":"a44e9e6a-740a-4eae-9e09-bc9029a6bd41","Type":"ContainerDied","Data":"dca73469c7dc2a26cac371c8a70fbceeb714f7d54e25e17ec44f6a65e8211fc0"}
Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.158903 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a44e9e6a-740a-4eae-9e09-bc9029a6bd41" (UID: "a44e9e6a-740a-4eae-9e09-bc9029a6bd41"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.169538 4945 scope.go:117] "RemoveContainer" containerID="ed1e0d118b39e114d373bfe1b835518da882675c20e607b3fb3dccf89f65dd24"
Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.181747 4945 scope.go:117] "RemoveContainer" containerID="aa58ae19fa33d567bb83f60e0425b2b73582920de40d5706395d3a98b6ebe858"
Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.196866 4945 scope.go:117] "RemoveContainer" containerID="93d3e2cce8b655987d521bc65d77266b2ea427bce99f249ed624b408964adf52"
Jan 08 23:28:33 crc kubenswrapper[4945]: E0108 23:28:33.197347 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93d3e2cce8b655987d521bc65d77266b2ea427bce99f249ed624b408964adf52\": container with ID starting with 93d3e2cce8b655987d521bc65d77266b2ea427bce99f249ed624b408964adf52 not found: ID does not exist" containerID="93d3e2cce8b655987d521bc65d77266b2ea427bce99f249ed624b408964adf52"
Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.197396 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d3e2cce8b655987d521bc65d77266b2ea427bce99f249ed624b408964adf52"} err="failed to get container status \"93d3e2cce8b655987d521bc65d77266b2ea427bce99f249ed624b408964adf52\": rpc error: code = NotFound desc = could not find container \"93d3e2cce8b655987d521bc65d77266b2ea427bce99f249ed624b408964adf52\": container with ID starting with 93d3e2cce8b655987d521bc65d77266b2ea427bce99f249ed624b408964adf52 not found: ID does not exist"
Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.197433 4945 scope.go:117] "RemoveContainer" containerID="ed1e0d118b39e114d373bfe1b835518da882675c20e607b3fb3dccf89f65dd24"
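[Editor's note] The E/I pairs above are benign: by the time the kubelet asks the runtime for status, CRI-O has already deleted the container, so the gRPC call comes back NotFound and the cleanup goal is already met. A sketch of the usual pattern for tolerating that case (the helper name is illustrative, not from the kubelet source):

```go
// Sketch: treat gRPC NotFound from a CRI call as success during cleanup,
// since "container already gone" is the desired end state.
package criutil

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func ignoreNotFound(err error) error {
	if status.Code(err) == codes.NotFound {
		return nil // already deleted; nothing left to do
	}
	return err
}
```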
containerID="ed1e0d118b39e114d373bfe1b835518da882675c20e607b3fb3dccf89f65dd24" Jan 08 23:28:33 crc kubenswrapper[4945]: E0108 23:28:33.197747 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed1e0d118b39e114d373bfe1b835518da882675c20e607b3fb3dccf89f65dd24\": container with ID starting with ed1e0d118b39e114d373bfe1b835518da882675c20e607b3fb3dccf89f65dd24 not found: ID does not exist" containerID="ed1e0d118b39e114d373bfe1b835518da882675c20e607b3fb3dccf89f65dd24" Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.197771 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed1e0d118b39e114d373bfe1b835518da882675c20e607b3fb3dccf89f65dd24"} err="failed to get container status \"ed1e0d118b39e114d373bfe1b835518da882675c20e607b3fb3dccf89f65dd24\": rpc error: code = NotFound desc = could not find container \"ed1e0d118b39e114d373bfe1b835518da882675c20e607b3fb3dccf89f65dd24\": container with ID starting with ed1e0d118b39e114d373bfe1b835518da882675c20e607b3fb3dccf89f65dd24 not found: ID does not exist" Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.197788 4945 scope.go:117] "RemoveContainer" containerID="aa58ae19fa33d567bb83f60e0425b2b73582920de40d5706395d3a98b6ebe858" Jan 08 23:28:33 crc kubenswrapper[4945]: E0108 23:28:33.198152 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa58ae19fa33d567bb83f60e0425b2b73582920de40d5706395d3a98b6ebe858\": container with ID starting with aa58ae19fa33d567bb83f60e0425b2b73582920de40d5706395d3a98b6ebe858 not found: ID does not exist" containerID="aa58ae19fa33d567bb83f60e0425b2b73582920de40d5706395d3a98b6ebe858" Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.198187 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa58ae19fa33d567bb83f60e0425b2b73582920de40d5706395d3a98b6ebe858"} err="failed to get container status \"aa58ae19fa33d567bb83f60e0425b2b73582920de40d5706395d3a98b6ebe858\": rpc error: code = NotFound desc = could not find container \"aa58ae19fa33d567bb83f60e0425b2b73582920de40d5706395d3a98b6ebe858\": container with ID starting with aa58ae19fa33d567bb83f60e0425b2b73582920de40d5706395d3a98b6ebe858 not found: ID does not exist" Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.209907 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfhqp\" (UniqueName: \"kubernetes.io/projected/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-kube-api-access-gfhqp\") on node \"crc\" DevicePath \"\"" Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.209931 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.209942 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a44e9e6a-740a-4eae-9e09-bc9029a6bd41-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.489568 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qftfp"] Jan 08 23:28:33 crc kubenswrapper[4945]: I0108 23:28:33.495052 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qftfp"] Jan 08 23:28:34 crc 
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.289421 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-69jhq"]
Jan 08 23:28:38 crc kubenswrapper[4945]: E0108 23:28:38.290244 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44e9e6a-740a-4eae-9e09-bc9029a6bd41" containerName="registry-server"
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.290278 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44e9e6a-740a-4eae-9e09-bc9029a6bd41" containerName="registry-server"
Jan 08 23:28:38 crc kubenswrapper[4945]: E0108 23:28:38.290311 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44e9e6a-740a-4eae-9e09-bc9029a6bd41" containerName="extract-utilities"
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.290333 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44e9e6a-740a-4eae-9e09-bc9029a6bd41" containerName="extract-utilities"
Jan 08 23:28:38 crc kubenswrapper[4945]: E0108 23:28:38.290365 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44e9e6a-740a-4eae-9e09-bc9029a6bd41" containerName="extract-content"
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.290382 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44e9e6a-740a-4eae-9e09-bc9029a6bd41" containerName="extract-content"
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.290591 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="a44e9e6a-740a-4eae-9e09-bc9029a6bd41" containerName="registry-server"
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.292325 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-69jhq"
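[Editor's note] The container names recurring in these entries (extract-utilities, extract-content, registry-server) reflect the usual OLM catalog pod shape: two init containers stage tooling and catalog data into shared emptyDir volumes, then the registry server serves that content. A rough sketch of that shape only; images are placeholders and nothing below is taken from this cluster's manifests:

```go
// Sketch: the catalog-pod layout implied by the log. The volume names
// "utilities" and "catalog-content" and the empty-dir plugin match the
// mount/unmount entries; images and any commands are placeholders.
package catalogpod

import corev1 "k8s.io/api/core/v1"

func podSpec() corev1.PodSpec {
	emptyDir := corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}
	return corev1.PodSpec{
		InitContainers: []corev1.Container{
			{Name: "extract-utilities", Image: "example/utilities:placeholder"},
			{Name: "extract-content", Image: "example/catalog:placeholder"},
		},
		Containers: []corev1.Container{
			{Name: "registry-server", Image: "example/catalog:placeholder"},
		},
		Volumes: []corev1.Volume{
			{Name: "utilities", VolumeSource: emptyDir},
			{Name: "catalog-content", VolumeSource: emptyDir},
		},
	}
}
```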
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.307816 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-69jhq"]
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.380447 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz7tw\" (UniqueName: \"kubernetes.io/projected/fd54ea84-6f97-45ef-8678-19e877b8b4f1-kube-api-access-fz7tw\") pod \"certified-operators-69jhq\" (UID: \"fd54ea84-6f97-45ef-8678-19e877b8b4f1\") " pod="openshift-marketplace/certified-operators-69jhq"
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.380523 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd54ea84-6f97-45ef-8678-19e877b8b4f1-catalog-content\") pod \"certified-operators-69jhq\" (UID: \"fd54ea84-6f97-45ef-8678-19e877b8b4f1\") " pod="openshift-marketplace/certified-operators-69jhq"
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.380576 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd54ea84-6f97-45ef-8678-19e877b8b4f1-utilities\") pod \"certified-operators-69jhq\" (UID: \"fd54ea84-6f97-45ef-8678-19e877b8b4f1\") " pod="openshift-marketplace/certified-operators-69jhq"
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.482359 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz7tw\" (UniqueName: \"kubernetes.io/projected/fd54ea84-6f97-45ef-8678-19e877b8b4f1-kube-api-access-fz7tw\") pod \"certified-operators-69jhq\" (UID: \"fd54ea84-6f97-45ef-8678-19e877b8b4f1\") " pod="openshift-marketplace/certified-operators-69jhq"
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.482483 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd54ea84-6f97-45ef-8678-19e877b8b4f1-catalog-content\") pod \"certified-operators-69jhq\" (UID: \"fd54ea84-6f97-45ef-8678-19e877b8b4f1\") " pod="openshift-marketplace/certified-operators-69jhq"
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.482586 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd54ea84-6f97-45ef-8678-19e877b8b4f1-utilities\") pod \"certified-operators-69jhq\" (UID: \"fd54ea84-6f97-45ef-8678-19e877b8b4f1\") " pod="openshift-marketplace/certified-operators-69jhq"
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.483629 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd54ea84-6f97-45ef-8678-19e877b8b4f1-utilities\") pod \"certified-operators-69jhq\" (UID: \"fd54ea84-6f97-45ef-8678-19e877b8b4f1\") " pod="openshift-marketplace/certified-operators-69jhq"
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.483867 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd54ea84-6f97-45ef-8678-19e877b8b4f1-catalog-content\") pod \"certified-operators-69jhq\" (UID: \"fd54ea84-6f97-45ef-8678-19e877b8b4f1\") " pod="openshift-marketplace/certified-operators-69jhq"
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.499921 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz7tw\" (UniqueName: \"kubernetes.io/projected/fd54ea84-6f97-45ef-8678-19e877b8b4f1-kube-api-access-fz7tw\") pod \"certified-operators-69jhq\" (UID: \"fd54ea84-6f97-45ef-8678-19e877b8b4f1\") " pod="openshift-marketplace/certified-operators-69jhq"
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.629154 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-69jhq"
Jan 08 23:28:38 crc kubenswrapper[4945]: I0108 23:28:38.895194 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-69jhq"]
Jan 08 23:28:39 crc kubenswrapper[4945]: I0108 23:28:39.193799 4945 generic.go:334] "Generic (PLEG): container finished" podID="fd54ea84-6f97-45ef-8678-19e877b8b4f1" containerID="b0643e87f7344317a795e8162ab6add8d3c45b6f2cbe46a2202884e039748c7c" exitCode=0
Jan 08 23:28:39 crc kubenswrapper[4945]: I0108 23:28:39.193865 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69jhq" event={"ID":"fd54ea84-6f97-45ef-8678-19e877b8b4f1","Type":"ContainerDied","Data":"b0643e87f7344317a795e8162ab6add8d3c45b6f2cbe46a2202884e039748c7c"}
Jan 08 23:28:39 crc kubenswrapper[4945]: I0108 23:28:39.194309 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69jhq" event={"ID":"fd54ea84-6f97-45ef-8678-19e877b8b4f1","Type":"ContainerStarted","Data":"572b8f110cca719df96d2f7d26523375170c7b9417ffdf8cf442a7c220979b44"}
Jan 08 23:28:40 crc kubenswrapper[4945]: I0108 23:28:40.205231 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69jhq" event={"ID":"fd54ea84-6f97-45ef-8678-19e877b8b4f1","Type":"ContainerStarted","Data":"08ae5b69d2fdb9dba72284be3857982cd3ffe837e714c642d26d1f7b9bc179a1"}
Jan 08 23:28:41 crc kubenswrapper[4945]: I0108 23:28:41.212462 4945 generic.go:334] "Generic (PLEG): container finished" podID="fd54ea84-6f97-45ef-8678-19e877b8b4f1" containerID="08ae5b69d2fdb9dba72284be3857982cd3ffe837e714c642d26d1f7b9bc179a1" exitCode=0
Jan 08 23:28:41 crc kubenswrapper[4945]: I0108 23:28:41.212527 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69jhq" event={"ID":"fd54ea84-6f97-45ef-8678-19e877b8b4f1","Type":"ContainerDied","Data":"08ae5b69d2fdb9dba72284be3857982cd3ffe837e714c642d26d1f7b9bc179a1"}
Jan 08 23:28:42 crc kubenswrapper[4945]: I0108 23:28:42.220049 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69jhq" event={"ID":"fd54ea84-6f97-45ef-8678-19e877b8b4f1","Type":"ContainerStarted","Data":"630cbb3340751959bcc0e1e909d374dd14157f7492740ffcc6b0d2bb8a39edf3"}
Jan 08 23:28:42 crc kubenswrapper[4945]: I0108 23:28:42.245728 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-69jhq" podStartSLOduration=1.818106526 podStartE2EDuration="4.245709601s" podCreationTimestamp="2026-01-08 23:28:38 +0000 UTC" firstStartedPulling="2026-01-08 23:28:39.195300949 +0000 UTC m=+789.506459895" lastFinishedPulling="2026-01-08 23:28:41.622904024 +0000 UTC m=+791.934062970" observedRunningTime="2026-01-08 23:28:42.244582152 +0000 UTC m=+792.555741138" watchObservedRunningTime="2026-01-08 23:28:42.245709601 +0000 UTC m=+792.556868547"
Jan 08 23:28:43 crc kubenswrapper[4945]: I0108 23:28:43.578521 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 08 23:28:43 crc kubenswrapper[4945]: I0108 23:28:43.578586 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 08 23:28:43 crc kubenswrapper[4945]: I0108 23:28:43.578637 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95"
Jan 08 23:28:43 crc kubenswrapper[4945]: I0108 23:28:43.579237 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e226cbec52724f8156027ca6ca9d14289ac828814d24940b98909ac9a557fa94"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 08 23:28:43 crc kubenswrapper[4945]: I0108 23:28:43.579309 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://e226cbec52724f8156027ca6ca9d14289ac828814d24940b98909ac9a557fa94" gracePeriod=600
Jan 08 23:28:44 crc kubenswrapper[4945]: I0108 23:28:44.237254 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="e226cbec52724f8156027ca6ca9d14289ac828814d24940b98909ac9a557fa94" exitCode=0
Jan 08 23:28:44 crc kubenswrapper[4945]: I0108 23:28:44.237356 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"e226cbec52724f8156027ca6ca9d14289ac828814d24940b98909ac9a557fa94"}
Jan 08 23:28:44 crc kubenswrapper[4945]: I0108 23:28:44.237852 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"bb86089d7fa453c2e2295e7a4532a4489dac612ea805bc40de7f57ca4589bf0f"}
Jan 08 23:28:44 crc kubenswrapper[4945]: I0108 23:28:44.237891 4945 scope.go:117] "RemoveContainer" containerID="e98a6551c147bc5179d0050676356620bbb4e5029dcbf763406bae9cd08060cb"
Jan 08 23:28:48 crc kubenswrapper[4945]: I0108 23:28:48.629423 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-69jhq"
Jan 08 23:28:48 crc kubenswrapper[4945]: I0108 23:28:48.630167 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-69jhq"
Jan 08 23:28:48 crc kubenswrapper[4945]: I0108 23:28:48.679057 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-69jhq"
Jan 08 23:28:49 crc kubenswrapper[4945]: I0108 23:28:49.341855 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-69jhq"
Jan 08 23:28:49 crc kubenswrapper[4945]: I0108 23:28:49.929430 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-69jhq"]
Jan 08 23:28:51 crc kubenswrapper[4945]: I0108 23:28:51.286049 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-69jhq" podUID="fd54ea84-6f97-45ef-8678-19e877b8b4f1" containerName="registry-server" containerID="cri-o://630cbb3340751959bcc0e1e909d374dd14157f7492740ffcc6b0d2bb8a39edf3" gracePeriod=2
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.223604 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-69jhq"
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.294496 4945 generic.go:334] "Generic (PLEG): container finished" podID="fd54ea84-6f97-45ef-8678-19e877b8b4f1" containerID="630cbb3340751959bcc0e1e909d374dd14157f7492740ffcc6b0d2bb8a39edf3" exitCode=0
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.294548 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69jhq" event={"ID":"fd54ea84-6f97-45ef-8678-19e877b8b4f1","Type":"ContainerDied","Data":"630cbb3340751959bcc0e1e909d374dd14157f7492740ffcc6b0d2bb8a39edf3"}
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.294587 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69jhq" event={"ID":"fd54ea84-6f97-45ef-8678-19e877b8b4f1","Type":"ContainerDied","Data":"572b8f110cca719df96d2f7d26523375170c7b9417ffdf8cf442a7c220979b44"}
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.294617 4945 scope.go:117] "RemoveContainer" containerID="630cbb3340751959bcc0e1e909d374dd14157f7492740ffcc6b0d2bb8a39edf3"
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.294659 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-69jhq"
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.313902 4945 scope.go:117] "RemoveContainer" containerID="08ae5b69d2fdb9dba72284be3857982cd3ffe837e714c642d26d1f7b9bc179a1"
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.336087 4945 scope.go:117] "RemoveContainer" containerID="b0643e87f7344317a795e8162ab6add8d3c45b6f2cbe46a2202884e039748c7c"
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.336212 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz7tw\" (UniqueName: \"kubernetes.io/projected/fd54ea84-6f97-45ef-8678-19e877b8b4f1-kube-api-access-fz7tw\") pod \"fd54ea84-6f97-45ef-8678-19e877b8b4f1\" (UID: \"fd54ea84-6f97-45ef-8678-19e877b8b4f1\") "
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.336306 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd54ea84-6f97-45ef-8678-19e877b8b4f1-catalog-content\") pod \"fd54ea84-6f97-45ef-8678-19e877b8b4f1\" (UID: \"fd54ea84-6f97-45ef-8678-19e877b8b4f1\") "
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.336361 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd54ea84-6f97-45ef-8678-19e877b8b4f1-utilities\") pod \"fd54ea84-6f97-45ef-8678-19e877b8b4f1\" (UID: \"fd54ea84-6f97-45ef-8678-19e877b8b4f1\") "
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.338105 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd54ea84-6f97-45ef-8678-19e877b8b4f1-utilities" (OuterVolumeSpecName: "utilities") pod "fd54ea84-6f97-45ef-8678-19e877b8b4f1" (UID: "fd54ea84-6f97-45ef-8678-19e877b8b4f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.359479 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd54ea84-6f97-45ef-8678-19e877b8b4f1-kube-api-access-fz7tw" (OuterVolumeSpecName: "kube-api-access-fz7tw") pod "fd54ea84-6f97-45ef-8678-19e877b8b4f1" (UID: "fd54ea84-6f97-45ef-8678-19e877b8b4f1"). InnerVolumeSpecName "kube-api-access-fz7tw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.406469 4945 scope.go:117] "RemoveContainer" containerID="630cbb3340751959bcc0e1e909d374dd14157f7492740ffcc6b0d2bb8a39edf3"
Jan 08 23:28:52 crc kubenswrapper[4945]: E0108 23:28:52.407102 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"630cbb3340751959bcc0e1e909d374dd14157f7492740ffcc6b0d2bb8a39edf3\": container with ID starting with 630cbb3340751959bcc0e1e909d374dd14157f7492740ffcc6b0d2bb8a39edf3 not found: ID does not exist" containerID="630cbb3340751959bcc0e1e909d374dd14157f7492740ffcc6b0d2bb8a39edf3"
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.407139 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"630cbb3340751959bcc0e1e909d374dd14157f7492740ffcc6b0d2bb8a39edf3"} err="failed to get container status \"630cbb3340751959bcc0e1e909d374dd14157f7492740ffcc6b0d2bb8a39edf3\": rpc error: code = NotFound desc = could not find container \"630cbb3340751959bcc0e1e909d374dd14157f7492740ffcc6b0d2bb8a39edf3\": container with ID starting with 630cbb3340751959bcc0e1e909d374dd14157f7492740ffcc6b0d2bb8a39edf3 not found: ID does not exist"
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.407163 4945 scope.go:117] "RemoveContainer" containerID="08ae5b69d2fdb9dba72284be3857982cd3ffe837e714c642d26d1f7b9bc179a1"
Jan 08 23:28:52 crc kubenswrapper[4945]: E0108 23:28:52.407542 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08ae5b69d2fdb9dba72284be3857982cd3ffe837e714c642d26d1f7b9bc179a1\": container with ID starting with 08ae5b69d2fdb9dba72284be3857982cd3ffe837e714c642d26d1f7b9bc179a1 not found: ID does not exist" containerID="08ae5b69d2fdb9dba72284be3857982cd3ffe837e714c642d26d1f7b9bc179a1"
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.407589 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08ae5b69d2fdb9dba72284be3857982cd3ffe837e714c642d26d1f7b9bc179a1"} err="failed to get container status \"08ae5b69d2fdb9dba72284be3857982cd3ffe837e714c642d26d1f7b9bc179a1\": rpc error: code = NotFound desc = could not find container \"08ae5b69d2fdb9dba72284be3857982cd3ffe837e714c642d26d1f7b9bc179a1\": container with ID starting with 08ae5b69d2fdb9dba72284be3857982cd3ffe837e714c642d26d1f7b9bc179a1 not found: ID does not exist"
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.407624 4945 scope.go:117] "RemoveContainer" containerID="b0643e87f7344317a795e8162ab6add8d3c45b6f2cbe46a2202884e039748c7c"
Jan 08 23:28:52 crc kubenswrapper[4945]: E0108 23:28:52.408027 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0643e87f7344317a795e8162ab6add8d3c45b6f2cbe46a2202884e039748c7c\": container with ID starting with b0643e87f7344317a795e8162ab6add8d3c45b6f2cbe46a2202884e039748c7c not found: ID does not exist" containerID="b0643e87f7344317a795e8162ab6add8d3c45b6f2cbe46a2202884e039748c7c"
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.408060 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0643e87f7344317a795e8162ab6add8d3c45b6f2cbe46a2202884e039748c7c"} err="failed to get container status \"b0643e87f7344317a795e8162ab6add8d3c45b6f2cbe46a2202884e039748c7c\": rpc error: code = NotFound desc = could not find container \"b0643e87f7344317a795e8162ab6add8d3c45b6f2cbe46a2202884e039748c7c\": container with ID starting with b0643e87f7344317a795e8162ab6add8d3c45b6f2cbe46a2202884e039748c7c not found: ID does not exist"
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.421834 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd54ea84-6f97-45ef-8678-19e877b8b4f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd54ea84-6f97-45ef-8678-19e877b8b4f1" (UID: "fd54ea84-6f97-45ef-8678-19e877b8b4f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.437843 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz7tw\" (UniqueName: \"kubernetes.io/projected/fd54ea84-6f97-45ef-8678-19e877b8b4f1-kube-api-access-fz7tw\") on node \"crc\" DevicePath \"\""
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.437902 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd54ea84-6f97-45ef-8678-19e877b8b4f1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.437919 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd54ea84-6f97-45ef-8678-19e877b8b4f1-utilities\") on node \"crc\" DevicePath \"\""
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.630822 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-69jhq"]
Jan 08 23:28:52 crc kubenswrapper[4945]: I0108 23:28:52.635039 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-69jhq"]
Jan 08 23:28:54 crc kubenswrapper[4945]: I0108 23:28:54.014344 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd54ea84-6f97-45ef-8678-19e877b8b4f1" path="/var/lib/kubelet/pods/fd54ea84-6f97-45ef-8678-19e877b8b4f1/volumes"
Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.258715 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-457wb"]
Jan 08 23:29:43 crc kubenswrapper[4945]: E0108 23:29:43.260206 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd54ea84-6f97-45ef-8678-19e877b8b4f1" containerName="registry-server"
Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.260241 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd54ea84-6f97-45ef-8678-19e877b8b4f1" containerName="registry-server"
Jan 08 23:29:43 crc kubenswrapper[4945]: E0108 23:29:43.260272 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd54ea84-6f97-45ef-8678-19e877b8b4f1" containerName="extract-utilities"
Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.260289 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd54ea84-6f97-45ef-8678-19e877b8b4f1" containerName="extract-utilities"
Jan 08 23:29:43 crc kubenswrapper[4945]: E0108 23:29:43.260343 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd54ea84-6f97-45ef-8678-19e877b8b4f1" containerName="extract-content"
Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.260360 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd54ea84-6f97-45ef-8678-19e877b8b4f1" containerName="extract-content"
Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.260621 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd54ea84-6f97-45ef-8678-19e877b8b4f1" containerName="registry-server"
Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.262656 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-457wb"
Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.274809 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-457wb"]
Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.383916 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l7pk\" (UniqueName: \"kubernetes.io/projected/a3c2391c-01dc-4321-9090-6196b0073bf9-kube-api-access-7l7pk\") pod \"redhat-marketplace-457wb\" (UID: \"a3c2391c-01dc-4321-9090-6196b0073bf9\") " pod="openshift-marketplace/redhat-marketplace-457wb"
Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.384378 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c2391c-01dc-4321-9090-6196b0073bf9-utilities\") pod \"redhat-marketplace-457wb\" (UID: \"a3c2391c-01dc-4321-9090-6196b0073bf9\") " pod="openshift-marketplace/redhat-marketplace-457wb"
Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.384519 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c2391c-01dc-4321-9090-6196b0073bf9-catalog-content\") pod \"redhat-marketplace-457wb\" (UID: \"a3c2391c-01dc-4321-9090-6196b0073bf9\") " pod="openshift-marketplace/redhat-marketplace-457wb"
Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.486010 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c2391c-01dc-4321-9090-6196b0073bf9-catalog-content\") pod \"redhat-marketplace-457wb\" (UID: \"a3c2391c-01dc-4321-9090-6196b0073bf9\") " pod="openshift-marketplace/redhat-marketplace-457wb"
Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.486129 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l7pk\" (UniqueName: \"kubernetes.io/projected/a3c2391c-01dc-4321-9090-6196b0073bf9-kube-api-access-7l7pk\") pod \"redhat-marketplace-457wb\" (UID: \"a3c2391c-01dc-4321-9090-6196b0073bf9\") " pod="openshift-marketplace/redhat-marketplace-457wb"
Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.486163 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c2391c-01dc-4321-9090-6196b0073bf9-utilities\") pod \"redhat-marketplace-457wb\" (UID: \"a3c2391c-01dc-4321-9090-6196b0073bf9\") " pod="openshift-marketplace/redhat-marketplace-457wb"
Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.486642 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c2391c-01dc-4321-9090-6196b0073bf9-catalog-content\") pod \"redhat-marketplace-457wb\" (UID: \"a3c2391c-01dc-4321-9090-6196b0073bf9\") " pod="openshift-marketplace/redhat-marketplace-457wb"
Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.487165 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c2391c-01dc-4321-9090-6196b0073bf9-utilities\") pod \"redhat-marketplace-457wb\" (UID: \"a3c2391c-01dc-4321-9090-6196b0073bf9\") " pod="openshift-marketplace/redhat-marketplace-457wb"
\"a3c2391c-01dc-4321-9090-6196b0073bf9\") " pod="openshift-marketplace/redhat-marketplace-457wb" Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.509859 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l7pk\" (UniqueName: \"kubernetes.io/projected/a3c2391c-01dc-4321-9090-6196b0073bf9-kube-api-access-7l7pk\") pod \"redhat-marketplace-457wb\" (UID: \"a3c2391c-01dc-4321-9090-6196b0073bf9\") " pod="openshift-marketplace/redhat-marketplace-457wb" Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.587117 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-457wb" Jan 08 23:29:43 crc kubenswrapper[4945]: I0108 23:29:43.754215 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-457wb"] Jan 08 23:29:44 crc kubenswrapper[4945]: I0108 23:29:44.659213 4945 generic.go:334] "Generic (PLEG): container finished" podID="a3c2391c-01dc-4321-9090-6196b0073bf9" containerID="cd19dbc5d5946996c7de9c9b3ceeb79a1d937c40ef057d8f8843f2ee49498862" exitCode=0 Jan 08 23:29:44 crc kubenswrapper[4945]: I0108 23:29:44.659278 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-457wb" event={"ID":"a3c2391c-01dc-4321-9090-6196b0073bf9","Type":"ContainerDied","Data":"cd19dbc5d5946996c7de9c9b3ceeb79a1d937c40ef057d8f8843f2ee49498862"} Jan 08 23:29:44 crc kubenswrapper[4945]: I0108 23:29:44.659340 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-457wb" event={"ID":"a3c2391c-01dc-4321-9090-6196b0073bf9","Type":"ContainerStarted","Data":"a75f9ac84a8d7a303835054a139ba4d2be4f1624ff49e238d23d1eb5196223ae"} Jan 08 23:29:46 crc kubenswrapper[4945]: I0108 23:29:46.676483 4945 generic.go:334] "Generic (PLEG): container finished" podID="a3c2391c-01dc-4321-9090-6196b0073bf9" containerID="1495801aca89cdaa55e18cf0686a66816d3a0ecd9f270d6134a553690c974cb1" exitCode=0 Jan 08 23:29:46 crc kubenswrapper[4945]: I0108 23:29:46.676552 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-457wb" event={"ID":"a3c2391c-01dc-4321-9090-6196b0073bf9","Type":"ContainerDied","Data":"1495801aca89cdaa55e18cf0686a66816d3a0ecd9f270d6134a553690c974cb1"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.199492 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gcbcl"] Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.200470 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovn-controller" containerID="cri-o://72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4" gracePeriod=30 Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.200682 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="sbdb" containerID="cri-o://f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9" gracePeriod=30 Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.200755 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85" gracePeriod=30 Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.200788 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="kube-rbac-proxy-node" containerID="cri-o://b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd" gracePeriod=30 Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.200846 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="nbdb" containerID="cri-o://e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636" gracePeriod=30 Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.200856 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovn-acl-logging" containerID="cri-o://62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2" gracePeriod=30 Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.200919 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="northd" containerID="cri-o://dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436" gracePeriod=30 Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.256221 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovnkube-controller" containerID="cri-o://e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e" gracePeriod=30 Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.541050 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovnkube-controller/3.log" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.544295 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovn-acl-logging/0.log" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.544931 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovn-controller/0.log" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.545469 4945 util.go:48] "No ready sandbox for pod can be found. 
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.600950 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zth5r"]
Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.601204 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovn-acl-logging"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601219 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovn-acl-logging"
Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.601230 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="kube-rbac-proxy-ovn-metrics"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601239 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="kube-rbac-proxy-ovn-metrics"
Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.601252 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovn-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601261 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovn-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.601276 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovnkube-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601285 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovnkube-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.601295 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="kubecfg-setup"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601304 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="kubecfg-setup"
Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.601318 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovnkube-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601326 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovnkube-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.601362 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="northd"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601371 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="northd"
Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.601381 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovnkube-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601389 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovnkube-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.601400 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="nbdb"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601408 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="nbdb"
Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.601419 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="kube-rbac-proxy-node"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601428 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="kube-rbac-proxy-node"
Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.601440 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovnkube-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601450 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovnkube-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.601463 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="sbdb"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601471 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="sbdb"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601578 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="kube-rbac-proxy-node"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601591 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovnkube-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601604 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="northd"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601614 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovnkube-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601624 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovnkube-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601636 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovn-acl-logging"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601650 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="sbdb"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601661 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovnkube-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601671 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovn-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601680 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="kube-rbac-proxy-ovn-metrics"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601689 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="nbdb"
Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.601806 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovnkube-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601816 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovnkube-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.601933 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerName="ovnkube-controller"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.604628 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.683949 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovnkube-controller/3.log"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.685748 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovn-acl-logging/0.log"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686210 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gcbcl_e12d0822-44c5-4bf0-a785-cf478c66210f/ovn-controller/0.log"
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686521 4945 generic.go:334] "Generic (PLEG): container finished" podID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerID="e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e" exitCode=0
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686552 4945 generic.go:334] "Generic (PLEG): container finished" podID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerID="f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9" exitCode=0
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686564 4945 generic.go:334] "Generic (PLEG): container finished" podID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerID="e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636" exitCode=0
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686576 4945 generic.go:334] "Generic (PLEG): container finished" podID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerID="dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436" exitCode=0
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686587 4945 generic.go:334] "Generic (PLEG): container finished" podID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerID="675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85" exitCode=0
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686596 4945 generic.go:334] "Generic (PLEG): container finished" podID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerID="b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd" exitCode=0
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686604 4945 generic.go:334] "Generic (PLEG): container finished" podID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerID="62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2" exitCode=143
Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686614 4945 generic.go:334] "Generic (PLEG): container finished" podID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerID="72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4" exitCode=143
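[Editor's note] Worth reading alongside the finished-container entries above: exitCode=0 means the container handled SIGTERM and shut down cleanly within the 30s grace period, while exitCode=143 follows the shell convention 128+signal, i.e. the process (here ovn-acl-logging and ovn-controller) was terminated by SIGTERM (15) rather than exiting on its own. A tiny sketch of that decoding:

```go
// Sketch: mapping container exit codes like the 143s above back to the
// terminating signal, using the 128+signal convention.
package exitcodes

import "syscall"

func signalFromExitCode(code int) (syscall.Signal, bool) {
	if code > 128 {
		return syscall.Signal(code - 128), true // 143 -> SIGTERM (15)
	}
	return 0, false // clean exit, not signal-terminated
}
```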
podID="e12d0822-44c5-4bf0-a785-cf478c66210f" containerID="72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4" exitCode=143 Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686661 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerDied","Data":"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686696 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerDied","Data":"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686711 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerDied","Data":"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686726 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerDied","Data":"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686739 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerDied","Data":"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686754 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerDied","Data":"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686768 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686778 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686783 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686789 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686795 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686800 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd"} Jan 08 23:29:47 crc 
kubenswrapper[4945]: I0108 23:29:47.686805 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686811 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686816 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686824 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerDied","Data":"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686832 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686839 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686844 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686849 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686855 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686860 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686865 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686870 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686875 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686880 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258"} Jan 08 23:29:47 crc 
kubenswrapper[4945]: I0108 23:29:47.686887 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerDied","Data":"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686895 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686901 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686906 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686912 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686918 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686923 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686928 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686933 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686939 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686944 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686952 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" event={"ID":"e12d0822-44c5-4bf0-a785-cf478c66210f","Type":"ContainerDied","Data":"cddfdfb0a3634c8df82a1449f3c18bb29e25e559fc3ea4f736a830231b8deadd"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686960 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686967 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.686984 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.687015 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.687022 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.687028 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.687035 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.687056 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.687070 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.687076 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.687090 4945 scope.go:117] "RemoveContainer" containerID="e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.687273 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gcbcl" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.692274 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-457wb" event={"ID":"a3c2391c-01dc-4321-9090-6196b0073bf9","Type":"ContainerStarted","Data":"d88de7ca63bf4dd5dce60b33ce8a4e7ae359d2200fb3b14a2722a0189f428464"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.698240 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dsh4d_0fa9b342-4b22-49db-9022-2dd852e7d835/kube-multus/2.log" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.698925 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dsh4d_0fa9b342-4b22-49db-9022-2dd852e7d835/kube-multus/1.log" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.698964 4945 generic.go:334] "Generic (PLEG): container finished" podID="0fa9b342-4b22-49db-9022-2dd852e7d835" containerID="df77f4c64c58686ccf143a83433aacc0707b1ac8a2d94b795b5f4e382d2142d8" exitCode=2 Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.699024 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dsh4d" event={"ID":"0fa9b342-4b22-49db-9022-2dd852e7d835","Type":"ContainerDied","Data":"df77f4c64c58686ccf143a83433aacc0707b1ac8a2d94b795b5f4e382d2142d8"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.699045 4945 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"614e85201ea3be7b566c1a53f206334251065d8d611ea95071f08d79979fb921"} Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.699364 4945 scope.go:117] "RemoveContainer" containerID="df77f4c64c58686ccf143a83433aacc0707b1ac8a2d94b795b5f4e382d2142d8" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.706009 4945 scope.go:117] "RemoveContainer" containerID="054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.712026 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-457wb" podStartSLOduration=2.207636705 podStartE2EDuration="4.712013484s" podCreationTimestamp="2026-01-08 23:29:43 +0000 UTC" firstStartedPulling="2026-01-08 23:29:44.661392873 +0000 UTC m=+854.972551839" lastFinishedPulling="2026-01-08 23:29:47.165769662 +0000 UTC m=+857.476928618" observedRunningTime="2026-01-08 23:29:47.709563125 +0000 UTC m=+858.020722081" watchObservedRunningTime="2026-01-08 23:29:47.712013484 +0000 UTC m=+858.023172430" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.736212 4945 scope.go:117] "RemoveContainer" containerID="f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744413 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744449 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-openvswitch\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: 
\"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744503 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e12d0822-44c5-4bf0-a785-cf478c66210f-ovn-node-metrics-cert\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744523 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-run-netns\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744565 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-etc-openvswitch\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744559 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744576 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744582 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-cni-netd\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744631 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744640 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744647 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-node-log\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744662 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744703 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-ovnkube-config\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744720 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-node-log" (OuterVolumeSpecName: "node-log") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744727 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-ovnkube-script-lib\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744752 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx7px\" (UniqueName: \"kubernetes.io/projected/e12d0822-44c5-4bf0-a785-cf478c66210f-kube-api-access-vx7px\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744771 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-slash\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744802 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-cni-bin\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744816 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-log-socket\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744835 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-var-lib-openvswitch\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744850 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-kubelet\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744874 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-env-overrides\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744894 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-run-ovn-kubernetes\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744912 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-ovn\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744933 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-systemd-units\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744922 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-slash" (OuterVolumeSpecName: "host-slash") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.744949 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-systemd\") pod \"e12d0822-44c5-4bf0-a785-cf478c66210f\" (UID: \"e12d0822-44c5-4bf0-a785-cf478c66210f\") " Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745225 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745244 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745263 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745260 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-run-systemd\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745278 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745289 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745327 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-log-socket" (OuterVolumeSpecName: "log-socket") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745347 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745331 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745407 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745424 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-run-ovn-kubernetes\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745571 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-cni-netd\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745600 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-var-lib-openvswitch\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745629 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-kubelet\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745652 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-systemd-units\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745659 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745745 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9486ce31-858c-46e8-a262-54345ab91dda-ovnkube-script-lib\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745824 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745871 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-etc-openvswitch\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745894 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77k5m\" (UniqueName: \"kubernetes.io/projected/9486ce31-858c-46e8-a262-54345ab91dda-kube-api-access-77k5m\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745921 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-node-log\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745936 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9486ce31-858c-46e8-a262-54345ab91dda-ovnkube-config\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745955 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-cni-bin\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.745972 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9486ce31-858c-46e8-a262-54345ab91dda-env-overrides\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746003 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/9486ce31-858c-46e8-a262-54345ab91dda-ovn-node-metrics-cert\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746023 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-slash\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746049 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-run-netns\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746071 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-run-ovn\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746086 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-log-socket\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746105 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-run-openvswitch\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746190 4945 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-slash\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746204 4945 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-log-socket\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746213 4945 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746222 4945 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746230 4945 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc 
kubenswrapper[4945]: I0108 23:29:47.746239 4945 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746248 4945 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746267 4945 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746276 4945 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746286 4945 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746296 4945 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746305 4945 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746314 4945 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746322 4945 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746330 4945 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-node-log\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746340 4945 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.746348 4945 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e12d0822-44c5-4bf0-a785-cf478c66210f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.750540 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e12d0822-44c5-4bf0-a785-cf478c66210f-kube-api-access-vx7px" (OuterVolumeSpecName: "kube-api-access-vx7px") pod 
"e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "kube-api-access-vx7px". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.751595 4945 scope.go:117] "RemoveContainer" containerID="e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.751708 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12d0822-44c5-4bf0-a785-cf478c66210f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.760045 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "e12d0822-44c5-4bf0-a785-cf478c66210f" (UID: "e12d0822-44c5-4bf0-a785-cf478c66210f"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.766065 4945 scope.go:117] "RemoveContainer" containerID="dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.780277 4945 scope.go:117] "RemoveContainer" containerID="675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.801141 4945 scope.go:117] "RemoveContainer" containerID="b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.820114 4945 scope.go:117] "RemoveContainer" containerID="62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.837706 4945 scope.go:117] "RemoveContainer" containerID="72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.847261 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-etc-openvswitch\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.847332 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77k5m\" (UniqueName: \"kubernetes.io/projected/9486ce31-858c-46e8-a262-54345ab91dda-kube-api-access-77k5m\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.847364 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-node-log\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.847385 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/9486ce31-858c-46e8-a262-54345ab91dda-ovnkube-config\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.847411 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-cni-bin\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.847433 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9486ce31-858c-46e8-a262-54345ab91dda-env-overrides\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.847458 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9486ce31-858c-46e8-a262-54345ab91dda-ovn-node-metrics-cert\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.847454 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-etc-openvswitch\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.847483 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-slash\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.847821 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-run-netns\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.847867 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-run-ovn\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.847890 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-log-socket\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.847916 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-run-openvswitch\") pod \"ovnkube-node-zth5r\" (UID: 
\"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.847949 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-run-systemd\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.847981 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-run-ovn-kubernetes\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848131 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-cni-netd\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848156 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-var-lib-openvswitch\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848180 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-kubelet\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848202 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-systemd-units\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848242 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9486ce31-858c-46e8-a262-54345ab91dda-ovnkube-script-lib\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848266 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848369 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848402 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-slash\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848425 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-run-netns\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848447 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-run-ovn\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848465 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx7px\" (UniqueName: \"kubernetes.io/projected/e12d0822-44c5-4bf0-a785-cf478c66210f-kube-api-access-vx7px\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848487 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-log-socket\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848508 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-run-openvswitch\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848529 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-run-systemd\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848521 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9486ce31-858c-46e8-a262-54345ab91dda-env-overrides\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848543 4945 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e12d0822-44c5-4bf0-a785-cf478c66210f-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848567 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-var-lib-openvswitch\") pod \"ovnkube-node-zth5r\" (UID: 
\"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848599 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-cni-netd\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848708 4945 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e12d0822-44c5-4bf0-a785-cf478c66210f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848780 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-node-log\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848815 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-run-ovn-kubernetes\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848854 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-cni-bin\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.848895 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-host-kubelet\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.849063 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9486ce31-858c-46e8-a262-54345ab91dda-systemd-units\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.849086 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9486ce31-858c-46e8-a262-54345ab91dda-ovnkube-config\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.849735 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9486ce31-858c-46e8-a262-54345ab91dda-ovnkube-script-lib\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.851944 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/9486ce31-858c-46e8-a262-54345ab91dda-ovn-node-metrics-cert\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.853795 4945 scope.go:117] "RemoveContainer" containerID="4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.864697 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77k5m\" (UniqueName: \"kubernetes.io/projected/9486ce31-858c-46e8-a262-54345ab91dda-kube-api-access-77k5m\") pod \"ovnkube-node-zth5r\" (UID: \"9486ce31-858c-46e8-a262-54345ab91dda\") " pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.877319 4945 scope.go:117] "RemoveContainer" containerID="e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e" Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.877805 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e\": container with ID starting with e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e not found: ID does not exist" containerID="e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.877852 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e"} err="failed to get container status \"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e\": rpc error: code = NotFound desc = could not find container \"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e\": container with ID starting with e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.877876 4945 scope.go:117] "RemoveContainer" containerID="054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d" Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.878426 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d\": container with ID starting with 054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d not found: ID does not exist" containerID="054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.878485 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d"} err="failed to get container status \"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d\": rpc error: code = NotFound desc = could not find container \"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d\": container with ID starting with 054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.878523 4945 scope.go:117] "RemoveContainer" containerID="f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9" Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.879879 4945 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\": container with ID starting with f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9 not found: ID does not exist" containerID="f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.879922 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9"} err="failed to get container status \"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\": rpc error: code = NotFound desc = could not find container \"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\": container with ID starting with f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.879948 4945 scope.go:117] "RemoveContainer" containerID="e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636" Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.880719 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\": container with ID starting with e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636 not found: ID does not exist" containerID="e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.880754 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636"} err="failed to get container status \"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\": rpc error: code = NotFound desc = could not find container \"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\": container with ID starting with e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.880776 4945 scope.go:117] "RemoveContainer" containerID="dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436" Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.881157 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\": container with ID starting with dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436 not found: ID does not exist" containerID="dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.881190 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436"} err="failed to get container status \"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\": rpc error: code = NotFound desc = could not find container \"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\": container with ID starting with dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.881212 4945 scope.go:117] "RemoveContainer" 
containerID="675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85" Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.881481 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\": container with ID starting with 675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85 not found: ID does not exist" containerID="675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.881515 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85"} err="failed to get container status \"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\": rpc error: code = NotFound desc = could not find container \"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\": container with ID starting with 675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.881536 4945 scope.go:117] "RemoveContainer" containerID="b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd" Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.881945 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\": container with ID starting with b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd not found: ID does not exist" containerID="b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.881967 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd"} err="failed to get container status \"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\": rpc error: code = NotFound desc = could not find container \"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\": container with ID starting with b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.881981 4945 scope.go:117] "RemoveContainer" containerID="62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2" Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.882831 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\": container with ID starting with 62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2 not found: ID does not exist" containerID="62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.882879 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2"} err="failed to get container status \"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\": rpc error: code = NotFound desc = could not find container \"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\": container with ID starting with 
62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.882907 4945 scope.go:117] "RemoveContainer" containerID="72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4" Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.883332 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\": container with ID starting with 72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4 not found: ID does not exist" containerID="72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.883371 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4"} err="failed to get container status \"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\": rpc error: code = NotFound desc = could not find container \"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\": container with ID starting with 72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.883391 4945 scope.go:117] "RemoveContainer" containerID="4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258" Jan 08 23:29:47 crc kubenswrapper[4945]: E0108 23:29:47.883731 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\": container with ID starting with 4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258 not found: ID does not exist" containerID="4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.883775 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258"} err="failed to get container status \"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\": rpc error: code = NotFound desc = could not find container \"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\": container with ID starting with 4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.883802 4945 scope.go:117] "RemoveContainer" containerID="e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.884612 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e"} err="failed to get container status \"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e\": rpc error: code = NotFound desc = could not find container \"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e\": container with ID starting with e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.884630 4945 scope.go:117] "RemoveContainer" containerID="054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d" Jan 08 23:29:47 crc 
kubenswrapper[4945]: I0108 23:29:47.884983 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d"} err="failed to get container status \"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d\": rpc error: code = NotFound desc = could not find container \"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d\": container with ID starting with 054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.885076 4945 scope.go:117] "RemoveContainer" containerID="f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.885402 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9"} err="failed to get container status \"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\": rpc error: code = NotFound desc = could not find container \"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\": container with ID starting with f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.885422 4945 scope.go:117] "RemoveContainer" containerID="e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.885922 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636"} err="failed to get container status \"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\": rpc error: code = NotFound desc = could not find container \"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\": container with ID starting with e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.885988 4945 scope.go:117] "RemoveContainer" containerID="dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.890108 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436"} err="failed to get container status \"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\": rpc error: code = NotFound desc = could not find container \"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\": container with ID starting with dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.890135 4945 scope.go:117] "RemoveContainer" containerID="675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.890609 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85"} err="failed to get container status \"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\": rpc error: code = NotFound desc = could not find container \"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\": container with ID 
starting with 675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.890647 4945 scope.go:117] "RemoveContainer" containerID="b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.893155 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd"} err="failed to get container status \"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\": rpc error: code = NotFound desc = could not find container \"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\": container with ID starting with b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.893205 4945 scope.go:117] "RemoveContainer" containerID="62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.893574 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2"} err="failed to get container status \"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\": rpc error: code = NotFound desc = could not find container \"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\": container with ID starting with 62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.893597 4945 scope.go:117] "RemoveContainer" containerID="72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.893849 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4"} err="failed to get container status \"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\": rpc error: code = NotFound desc = could not find container \"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\": container with ID starting with 72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.893888 4945 scope.go:117] "RemoveContainer" containerID="4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.894153 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258"} err="failed to get container status \"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\": rpc error: code = NotFound desc = could not find container \"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\": container with ID starting with 4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.894176 4945 scope.go:117] "RemoveContainer" containerID="e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.894347 4945 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e"} err="failed to get container status \"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e\": rpc error: code = NotFound desc = could not find container \"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e\": container with ID starting with e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.894363 4945 scope.go:117] "RemoveContainer" containerID="054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.894520 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d"} err="failed to get container status \"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d\": rpc error: code = NotFound desc = could not find container \"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d\": container with ID starting with 054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.894540 4945 scope.go:117] "RemoveContainer" containerID="f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.894700 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9"} err="failed to get container status \"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\": rpc error: code = NotFound desc = could not find container \"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\": container with ID starting with f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.894713 4945 scope.go:117] "RemoveContainer" containerID="e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.894863 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636"} err="failed to get container status \"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\": rpc error: code = NotFound desc = could not find container \"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\": container with ID starting with e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.894879 4945 scope.go:117] "RemoveContainer" containerID="dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.895038 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436"} err="failed to get container status \"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\": rpc error: code = NotFound desc = could not find container \"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\": container with ID starting with dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436 not found: ID does not exist" Jan 
08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.895051 4945 scope.go:117] "RemoveContainer" containerID="675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.895257 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85"} err="failed to get container status \"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\": rpc error: code = NotFound desc = could not find container \"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\": container with ID starting with 675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.895271 4945 scope.go:117] "RemoveContainer" containerID="b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.895441 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd"} err="failed to get container status \"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\": rpc error: code = NotFound desc = could not find container \"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\": container with ID starting with b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.895463 4945 scope.go:117] "RemoveContainer" containerID="62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.895823 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2"} err="failed to get container status \"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\": rpc error: code = NotFound desc = could not find container \"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\": container with ID starting with 62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.895843 4945 scope.go:117] "RemoveContainer" containerID="72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.896098 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4"} err="failed to get container status \"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\": rpc error: code = NotFound desc = could not find container \"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\": container with ID starting with 72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.896145 4945 scope.go:117] "RemoveContainer" containerID="4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.896426 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258"} err="failed to get container status 
\"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\": rpc error: code = NotFound desc = could not find container \"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\": container with ID starting with 4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.896447 4945 scope.go:117] "RemoveContainer" containerID="e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.896694 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e"} err="failed to get container status \"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e\": rpc error: code = NotFound desc = could not find container \"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e\": container with ID starting with e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.896728 4945 scope.go:117] "RemoveContainer" containerID="054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.897039 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d"} err="failed to get container status \"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d\": rpc error: code = NotFound desc = could not find container \"054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d\": container with ID starting with 054651e6d76589daa8659120930bcfc12bd8ec5d02dbb3c787481d8d43db807d not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.897056 4945 scope.go:117] "RemoveContainer" containerID="f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.897417 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9"} err="failed to get container status \"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\": rpc error: code = NotFound desc = could not find container \"f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9\": container with ID starting with f132c772a20d9cfad83d2c382877f8dd35d9e9750733142973a036b1790c28a9 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.897433 4945 scope.go:117] "RemoveContainer" containerID="e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.897669 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636"} err="failed to get container status \"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\": rpc error: code = NotFound desc = could not find container \"e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636\": container with ID starting with e2381477cb357e0c1718ec425cc1c1f378ed8839c7ff32ae259a3851ad604636 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.897708 4945 scope.go:117] "RemoveContainer" 
containerID="dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.897965 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436"} err="failed to get container status \"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\": rpc error: code = NotFound desc = could not find container \"dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436\": container with ID starting with dd888054f96eeaf0e994f7df5bd8141d4b30b7165ec23e118e5dc9a5e3555436 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.897981 4945 scope.go:117] "RemoveContainer" containerID="675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.898208 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85"} err="failed to get container status \"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\": rpc error: code = NotFound desc = could not find container \"675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85\": container with ID starting with 675e92a5321a2541d4d574fbe58fff7b7fbe9849e7a311c95942ef20eed44c85 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.898235 4945 scope.go:117] "RemoveContainer" containerID="b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.898458 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd"} err="failed to get container status \"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\": rpc error: code = NotFound desc = could not find container \"b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd\": container with ID starting with b57a0f5aa784faf332df1abc967f55e5648b15ae06304f0fdc706d79530c6ebd not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.898490 4945 scope.go:117] "RemoveContainer" containerID="62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.898851 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2"} err="failed to get container status \"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\": rpc error: code = NotFound desc = could not find container \"62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2\": container with ID starting with 62ea7229e0a2d037c20646e96e6a451d79ba61cbe637cc8b9ebdb5e9306500b2 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.898874 4945 scope.go:117] "RemoveContainer" containerID="72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.899184 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4"} err="failed to get container status \"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\": rpc error: code = NotFound desc = could not find 
container \"72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4\": container with ID starting with 72211f14a02dcb0f89bef7bdd47416b5007f0b173a788ae487c36e7bd8be37f4 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.899220 4945 scope.go:117] "RemoveContainer" containerID="4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.899452 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258"} err="failed to get container status \"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\": rpc error: code = NotFound desc = could not find container \"4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258\": container with ID starting with 4d6a342a1272b5d43129626056ee3623143cfc99de35adcf14774734c5e21258 not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.899477 4945 scope.go:117] "RemoveContainer" containerID="e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.899774 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e"} err="failed to get container status \"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e\": rpc error: code = NotFound desc = could not find container \"e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e\": container with ID starting with e2d402dd8fba5d9b14746440e26315dd9a5a0166839bd82e27bc4ab6f457ac1e not found: ID does not exist" Jan 08 23:29:47 crc kubenswrapper[4945]: I0108 23:29:47.917449 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:48 crc kubenswrapper[4945]: I0108 23:29:48.014776 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gcbcl"] Jan 08 23:29:48 crc kubenswrapper[4945]: I0108 23:29:48.017835 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gcbcl"] Jan 08 23:29:48 crc kubenswrapper[4945]: I0108 23:29:48.707160 4945 generic.go:334] "Generic (PLEG): container finished" podID="9486ce31-858c-46e8-a262-54345ab91dda" containerID="54590d713624aa8223ed6be16ce209256095b51ebc3fc8a979b44cc6a19757bd" exitCode=0 Jan 08 23:29:48 crc kubenswrapper[4945]: I0108 23:29:48.707227 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" event={"ID":"9486ce31-858c-46e8-a262-54345ab91dda","Type":"ContainerDied","Data":"54590d713624aa8223ed6be16ce209256095b51ebc3fc8a979b44cc6a19757bd"} Jan 08 23:29:48 crc kubenswrapper[4945]: I0108 23:29:48.707548 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" event={"ID":"9486ce31-858c-46e8-a262-54345ab91dda","Type":"ContainerStarted","Data":"ab3f137750672af50eb92f3c6ab0db2ad5b7d1f649a3b9b268706030defa668c"} Jan 08 23:29:48 crc kubenswrapper[4945]: I0108 23:29:48.710745 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dsh4d_0fa9b342-4b22-49db-9022-2dd852e7d835/kube-multus/2.log" Jan 08 23:29:48 crc kubenswrapper[4945]: I0108 23:29:48.711228 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dsh4d_0fa9b342-4b22-49db-9022-2dd852e7d835/kube-multus/1.log" Jan 08 23:29:48 crc kubenswrapper[4945]: I0108 23:29:48.711368 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dsh4d" event={"ID":"0fa9b342-4b22-49db-9022-2dd852e7d835","Type":"ContainerStarted","Data":"4a2247dca312dc4ba9c6ebb9959f2724e7bc2d687af7ec08fc2082686a7a4efa"} Jan 08 23:29:49 crc kubenswrapper[4945]: I0108 23:29:49.720986 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" event={"ID":"9486ce31-858c-46e8-a262-54345ab91dda","Type":"ContainerStarted","Data":"1242264c5197a9ee7f523e3e585c76eaf10b5b1d72dc5013bd6e037ef8885ef1"} Jan 08 23:29:49 crc kubenswrapper[4945]: I0108 23:29:49.721714 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" event={"ID":"9486ce31-858c-46e8-a262-54345ab91dda","Type":"ContainerStarted","Data":"d8c65af59293f3ed9540d64a7df3fe0ea93431a5f87e7319ac9fc3d167773f7a"} Jan 08 23:29:49 crc kubenswrapper[4945]: I0108 23:29:49.721724 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" event={"ID":"9486ce31-858c-46e8-a262-54345ab91dda","Type":"ContainerStarted","Data":"6dfa766a877c2ceec13853831851e66d784e22c5cd400562a6f573f053139b77"} Jan 08 23:29:49 crc kubenswrapper[4945]: I0108 23:29:49.721732 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" event={"ID":"9486ce31-858c-46e8-a262-54345ab91dda","Type":"ContainerStarted","Data":"e8bd327cc3da9324be9ce56427e667878918c71604dffe4a4a864e1d1131b198"} Jan 08 23:29:49 crc kubenswrapper[4945]: I0108 23:29:49.721740 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" 
event={"ID":"9486ce31-858c-46e8-a262-54345ab91dda","Type":"ContainerStarted","Data":"58fc3743b479db8d768a4c84a268493a36a3988b5b75b3baf07cebd830b5d127"} Jan 08 23:29:49 crc kubenswrapper[4945]: I0108 23:29:49.721750 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" event={"ID":"9486ce31-858c-46e8-a262-54345ab91dda","Type":"ContainerStarted","Data":"36ec797a4f25cb724e3155361db9209d81df105c11a5da94cdfec2fbc1f48180"} Jan 08 23:29:50 crc kubenswrapper[4945]: I0108 23:29:50.007046 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e12d0822-44c5-4bf0-a785-cf478c66210f" path="/var/lib/kubelet/pods/e12d0822-44c5-4bf0-a785-cf478c66210f/volumes" Jan 08 23:29:52 crc kubenswrapper[4945]: I0108 23:29:52.746804 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" event={"ID":"9486ce31-858c-46e8-a262-54345ab91dda","Type":"ContainerStarted","Data":"1f2234dbd9094bc75abc3f65adba725fd4b8c2918950a96659095f997a65f083"} Jan 08 23:29:53 crc kubenswrapper[4945]: I0108 23:29:53.588330 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-457wb" Jan 08 23:29:53 crc kubenswrapper[4945]: I0108 23:29:53.588405 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-457wb" Jan 08 23:29:53 crc kubenswrapper[4945]: I0108 23:29:53.630410 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-457wb" Jan 08 23:29:53 crc kubenswrapper[4945]: I0108 23:29:53.789701 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-457wb" Jan 08 23:29:53 crc kubenswrapper[4945]: I0108 23:29:53.870562 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-457wb"] Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.290291 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-76w4z"] Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.291483 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.293735 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.294403 4945 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-vxd97" Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.295371 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.298296 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.374737 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-node-mnt\") pod \"crc-storage-crc-76w4z\" (UID: \"e424d84a-0bbd-48ba-aec1-e2fecd4578f1\") " pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.374811 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn6fc\" (UniqueName: \"kubernetes.io/projected/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-kube-api-access-pn6fc\") pod \"crc-storage-crc-76w4z\" (UID: \"e424d84a-0bbd-48ba-aec1-e2fecd4578f1\") " pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.374859 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-crc-storage\") pod \"crc-storage-crc-76w4z\" (UID: \"e424d84a-0bbd-48ba-aec1-e2fecd4578f1\") " pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.476297 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-node-mnt\") pod \"crc-storage-crc-76w4z\" (UID: \"e424d84a-0bbd-48ba-aec1-e2fecd4578f1\") " pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.476350 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn6fc\" (UniqueName: \"kubernetes.io/projected/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-kube-api-access-pn6fc\") pod \"crc-storage-crc-76w4z\" (UID: \"e424d84a-0bbd-48ba-aec1-e2fecd4578f1\") " pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.476380 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-crc-storage\") pod \"crc-storage-crc-76w4z\" (UID: \"e424d84a-0bbd-48ba-aec1-e2fecd4578f1\") " pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.476631 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-node-mnt\") pod \"crc-storage-crc-76w4z\" (UID: \"e424d84a-0bbd-48ba-aec1-e2fecd4578f1\") " pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.477107 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-crc-storage\") pod \"crc-storage-crc-76w4z\" (UID: \"e424d84a-0bbd-48ba-aec1-e2fecd4578f1\") " pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.498072 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn6fc\" (UniqueName: \"kubernetes.io/projected/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-kube-api-access-pn6fc\") pod \"crc-storage-crc-76w4z\" (UID: \"e424d84a-0bbd-48ba-aec1-e2fecd4578f1\") " pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.604948 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:29:55 crc kubenswrapper[4945]: E0108 23:29:55.641678 4945 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-76w4z_crc-storage_e424d84a-0bbd-48ba-aec1-e2fecd4578f1_0(7a6c3bdec4c65aab347493f7ce075e4b5615e048a4744aac93f4b81a8752196b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 08 23:29:55 crc kubenswrapper[4945]: E0108 23:29:55.641763 4945 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-76w4z_crc-storage_e424d84a-0bbd-48ba-aec1-e2fecd4578f1_0(7a6c3bdec4c65aab347493f7ce075e4b5615e048a4744aac93f4b81a8752196b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:29:55 crc kubenswrapper[4945]: E0108 23:29:55.641792 4945 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-76w4z_crc-storage_e424d84a-0bbd-48ba-aec1-e2fecd4578f1_0(7a6c3bdec4c65aab347493f7ce075e4b5615e048a4744aac93f4b81a8752196b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:29:55 crc kubenswrapper[4945]: E0108 23:29:55.641852 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-76w4z_crc-storage(e424d84a-0bbd-48ba-aec1-e2fecd4578f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-76w4z_crc-storage(e424d84a-0bbd-48ba-aec1-e2fecd4578f1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-76w4z_crc-storage_e424d84a-0bbd-48ba-aec1-e2fecd4578f1_0(7a6c3bdec4c65aab347493f7ce075e4b5615e048a4744aac93f4b81a8752196b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-76w4z" podUID="e424d84a-0bbd-48ba-aec1-e2fecd4578f1" Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.768015 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" event={"ID":"9486ce31-858c-46e8-a262-54345ab91dda","Type":"ContainerStarted","Data":"da15c409af7bfe42875451f5b4449f95380b3fcd989592c3af925c80a1b86b5c"} Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.768244 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-457wb" podUID="a3c2391c-01dc-4321-9090-6196b0073bf9" containerName="registry-server" containerID="cri-o://d88de7ca63bf4dd5dce60b33ce8a4e7ae359d2200fb3b14a2722a0189f428464" gracePeriod=2 Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.807747 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" podStartSLOduration=8.807730925 podStartE2EDuration="8.807730925s" podCreationTimestamp="2026-01-08 23:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:29:55.803327728 +0000 UTC m=+866.114486674" watchObservedRunningTime="2026-01-08 23:29:55.807730925 +0000 UTC m=+866.118889861" Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.902341 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-76w4z"] Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.902508 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:29:55 crc kubenswrapper[4945]: I0108 23:29:55.903104 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:29:55 crc kubenswrapper[4945]: E0108 23:29:55.927620 4945 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-76w4z_crc-storage_e424d84a-0bbd-48ba-aec1-e2fecd4578f1_0(eb1035ee40722932b00b597ec52493fbeb0f328c30d6be7d4212364856181d39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 08 23:29:55 crc kubenswrapper[4945]: E0108 23:29:55.927687 4945 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-76w4z_crc-storage_e424d84a-0bbd-48ba-aec1-e2fecd4578f1_0(eb1035ee40722932b00b597ec52493fbeb0f328c30d6be7d4212364856181d39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:29:55 crc kubenswrapper[4945]: E0108 23:29:55.927708 4945 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-76w4z_crc-storage_e424d84a-0bbd-48ba-aec1-e2fecd4578f1_0(eb1035ee40722932b00b597ec52493fbeb0f328c30d6be7d4212364856181d39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:29:55 crc kubenswrapper[4945]: E0108 23:29:55.927760 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-76w4z_crc-storage(e424d84a-0bbd-48ba-aec1-e2fecd4578f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-76w4z_crc-storage(e424d84a-0bbd-48ba-aec1-e2fecd4578f1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-76w4z_crc-storage_e424d84a-0bbd-48ba-aec1-e2fecd4578f1_0(eb1035ee40722932b00b597ec52493fbeb0f328c30d6be7d4212364856181d39): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-76w4z" podUID="e424d84a-0bbd-48ba-aec1-e2fecd4578f1" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.498259 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-457wb" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.590989 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l7pk\" (UniqueName: \"kubernetes.io/projected/a3c2391c-01dc-4321-9090-6196b0073bf9-kube-api-access-7l7pk\") pod \"a3c2391c-01dc-4321-9090-6196b0073bf9\" (UID: \"a3c2391c-01dc-4321-9090-6196b0073bf9\") " Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.591119 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c2391c-01dc-4321-9090-6196b0073bf9-catalog-content\") pod \"a3c2391c-01dc-4321-9090-6196b0073bf9\" (UID: \"a3c2391c-01dc-4321-9090-6196b0073bf9\") " Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.591146 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c2391c-01dc-4321-9090-6196b0073bf9-utilities\") pod \"a3c2391c-01dc-4321-9090-6196b0073bf9\" (UID: \"a3c2391c-01dc-4321-9090-6196b0073bf9\") " Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.591929 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3c2391c-01dc-4321-9090-6196b0073bf9-utilities" (OuterVolumeSpecName: "utilities") pod "a3c2391c-01dc-4321-9090-6196b0073bf9" (UID: "a3c2391c-01dc-4321-9090-6196b0073bf9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.599747 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c2391c-01dc-4321-9090-6196b0073bf9-kube-api-access-7l7pk" (OuterVolumeSpecName: "kube-api-access-7l7pk") pod "a3c2391c-01dc-4321-9090-6196b0073bf9" (UID: "a3c2391c-01dc-4321-9090-6196b0073bf9"). InnerVolumeSpecName "kube-api-access-7l7pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.612935 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3c2391c-01dc-4321-9090-6196b0073bf9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3c2391c-01dc-4321-9090-6196b0073bf9" (UID: "a3c2391c-01dc-4321-9090-6196b0073bf9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.692150 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l7pk\" (UniqueName: \"kubernetes.io/projected/a3c2391c-01dc-4321-9090-6196b0073bf9-kube-api-access-7l7pk\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.692596 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c2391c-01dc-4321-9090-6196b0073bf9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.692609 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c2391c-01dc-4321-9090-6196b0073bf9-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.774977 4945 generic.go:334] "Generic (PLEG): container finished" podID="a3c2391c-01dc-4321-9090-6196b0073bf9" containerID="d88de7ca63bf4dd5dce60b33ce8a4e7ae359d2200fb3b14a2722a0189f428464" exitCode=0 Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.775027 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-457wb" event={"ID":"a3c2391c-01dc-4321-9090-6196b0073bf9","Type":"ContainerDied","Data":"d88de7ca63bf4dd5dce60b33ce8a4e7ae359d2200fb3b14a2722a0189f428464"} Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.775068 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-457wb" event={"ID":"a3c2391c-01dc-4321-9090-6196b0073bf9","Type":"ContainerDied","Data":"a75f9ac84a8d7a303835054a139ba4d2be4f1624ff49e238d23d1eb5196223ae"} Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.775086 4945 scope.go:117] "RemoveContainer" containerID="d88de7ca63bf4dd5dce60b33ce8a4e7ae359d2200fb3b14a2722a0189f428464" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.775193 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-457wb" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.775377 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.775750 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.775771 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.796700 4945 scope.go:117] "RemoveContainer" containerID="1495801aca89cdaa55e18cf0686a66816d3a0ecd9f270d6134a553690c974cb1" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.805568 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-457wb"] Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.808456 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.808594 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.813784 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-457wb"] Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.824234 4945 scope.go:117] "RemoveContainer" containerID="cd19dbc5d5946996c7de9c9b3ceeb79a1d937c40ef057d8f8843f2ee49498862" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.848296 4945 scope.go:117] "RemoveContainer" containerID="d88de7ca63bf4dd5dce60b33ce8a4e7ae359d2200fb3b14a2722a0189f428464" Jan 08 23:29:56 crc kubenswrapper[4945]: E0108 23:29:56.848942 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d88de7ca63bf4dd5dce60b33ce8a4e7ae359d2200fb3b14a2722a0189f428464\": container with ID starting with d88de7ca63bf4dd5dce60b33ce8a4e7ae359d2200fb3b14a2722a0189f428464 not found: ID does not exist" containerID="d88de7ca63bf4dd5dce60b33ce8a4e7ae359d2200fb3b14a2722a0189f428464" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.849061 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d88de7ca63bf4dd5dce60b33ce8a4e7ae359d2200fb3b14a2722a0189f428464"} err="failed to get container status \"d88de7ca63bf4dd5dce60b33ce8a4e7ae359d2200fb3b14a2722a0189f428464\": rpc error: code = NotFound desc = could not find container \"d88de7ca63bf4dd5dce60b33ce8a4e7ae359d2200fb3b14a2722a0189f428464\": container with ID starting with d88de7ca63bf4dd5dce60b33ce8a4e7ae359d2200fb3b14a2722a0189f428464 not found: ID does not exist" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.849111 4945 scope.go:117] "RemoveContainer" containerID="1495801aca89cdaa55e18cf0686a66816d3a0ecd9f270d6134a553690c974cb1" Jan 08 23:29:56 crc kubenswrapper[4945]: E0108 23:29:56.849569 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1495801aca89cdaa55e18cf0686a66816d3a0ecd9f270d6134a553690c974cb1\": container with ID starting with 1495801aca89cdaa55e18cf0686a66816d3a0ecd9f270d6134a553690c974cb1 not found: ID does not exist" 
containerID="1495801aca89cdaa55e18cf0686a66816d3a0ecd9f270d6134a553690c974cb1" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.849611 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1495801aca89cdaa55e18cf0686a66816d3a0ecd9f270d6134a553690c974cb1"} err="failed to get container status \"1495801aca89cdaa55e18cf0686a66816d3a0ecd9f270d6134a553690c974cb1\": rpc error: code = NotFound desc = could not find container \"1495801aca89cdaa55e18cf0686a66816d3a0ecd9f270d6134a553690c974cb1\": container with ID starting with 1495801aca89cdaa55e18cf0686a66816d3a0ecd9f270d6134a553690c974cb1 not found: ID does not exist" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.849637 4945 scope.go:117] "RemoveContainer" containerID="cd19dbc5d5946996c7de9c9b3ceeb79a1d937c40ef057d8f8843f2ee49498862" Jan 08 23:29:56 crc kubenswrapper[4945]: E0108 23:29:56.849963 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd19dbc5d5946996c7de9c9b3ceeb79a1d937c40ef057d8f8843f2ee49498862\": container with ID starting with cd19dbc5d5946996c7de9c9b3ceeb79a1d937c40ef057d8f8843f2ee49498862 not found: ID does not exist" containerID="cd19dbc5d5946996c7de9c9b3ceeb79a1d937c40ef057d8f8843f2ee49498862" Jan 08 23:29:56 crc kubenswrapper[4945]: I0108 23:29:56.850040 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd19dbc5d5946996c7de9c9b3ceeb79a1d937c40ef057d8f8843f2ee49498862"} err="failed to get container status \"cd19dbc5d5946996c7de9c9b3ceeb79a1d937c40ef057d8f8843f2ee49498862\": rpc error: code = NotFound desc = could not find container \"cd19dbc5d5946996c7de9c9b3ceeb79a1d937c40ef057d8f8843f2ee49498862\": container with ID starting with cd19dbc5d5946996c7de9c9b3ceeb79a1d937c40ef057d8f8843f2ee49498862 not found: ID does not exist" Jan 08 23:29:58 crc kubenswrapper[4945]: I0108 23:29:58.009232 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3c2391c-01dc-4321-9090-6196b0073bf9" path="/var/lib/kubelet/pods/a3c2391c-01dc-4321-9090-6196b0073bf9/volumes" Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.149731 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g"] Jan 08 23:30:00 crc kubenswrapper[4945]: E0108 23:30:00.150306 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c2391c-01dc-4321-9090-6196b0073bf9" containerName="extract-content" Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.150321 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c2391c-01dc-4321-9090-6196b0073bf9" containerName="extract-content" Jan 08 23:30:00 crc kubenswrapper[4945]: E0108 23:30:00.150341 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c2391c-01dc-4321-9090-6196b0073bf9" containerName="extract-utilities" Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.150348 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c2391c-01dc-4321-9090-6196b0073bf9" containerName="extract-utilities" Jan 08 23:30:00 crc kubenswrapper[4945]: E0108 23:30:00.150359 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c2391c-01dc-4321-9090-6196b0073bf9" containerName="registry-server" Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.150366 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c2391c-01dc-4321-9090-6196b0073bf9" containerName="registry-server" Jan 08 
Jan 08 23:29:58 crc kubenswrapper[4945]: I0108 23:29:58.009232 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3c2391c-01dc-4321-9090-6196b0073bf9" path="/var/lib/kubelet/pods/a3c2391c-01dc-4321-9090-6196b0073bf9/volumes"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.149731 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g"]
Jan 08 23:30:00 crc kubenswrapper[4945]: E0108 23:30:00.150306 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c2391c-01dc-4321-9090-6196b0073bf9" containerName="extract-content"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.150321 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c2391c-01dc-4321-9090-6196b0073bf9" containerName="extract-content"
Jan 08 23:30:00 crc kubenswrapper[4945]: E0108 23:30:00.150341 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c2391c-01dc-4321-9090-6196b0073bf9" containerName="extract-utilities"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.150348 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c2391c-01dc-4321-9090-6196b0073bf9" containerName="extract-utilities"
Jan 08 23:30:00 crc kubenswrapper[4945]: E0108 23:30:00.150359 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c2391c-01dc-4321-9090-6196b0073bf9" containerName="registry-server"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.150366 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c2391c-01dc-4321-9090-6196b0073bf9" containerName="registry-server"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.150455 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3c2391c-01dc-4321-9090-6196b0073bf9" containerName="registry-server"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.150813 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.153358 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.154491 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.169034 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g"]
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.246477 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-config-volume\") pod \"collect-profiles-29465250-5jt8g\" (UID: \"d9f9421f-a7ab-44ad-b6f9-e77a2982552d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.246557 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-secret-volume\") pod \"collect-profiles-29465250-5jt8g\" (UID: \"d9f9421f-a7ab-44ad-b6f9-e77a2982552d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.246600 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxmfb\" (UniqueName: \"kubernetes.io/projected/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-kube-api-access-fxmfb\") pod \"collect-profiles-29465250-5jt8g\" (UID: \"d9f9421f-a7ab-44ad-b6f9-e77a2982552d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.347543 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-config-volume\") pod \"collect-profiles-29465250-5jt8g\" (UID: \"d9f9421f-a7ab-44ad-b6f9-e77a2982552d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.347622 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-secret-volume\") pod \"collect-profiles-29465250-5jt8g\" (UID: \"d9f9421f-a7ab-44ad-b6f9-e77a2982552d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.347671 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxmfb\" (UniqueName: \"kubernetes.io/projected/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-kube-api-access-fxmfb\") pod \"collect-profiles-29465250-5jt8g\" (UID: \"d9f9421f-a7ab-44ad-b6f9-e77a2982552d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.348689 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-config-volume\") pod \"collect-profiles-29465250-5jt8g\" (UID: \"d9f9421f-a7ab-44ad-b6f9-e77a2982552d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.359484 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-secret-volume\") pod \"collect-profiles-29465250-5jt8g\" (UID: \"d9f9421f-a7ab-44ad-b6f9-e77a2982552d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.382779 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxmfb\" (UniqueName: \"kubernetes.io/projected/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-kube-api-access-fxmfb\") pod \"collect-profiles-29465250-5jt8g\" (UID: \"d9f9421f-a7ab-44ad-b6f9-e77a2982552d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.468612 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g"
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.696519 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g"]
Jan 08 23:30:00 crc kubenswrapper[4945]: I0108 23:30:00.817052 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g" event={"ID":"d9f9421f-a7ab-44ad-b6f9-e77a2982552d","Type":"ContainerStarted","Data":"974fecd38262715bc051ed8b36784a0e2cbc14ff4475bb6cb6de38650dd21c35"}
Jan 08 23:30:01 crc kubenswrapper[4945]: I0108 23:30:01.827743 4945 generic.go:334] "Generic (PLEG): container finished" podID="d9f9421f-a7ab-44ad-b6f9-e77a2982552d" containerID="6c851d206f5fde290eca6ae1e5aeb87ffac1d7da437b2933bc164b60babb85b4" exitCode=0
Jan 08 23:30:01 crc kubenswrapper[4945]: I0108 23:30:01.827827 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g" event={"ID":"d9f9421f-a7ab-44ad-b6f9-e77a2982552d","Type":"ContainerDied","Data":"6c851d206f5fde290eca6ae1e5aeb87ffac1d7da437b2933bc164b60babb85b4"}
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g" Jan 08 23:30:03 crc kubenswrapper[4945]: I0108 23:30:03.215350 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxmfb\" (UniqueName: \"kubernetes.io/projected/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-kube-api-access-fxmfb\") pod \"d9f9421f-a7ab-44ad-b6f9-e77a2982552d\" (UID: \"d9f9421f-a7ab-44ad-b6f9-e77a2982552d\") " Jan 08 23:30:03 crc kubenswrapper[4945]: I0108 23:30:03.215458 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-secret-volume\") pod \"d9f9421f-a7ab-44ad-b6f9-e77a2982552d\" (UID: \"d9f9421f-a7ab-44ad-b6f9-e77a2982552d\") " Jan 08 23:30:03 crc kubenswrapper[4945]: I0108 23:30:03.215525 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-config-volume\") pod \"d9f9421f-a7ab-44ad-b6f9-e77a2982552d\" (UID: \"d9f9421f-a7ab-44ad-b6f9-e77a2982552d\") " Jan 08 23:30:03 crc kubenswrapper[4945]: I0108 23:30:03.216676 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-config-volume" (OuterVolumeSpecName: "config-volume") pod "d9f9421f-a7ab-44ad-b6f9-e77a2982552d" (UID: "d9f9421f-a7ab-44ad-b6f9-e77a2982552d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:30:03 crc kubenswrapper[4945]: I0108 23:30:03.221315 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d9f9421f-a7ab-44ad-b6f9-e77a2982552d" (UID: "d9f9421f-a7ab-44ad-b6f9-e77a2982552d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:30:03 crc kubenswrapper[4945]: I0108 23:30:03.221420 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-kube-api-access-fxmfb" (OuterVolumeSpecName: "kube-api-access-fxmfb") pod "d9f9421f-a7ab-44ad-b6f9-e77a2982552d" (UID: "d9f9421f-a7ab-44ad-b6f9-e77a2982552d"). InnerVolumeSpecName "kube-api-access-fxmfb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:30:03 crc kubenswrapper[4945]: I0108 23:30:03.317064 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxmfb\" (UniqueName: \"kubernetes.io/projected/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-kube-api-access-fxmfb\") on node \"crc\" DevicePath \"\"" Jan 08 23:30:03 crc kubenswrapper[4945]: I0108 23:30:03.317106 4945 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 08 23:30:03 crc kubenswrapper[4945]: I0108 23:30:03.317116 4945 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9f9421f-a7ab-44ad-b6f9-e77a2982552d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 08 23:30:03 crc kubenswrapper[4945]: I0108 23:30:03.838444 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g" event={"ID":"d9f9421f-a7ab-44ad-b6f9-e77a2982552d","Type":"ContainerDied","Data":"974fecd38262715bc051ed8b36784a0e2cbc14ff4475bb6cb6de38650dd21c35"} Jan 08 23:30:03 crc kubenswrapper[4945]: I0108 23:30:03.838485 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="974fecd38262715bc051ed8b36784a0e2cbc14ff4475bb6cb6de38650dd21c35" Jan 08 23:30:03 crc kubenswrapper[4945]: I0108 23:30:03.838523 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g" Jan 08 23:30:08 crc kubenswrapper[4945]: I0108 23:30:08.999379 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:30:09 crc kubenswrapper[4945]: I0108 23:30:09.000387 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:30:09 crc kubenswrapper[4945]: I0108 23:30:09.441134 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-76w4z"] Jan 08 23:30:09 crc kubenswrapper[4945]: I0108 23:30:09.875330 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-76w4z" event={"ID":"e424d84a-0bbd-48ba-aec1-e2fecd4578f1","Type":"ContainerStarted","Data":"684af05fca50277721a4c4494be0004c8991823e220131a3039f72d1b5a3e298"} Jan 08 23:30:12 crc kubenswrapper[4945]: I0108 23:30:12.897644 4945 generic.go:334] "Generic (PLEG): container finished" podID="e424d84a-0bbd-48ba-aec1-e2fecd4578f1" containerID="f0326f7a37d125abd9922bfa94bf9e314ae209d84d469f312aa9fc2eb56ace0d" exitCode=0 Jan 08 23:30:12 crc kubenswrapper[4945]: I0108 23:30:12.897731 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-76w4z" event={"ID":"e424d84a-0bbd-48ba-aec1-e2fecd4578f1","Type":"ContainerDied","Data":"f0326f7a37d125abd9922bfa94bf9e314ae209d84d469f312aa9fc2eb56ace0d"} Jan 08 23:30:14 crc kubenswrapper[4945]: I0108 23:30:14.160894 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:30:14 crc kubenswrapper[4945]: I0108 23:30:14.287852 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn6fc\" (UniqueName: \"kubernetes.io/projected/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-kube-api-access-pn6fc\") pod \"e424d84a-0bbd-48ba-aec1-e2fecd4578f1\" (UID: \"e424d84a-0bbd-48ba-aec1-e2fecd4578f1\") " Jan 08 23:30:14 crc kubenswrapper[4945]: I0108 23:30:14.288380 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-crc-storage\") pod \"e424d84a-0bbd-48ba-aec1-e2fecd4578f1\" (UID: \"e424d84a-0bbd-48ba-aec1-e2fecd4578f1\") " Jan 08 23:30:14 crc kubenswrapper[4945]: I0108 23:30:14.288411 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-node-mnt\") pod \"e424d84a-0bbd-48ba-aec1-e2fecd4578f1\" (UID: \"e424d84a-0bbd-48ba-aec1-e2fecd4578f1\") " Jan 08 23:30:14 crc kubenswrapper[4945]: I0108 23:30:14.288770 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "e424d84a-0bbd-48ba-aec1-e2fecd4578f1" (UID: "e424d84a-0bbd-48ba-aec1-e2fecd4578f1"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:30:14 crc kubenswrapper[4945]: I0108 23:30:14.296729 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-kube-api-access-pn6fc" (OuterVolumeSpecName: "kube-api-access-pn6fc") pod "e424d84a-0bbd-48ba-aec1-e2fecd4578f1" (UID: "e424d84a-0bbd-48ba-aec1-e2fecd4578f1"). InnerVolumeSpecName "kube-api-access-pn6fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:30:14 crc kubenswrapper[4945]: I0108 23:30:14.307854 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "e424d84a-0bbd-48ba-aec1-e2fecd4578f1" (UID: "e424d84a-0bbd-48ba-aec1-e2fecd4578f1"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:30:14 crc kubenswrapper[4945]: I0108 23:30:14.389484 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn6fc\" (UniqueName: \"kubernetes.io/projected/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-kube-api-access-pn6fc\") on node \"crc\" DevicePath \"\"" Jan 08 23:30:14 crc kubenswrapper[4945]: I0108 23:30:14.389519 4945 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 08 23:30:14 crc kubenswrapper[4945]: I0108 23:30:14.389531 4945 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/e424d84a-0bbd-48ba-aec1-e2fecd4578f1-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 08 23:30:14 crc kubenswrapper[4945]: I0108 23:30:14.932569 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-76w4z" event={"ID":"e424d84a-0bbd-48ba-aec1-e2fecd4578f1","Type":"ContainerDied","Data":"684af05fca50277721a4c4494be0004c8991823e220131a3039f72d1b5a3e298"} Jan 08 23:30:14 crc kubenswrapper[4945]: I0108 23:30:14.932647 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="684af05fca50277721a4c4494be0004c8991823e220131a3039f72d1b5a3e298" Jan 08 23:30:14 crc kubenswrapper[4945]: I0108 23:30:14.932668 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-76w4z" Jan 08 23:30:17 crc kubenswrapper[4945]: I0108 23:30:17.994763 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zth5r" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.336415 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b"] Jan 08 23:30:22 crc kubenswrapper[4945]: E0108 23:30:22.336890 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f9421f-a7ab-44ad-b6f9-e77a2982552d" containerName="collect-profiles" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.336903 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f9421f-a7ab-44ad-b6f9-e77a2982552d" containerName="collect-profiles" Jan 08 23:30:22 crc kubenswrapper[4945]: E0108 23:30:22.336920 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e424d84a-0bbd-48ba-aec1-e2fecd4578f1" containerName="storage" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.336927 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e424d84a-0bbd-48ba-aec1-e2fecd4578f1" containerName="storage" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.337025 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e424d84a-0bbd-48ba-aec1-e2fecd4578f1" containerName="storage" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.337037 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f9421f-a7ab-44ad-b6f9-e77a2982552d" containerName="collect-profiles" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.337676 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.339959 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.353170 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b"] Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.403024 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9fb4e4b3-9425-474f-9125-d98c958af414-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b\" (UID: \"9fb4e4b3-9425-474f-9125-d98c958af414\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.403096 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9fb4e4b3-9425-474f-9125-d98c958af414-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b\" (UID: \"9fb4e4b3-9425-474f-9125-d98c958af414\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.403130 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxm7r\" (UniqueName: \"kubernetes.io/projected/9fb4e4b3-9425-474f-9125-d98c958af414-kube-api-access-vxm7r\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b\" (UID: \"9fb4e4b3-9425-474f-9125-d98c958af414\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.503861 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9fb4e4b3-9425-474f-9125-d98c958af414-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b\" (UID: \"9fb4e4b3-9425-474f-9125-d98c958af414\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.503922 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxm7r\" (UniqueName: \"kubernetes.io/projected/9fb4e4b3-9425-474f-9125-d98c958af414-kube-api-access-vxm7r\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b\" (UID: \"9fb4e4b3-9425-474f-9125-d98c958af414\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.503987 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9fb4e4b3-9425-474f-9125-d98c958af414-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b\" (UID: \"9fb4e4b3-9425-474f-9125-d98c958af414\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.504452 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/9fb4e4b3-9425-474f-9125-d98c958af414-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b\" (UID: \"9fb4e4b3-9425-474f-9125-d98c958af414\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.504477 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9fb4e4b3-9425-474f-9125-d98c958af414-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b\" (UID: \"9fb4e4b3-9425-474f-9125-d98c958af414\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.521075 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxm7r\" (UniqueName: \"kubernetes.io/projected/9fb4e4b3-9425-474f-9125-d98c958af414-kube-api-access-vxm7r\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b\" (UID: \"9fb4e4b3-9425-474f-9125-d98c958af414\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.651593 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.841941 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b"] Jan 08 23:30:22 crc kubenswrapper[4945]: I0108 23:30:22.977518 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" event={"ID":"9fb4e4b3-9425-474f-9125-d98c958af414","Type":"ContainerStarted","Data":"caf821d932c0684599516f90eefa189608dfeb37966c08cb61d50fc5d09ca72a"} Jan 08 23:30:23 crc kubenswrapper[4945]: I0108 23:30:23.987868 4945 generic.go:334] "Generic (PLEG): container finished" podID="9fb4e4b3-9425-474f-9125-d98c958af414" containerID="5a702c6ea5b29e9198e93e3dbe778a70e43fbd052b7e3a37d00cc640cee26754" exitCode=0 Jan 08 23:30:23 crc kubenswrapper[4945]: I0108 23:30:23.987964 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" event={"ID":"9fb4e4b3-9425-474f-9125-d98c958af414","Type":"ContainerDied","Data":"5a702c6ea5b29e9198e93e3dbe778a70e43fbd052b7e3a37d00cc640cee26754"} Jan 08 23:30:24 crc kubenswrapper[4945]: I0108 23:30:24.721474 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sdx2b"] Jan 08 23:30:24 crc kubenswrapper[4945]: I0108 23:30:24.722684 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sdx2b" Jan 08 23:30:24 crc kubenswrapper[4945]: I0108 23:30:24.730394 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sdx2b"] Jan 08 23:30:24 crc kubenswrapper[4945]: I0108 23:30:24.840114 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-catalog-content\") pod \"redhat-operators-sdx2b\" (UID: \"c7feb15c-d6ca-48cb-ab35-b898fbcddd33\") " pod="openshift-marketplace/redhat-operators-sdx2b" Jan 08 23:30:24 crc kubenswrapper[4945]: I0108 23:30:24.840813 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-utilities\") pod \"redhat-operators-sdx2b\" (UID: \"c7feb15c-d6ca-48cb-ab35-b898fbcddd33\") " pod="openshift-marketplace/redhat-operators-sdx2b" Jan 08 23:30:24 crc kubenswrapper[4945]: I0108 23:30:24.841108 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzhfh\" (UniqueName: \"kubernetes.io/projected/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-kube-api-access-jzhfh\") pod \"redhat-operators-sdx2b\" (UID: \"c7feb15c-d6ca-48cb-ab35-b898fbcddd33\") " pod="openshift-marketplace/redhat-operators-sdx2b" Jan 08 23:30:24 crc kubenswrapper[4945]: I0108 23:30:24.942743 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-catalog-content\") pod \"redhat-operators-sdx2b\" (UID: \"c7feb15c-d6ca-48cb-ab35-b898fbcddd33\") " pod="openshift-marketplace/redhat-operators-sdx2b" Jan 08 23:30:24 crc kubenswrapper[4945]: I0108 23:30:24.942790 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-utilities\") pod \"redhat-operators-sdx2b\" (UID: \"c7feb15c-d6ca-48cb-ab35-b898fbcddd33\") " pod="openshift-marketplace/redhat-operators-sdx2b" Jan 08 23:30:24 crc kubenswrapper[4945]: I0108 23:30:24.942822 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzhfh\" (UniqueName: \"kubernetes.io/projected/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-kube-api-access-jzhfh\") pod \"redhat-operators-sdx2b\" (UID: \"c7feb15c-d6ca-48cb-ab35-b898fbcddd33\") " pod="openshift-marketplace/redhat-operators-sdx2b" Jan 08 23:30:24 crc kubenswrapper[4945]: I0108 23:30:24.943620 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-catalog-content\") pod \"redhat-operators-sdx2b\" (UID: \"c7feb15c-d6ca-48cb-ab35-b898fbcddd33\") " pod="openshift-marketplace/redhat-operators-sdx2b" Jan 08 23:30:24 crc kubenswrapper[4945]: I0108 23:30:24.944231 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-utilities\") pod \"redhat-operators-sdx2b\" (UID: \"c7feb15c-d6ca-48cb-ab35-b898fbcddd33\") " pod="openshift-marketplace/redhat-operators-sdx2b" Jan 08 23:30:24 crc kubenswrapper[4945]: I0108 23:30:24.966232 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jzhfh\" (UniqueName: \"kubernetes.io/projected/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-kube-api-access-jzhfh\") pod \"redhat-operators-sdx2b\" (UID: \"c7feb15c-d6ca-48cb-ab35-b898fbcddd33\") " pod="openshift-marketplace/redhat-operators-sdx2b" Jan 08 23:30:25 crc kubenswrapper[4945]: I0108 23:30:25.047170 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdx2b" Jan 08 23:30:25 crc kubenswrapper[4945]: I0108 23:30:25.253985 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sdx2b"] Jan 08 23:30:26 crc kubenswrapper[4945]: I0108 23:30:25.999663 4945 generic.go:334] "Generic (PLEG): container finished" podID="c7feb15c-d6ca-48cb-ab35-b898fbcddd33" containerID="34996b930035aee61d41af630393e1f61e85fa61b8741ca678b2bcab04032769" exitCode=0 Jan 08 23:30:26 crc kubenswrapper[4945]: I0108 23:30:26.001661 4945 generic.go:334] "Generic (PLEG): container finished" podID="9fb4e4b3-9425-474f-9125-d98c958af414" containerID="e459975e506b5db7e8fe7bd7ace54c6ed7e7f7e6b16975d5c0d6ec6b9ccb38d0" exitCode=0 Jan 08 23:30:26 crc kubenswrapper[4945]: I0108 23:30:26.009644 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdx2b" event={"ID":"c7feb15c-d6ca-48cb-ab35-b898fbcddd33","Type":"ContainerDied","Data":"34996b930035aee61d41af630393e1f61e85fa61b8741ca678b2bcab04032769"} Jan 08 23:30:26 crc kubenswrapper[4945]: I0108 23:30:26.009767 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdx2b" event={"ID":"c7feb15c-d6ca-48cb-ab35-b898fbcddd33","Type":"ContainerStarted","Data":"5c6ad150d50ba1b16e63a792084003a0c941bb6e50ca6974e24193baa188afd4"} Jan 08 23:30:26 crc kubenswrapper[4945]: I0108 23:30:26.009865 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" event={"ID":"9fb4e4b3-9425-474f-9125-d98c958af414","Type":"ContainerDied","Data":"e459975e506b5db7e8fe7bd7ace54c6ed7e7f7e6b16975d5c0d6ec6b9ccb38d0"} Jan 08 23:30:27 crc kubenswrapper[4945]: I0108 23:30:27.014246 4945 generic.go:334] "Generic (PLEG): container finished" podID="9fb4e4b3-9425-474f-9125-d98c958af414" containerID="e8575326a2faeee8cdd7aac147e7c9e4a1b7b39ac7546ba32d9eab5652444231" exitCode=0 Jan 08 23:30:27 crc kubenswrapper[4945]: I0108 23:30:27.014362 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" event={"ID":"9fb4e4b3-9425-474f-9125-d98c958af414","Type":"ContainerDied","Data":"e8575326a2faeee8cdd7aac147e7c9e4a1b7b39ac7546ba32d9eab5652444231"} Jan 08 23:30:27 crc kubenswrapper[4945]: I0108 23:30:27.016767 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdx2b" event={"ID":"c7feb15c-d6ca-48cb-ab35-b898fbcddd33","Type":"ContainerStarted","Data":"ec5b1f76043fdd9d378f7d91873c2cbecc48fc766c0e9db4d5275b6bd1b205fb"} Jan 08 23:30:28 crc kubenswrapper[4945]: I0108 23:30:28.027135 4945 generic.go:334] "Generic (PLEG): container finished" podID="c7feb15c-d6ca-48cb-ab35-b898fbcddd33" containerID="ec5b1f76043fdd9d378f7d91873c2cbecc48fc766c0e9db4d5275b6bd1b205fb" exitCode=0 Jan 08 23:30:28 crc kubenswrapper[4945]: I0108 23:30:28.027188 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdx2b" 
event={"ID":"c7feb15c-d6ca-48cb-ab35-b898fbcddd33","Type":"ContainerDied","Data":"ec5b1f76043fdd9d378f7d91873c2cbecc48fc766c0e9db4d5275b6bd1b205fb"} Jan 08 23:30:28 crc kubenswrapper[4945]: I0108 23:30:28.267141 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" Jan 08 23:30:28 crc kubenswrapper[4945]: I0108 23:30:28.410622 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxm7r\" (UniqueName: \"kubernetes.io/projected/9fb4e4b3-9425-474f-9125-d98c958af414-kube-api-access-vxm7r\") pod \"9fb4e4b3-9425-474f-9125-d98c958af414\" (UID: \"9fb4e4b3-9425-474f-9125-d98c958af414\") " Jan 08 23:30:28 crc kubenswrapper[4945]: I0108 23:30:28.410961 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9fb4e4b3-9425-474f-9125-d98c958af414-util\") pod \"9fb4e4b3-9425-474f-9125-d98c958af414\" (UID: \"9fb4e4b3-9425-474f-9125-d98c958af414\") " Jan 08 23:30:28 crc kubenswrapper[4945]: I0108 23:30:28.411004 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9fb4e4b3-9425-474f-9125-d98c958af414-bundle\") pod \"9fb4e4b3-9425-474f-9125-d98c958af414\" (UID: \"9fb4e4b3-9425-474f-9125-d98c958af414\") " Jan 08 23:30:28 crc kubenswrapper[4945]: I0108 23:30:28.411563 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fb4e4b3-9425-474f-9125-d98c958af414-bundle" (OuterVolumeSpecName: "bundle") pod "9fb4e4b3-9425-474f-9125-d98c958af414" (UID: "9fb4e4b3-9425-474f-9125-d98c958af414"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:30:28 crc kubenswrapper[4945]: I0108 23:30:28.419059 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fb4e4b3-9425-474f-9125-d98c958af414-kube-api-access-vxm7r" (OuterVolumeSpecName: "kube-api-access-vxm7r") pod "9fb4e4b3-9425-474f-9125-d98c958af414" (UID: "9fb4e4b3-9425-474f-9125-d98c958af414"). InnerVolumeSpecName "kube-api-access-vxm7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:30:28 crc kubenswrapper[4945]: I0108 23:30:28.512548 4945 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9fb4e4b3-9425-474f-9125-d98c958af414-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:30:28 crc kubenswrapper[4945]: I0108 23:30:28.512593 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxm7r\" (UniqueName: \"kubernetes.io/projected/9fb4e4b3-9425-474f-9125-d98c958af414-kube-api-access-vxm7r\") on node \"crc\" DevicePath \"\"" Jan 08 23:30:28 crc kubenswrapper[4945]: I0108 23:30:28.653339 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fb4e4b3-9425-474f-9125-d98c958af414-util" (OuterVolumeSpecName: "util") pod "9fb4e4b3-9425-474f-9125-d98c958af414" (UID: "9fb4e4b3-9425-474f-9125-d98c958af414"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:30:28 crc kubenswrapper[4945]: I0108 23:30:28.716202 4945 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9fb4e4b3-9425-474f-9125-d98c958af414-util\") on node \"crc\" DevicePath \"\"" Jan 08 23:30:29 crc kubenswrapper[4945]: I0108 23:30:29.035784 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdx2b" event={"ID":"c7feb15c-d6ca-48cb-ab35-b898fbcddd33","Type":"ContainerStarted","Data":"5968fb7344ff7fbad9120c055c89f70199a1b750a757c95428b47e9c2710e5ab"} Jan 08 23:30:29 crc kubenswrapper[4945]: I0108 23:30:29.037824 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" event={"ID":"9fb4e4b3-9425-474f-9125-d98c958af414","Type":"ContainerDied","Data":"caf821d932c0684599516f90eefa189608dfeb37966c08cb61d50fc5d09ca72a"} Jan 08 23:30:29 crc kubenswrapper[4945]: I0108 23:30:29.037873 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caf821d932c0684599516f90eefa189608dfeb37966c08cb61d50fc5d09ca72a" Jan 08 23:30:29 crc kubenswrapper[4945]: I0108 23:30:29.037891 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b" Jan 08 23:30:29 crc kubenswrapper[4945]: I0108 23:30:29.056114 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sdx2b" podStartSLOduration=2.513258569 podStartE2EDuration="5.056092441s" podCreationTimestamp="2026-01-08 23:30:24 +0000 UTC" firstStartedPulling="2026-01-08 23:30:26.001944604 +0000 UTC m=+896.313103550" lastFinishedPulling="2026-01-08 23:30:28.544778476 +0000 UTC m=+898.855937422" observedRunningTime="2026-01-08 23:30:29.051538 +0000 UTC m=+899.362696976" watchObservedRunningTime="2026-01-08 23:30:29.056092441 +0000 UTC m=+899.367251417" Jan 08 23:30:30 crc kubenswrapper[4945]: I0108 23:30:30.323061 4945 scope.go:117] "RemoveContainer" containerID="614e85201ea3be7b566c1a53f206334251065d8d611ea95071f08d79979fb921" Jan 08 23:30:31 crc kubenswrapper[4945]: I0108 23:30:31.048782 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dsh4d_0fa9b342-4b22-49db-9022-2dd852e7d835/kube-multus/2.log" Jan 08 23:30:32 crc kubenswrapper[4945]: I0108 23:30:32.809118 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-9jxqm"] Jan 08 23:30:32 crc kubenswrapper[4945]: E0108 23:30:32.809642 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fb4e4b3-9425-474f-9125-d98c958af414" containerName="pull" Jan 08 23:30:32 crc kubenswrapper[4945]: I0108 23:30:32.809657 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fb4e4b3-9425-474f-9125-d98c958af414" containerName="pull" Jan 08 23:30:32 crc kubenswrapper[4945]: E0108 23:30:32.809665 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fb4e4b3-9425-474f-9125-d98c958af414" containerName="util" Jan 08 23:30:32 crc kubenswrapper[4945]: I0108 23:30:32.809671 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fb4e4b3-9425-474f-9125-d98c958af414" containerName="util" Jan 08 23:30:32 crc kubenswrapper[4945]: E0108 23:30:32.809679 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fb4e4b3-9425-474f-9125-d98c958af414" 
containerName="extract" Jan 08 23:30:32 crc kubenswrapper[4945]: I0108 23:30:32.809685 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fb4e4b3-9425-474f-9125-d98c958af414" containerName="extract" Jan 08 23:30:32 crc kubenswrapper[4945]: I0108 23:30:32.809771 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fb4e4b3-9425-474f-9125-d98c958af414" containerName="extract" Jan 08 23:30:32 crc kubenswrapper[4945]: I0108 23:30:32.810124 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-9jxqm" Jan 08 23:30:32 crc kubenswrapper[4945]: I0108 23:30:32.812868 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-xwl7v" Jan 08 23:30:32 crc kubenswrapper[4945]: I0108 23:30:32.816445 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 08 23:30:32 crc kubenswrapper[4945]: I0108 23:30:32.817094 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 08 23:30:32 crc kubenswrapper[4945]: I0108 23:30:32.827871 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-9jxqm"] Jan 08 23:30:32 crc kubenswrapper[4945]: I0108 23:30:32.969776 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlxv2\" (UniqueName: \"kubernetes.io/projected/44391049-db83-4408-bf6c-086285c20de7-kube-api-access-tlxv2\") pod \"nmstate-operator-6769fb99d-9jxqm\" (UID: \"44391049-db83-4408-bf6c-086285c20de7\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-9jxqm" Jan 08 23:30:33 crc kubenswrapper[4945]: I0108 23:30:33.070557 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlxv2\" (UniqueName: \"kubernetes.io/projected/44391049-db83-4408-bf6c-086285c20de7-kube-api-access-tlxv2\") pod \"nmstate-operator-6769fb99d-9jxqm\" (UID: \"44391049-db83-4408-bf6c-086285c20de7\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-9jxqm" Jan 08 23:30:33 crc kubenswrapper[4945]: I0108 23:30:33.104712 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlxv2\" (UniqueName: \"kubernetes.io/projected/44391049-db83-4408-bf6c-086285c20de7-kube-api-access-tlxv2\") pod \"nmstate-operator-6769fb99d-9jxqm\" (UID: \"44391049-db83-4408-bf6c-086285c20de7\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-9jxqm" Jan 08 23:30:33 crc kubenswrapper[4945]: I0108 23:30:33.124356 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-9jxqm" Jan 08 23:30:33 crc kubenswrapper[4945]: I0108 23:30:33.326563 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-9jxqm"] Jan 08 23:30:34 crc kubenswrapper[4945]: I0108 23:30:34.063621 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-9jxqm" event={"ID":"44391049-db83-4408-bf6c-086285c20de7","Type":"ContainerStarted","Data":"5a58450bd3a464df7f2ce927e2ade4dd1afeaa4ab4a85edb700471f1103035f8"} Jan 08 23:30:35 crc kubenswrapper[4945]: I0108 23:30:35.047957 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sdx2b" Jan 08 23:30:35 crc kubenswrapper[4945]: I0108 23:30:35.048039 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sdx2b" Jan 08 23:30:35 crc kubenswrapper[4945]: I0108 23:30:35.086417 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sdx2b" Jan 08 23:30:36 crc kubenswrapper[4945]: I0108 23:30:36.114137 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sdx2b" Jan 08 23:30:37 crc kubenswrapper[4945]: I0108 23:30:37.084180 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-9jxqm" event={"ID":"44391049-db83-4408-bf6c-086285c20de7","Type":"ContainerStarted","Data":"7b45faff09a7e0bed4fd7973b8ea47258228aab7a1bc2fa376605e8360e40ce5"} Jan 08 23:30:37 crc kubenswrapper[4945]: I0108 23:30:37.698158 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-9jxqm" podStartSLOduration=2.347304865 podStartE2EDuration="5.698138826s" podCreationTimestamp="2026-01-08 23:30:32 +0000 UTC" firstStartedPulling="2026-01-08 23:30:33.335224112 +0000 UTC m=+903.646383058" lastFinishedPulling="2026-01-08 23:30:36.686058073 +0000 UTC m=+906.997217019" observedRunningTime="2026-01-08 23:30:37.098951618 +0000 UTC m=+907.410110554" watchObservedRunningTime="2026-01-08 23:30:37.698138826 +0000 UTC m=+908.009297772" Jan 08 23:30:37 crc kubenswrapper[4945]: I0108 23:30:37.701124 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sdx2b"] Jan 08 23:30:38 crc kubenswrapper[4945]: I0108 23:30:38.088775 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sdx2b" podUID="c7feb15c-d6ca-48cb-ab35-b898fbcddd33" containerName="registry-server" containerID="cri-o://5968fb7344ff7fbad9120c055c89f70199a1b750a757c95428b47e9c2710e5ab" gracePeriod=2 Jan 08 23:30:40 crc kubenswrapper[4945]: I0108 23:30:40.108127 4945 generic.go:334] "Generic (PLEG): container finished" podID="c7feb15c-d6ca-48cb-ab35-b898fbcddd33" containerID="5968fb7344ff7fbad9120c055c89f70199a1b750a757c95428b47e9c2710e5ab" exitCode=0 Jan 08 23:30:40 crc kubenswrapper[4945]: I0108 23:30:40.108210 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdx2b" event={"ID":"c7feb15c-d6ca-48cb-ab35-b898fbcddd33","Type":"ContainerDied","Data":"5968fb7344ff7fbad9120c055c89f70199a1b750a757c95428b47e9c2710e5ab"} Jan 08 23:30:41 crc kubenswrapper[4945]: I0108 23:30:41.007099 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sdx2b" Jan 08 23:30:41 crc kubenswrapper[4945]: I0108 23:30:41.115643 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdx2b" event={"ID":"c7feb15c-d6ca-48cb-ab35-b898fbcddd33","Type":"ContainerDied","Data":"5c6ad150d50ba1b16e63a792084003a0c941bb6e50ca6974e24193baa188afd4"} Jan 08 23:30:41 crc kubenswrapper[4945]: I0108 23:30:41.115699 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdx2b" Jan 08 23:30:41 crc kubenswrapper[4945]: I0108 23:30:41.116014 4945 scope.go:117] "RemoveContainer" containerID="5968fb7344ff7fbad9120c055c89f70199a1b750a757c95428b47e9c2710e5ab" Jan 08 23:30:41 crc kubenswrapper[4945]: I0108 23:30:41.132158 4945 scope.go:117] "RemoveContainer" containerID="ec5b1f76043fdd9d378f7d91873c2cbecc48fc766c0e9db4d5275b6bd1b205fb" Jan 08 23:30:41 crc kubenswrapper[4945]: I0108 23:30:41.147513 4945 scope.go:117] "RemoveContainer" containerID="34996b930035aee61d41af630393e1f61e85fa61b8741ca678b2bcab04032769" Jan 08 23:30:41 crc kubenswrapper[4945]: I0108 23:30:41.173053 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzhfh\" (UniqueName: \"kubernetes.io/projected/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-kube-api-access-jzhfh\") pod \"c7feb15c-d6ca-48cb-ab35-b898fbcddd33\" (UID: \"c7feb15c-d6ca-48cb-ab35-b898fbcddd33\") " Jan 08 23:30:41 crc kubenswrapper[4945]: I0108 23:30:41.173153 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-utilities\") pod \"c7feb15c-d6ca-48cb-ab35-b898fbcddd33\" (UID: \"c7feb15c-d6ca-48cb-ab35-b898fbcddd33\") " Jan 08 23:30:41 crc kubenswrapper[4945]: I0108 23:30:41.173202 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-catalog-content\") pod \"c7feb15c-d6ca-48cb-ab35-b898fbcddd33\" (UID: \"c7feb15c-d6ca-48cb-ab35-b898fbcddd33\") " Jan 08 23:30:41 crc kubenswrapper[4945]: I0108 23:30:41.174775 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-utilities" (OuterVolumeSpecName: "utilities") pod "c7feb15c-d6ca-48cb-ab35-b898fbcddd33" (UID: "c7feb15c-d6ca-48cb-ab35-b898fbcddd33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:30:41 crc kubenswrapper[4945]: I0108 23:30:41.182260 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-kube-api-access-jzhfh" (OuterVolumeSpecName: "kube-api-access-jzhfh") pod "c7feb15c-d6ca-48cb-ab35-b898fbcddd33" (UID: "c7feb15c-d6ca-48cb-ab35-b898fbcddd33"). InnerVolumeSpecName "kube-api-access-jzhfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:30:41 crc kubenswrapper[4945]: I0108 23:30:41.274371 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzhfh\" (UniqueName: \"kubernetes.io/projected/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-kube-api-access-jzhfh\") on node \"crc\" DevicePath \"\"" Jan 08 23:30:41 crc kubenswrapper[4945]: I0108 23:30:41.274398 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:30:41 crc kubenswrapper[4945]: I0108 23:30:41.290205 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7feb15c-d6ca-48cb-ab35-b898fbcddd33" (UID: "c7feb15c-d6ca-48cb-ab35-b898fbcddd33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:30:41 crc kubenswrapper[4945]: I0108 23:30:41.375837 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7feb15c-d6ca-48cb-ab35-b898fbcddd33-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:30:41 crc kubenswrapper[4945]: I0108 23:30:41.458532 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sdx2b"] Jan 08 23:30:41 crc kubenswrapper[4945]: I0108 23:30:41.461850 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sdx2b"] Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.432571 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7feb15c-d6ca-48cb-ab35-b898fbcddd33" path="/var/lib/kubelet/pods/c7feb15c-d6ca-48cb-ab35-b898fbcddd33/volumes" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.493646 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-wpxks"] Jan 08 23:30:42 crc kubenswrapper[4945]: E0108 23:30:42.493844 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7feb15c-d6ca-48cb-ab35-b898fbcddd33" containerName="extract-content" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.493856 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7feb15c-d6ca-48cb-ab35-b898fbcddd33" containerName="extract-content" Jan 08 23:30:42 crc kubenswrapper[4945]: E0108 23:30:42.493879 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7feb15c-d6ca-48cb-ab35-b898fbcddd33" containerName="registry-server" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.493885 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7feb15c-d6ca-48cb-ab35-b898fbcddd33" containerName="registry-server" Jan 08 23:30:42 crc kubenswrapper[4945]: E0108 23:30:42.493896 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7feb15c-d6ca-48cb-ab35-b898fbcddd33" containerName="extract-utilities" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.493902 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7feb15c-d6ca-48cb-ab35-b898fbcddd33" containerName="extract-utilities" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.493983 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7feb15c-d6ca-48cb-ab35-b898fbcddd33" containerName="registry-server" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.494500 4945 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-wpxks" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.495900 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-9vd2v" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.505828 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-wpxks"] Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.533982 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-fz94s"] Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.534723 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fz94s" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.538272 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.542911 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-fz94s"] Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.549665 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-dkw8b"] Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.550329 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-dkw8b" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.590075 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp7cr\" (UniqueName: \"kubernetes.io/projected/9b83df9c-7230-4d13-922b-de7ea806d98d-kube-api-access-vp7cr\") pod \"nmstate-metrics-7f7f7578db-wpxks\" (UID: \"9b83df9c-7230-4d13-922b-de7ea806d98d\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-wpxks" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.646441 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-phj4q"] Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.647476 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-phj4q" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.652713 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-692bn" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.653329 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.653729 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.667015 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-phj4q"] Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.691262 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ndlz\" (UniqueName: \"kubernetes.io/projected/333350be-c9ef-44d4-9de8-58b29fe9da27-kube-api-access-8ndlz\") pod \"nmstate-handler-dkw8b\" (UID: \"333350be-c9ef-44d4-9de8-58b29fe9da27\") " pod="openshift-nmstate/nmstate-handler-dkw8b" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.691341 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/333350be-c9ef-44d4-9de8-58b29fe9da27-dbus-socket\") pod \"nmstate-handler-dkw8b\" (UID: \"333350be-c9ef-44d4-9de8-58b29fe9da27\") " pod="openshift-nmstate/nmstate-handler-dkw8b" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.691370 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jrc7\" (UniqueName: \"kubernetes.io/projected/9011efad-d324-473e-bb95-cb0ee8905fc1-kube-api-access-5jrc7\") pod \"nmstate-webhook-f8fb84555-fz94s\" (UID: \"9011efad-d324-473e-bb95-cb0ee8905fc1\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fz94s" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.691703 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/333350be-c9ef-44d4-9de8-58b29fe9da27-nmstate-lock\") pod \"nmstate-handler-dkw8b\" (UID: \"333350be-c9ef-44d4-9de8-58b29fe9da27\") " pod="openshift-nmstate/nmstate-handler-dkw8b" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.691785 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/333350be-c9ef-44d4-9de8-58b29fe9da27-ovs-socket\") pod \"nmstate-handler-dkw8b\" (UID: \"333350be-c9ef-44d4-9de8-58b29fe9da27\") " pod="openshift-nmstate/nmstate-handler-dkw8b" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.691836 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9011efad-d324-473e-bb95-cb0ee8905fc1-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-fz94s\" (UID: \"9011efad-d324-473e-bb95-cb0ee8905fc1\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fz94s" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.691958 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp7cr\" (UniqueName: \"kubernetes.io/projected/9b83df9c-7230-4d13-922b-de7ea806d98d-kube-api-access-vp7cr\") pod \"nmstate-metrics-7f7f7578db-wpxks\" 
(UID: \"9b83df9c-7230-4d13-922b-de7ea806d98d\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-wpxks" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.711870 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp7cr\" (UniqueName: \"kubernetes.io/projected/9b83df9c-7230-4d13-922b-de7ea806d98d-kube-api-access-vp7cr\") pod \"nmstate-metrics-7f7f7578db-wpxks\" (UID: \"9b83df9c-7230-4d13-922b-de7ea806d98d\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-wpxks" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.793343 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ndlz\" (UniqueName: \"kubernetes.io/projected/333350be-c9ef-44d4-9de8-58b29fe9da27-kube-api-access-8ndlz\") pod \"nmstate-handler-dkw8b\" (UID: \"333350be-c9ef-44d4-9de8-58b29fe9da27\") " pod="openshift-nmstate/nmstate-handler-dkw8b" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.793393 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdcpg\" (UniqueName: \"kubernetes.io/projected/4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c-kube-api-access-zdcpg\") pod \"nmstate-console-plugin-6ff7998486-phj4q\" (UID: \"4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-phj4q" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.793417 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-phj4q\" (UID: \"4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-phj4q" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.793442 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/333350be-c9ef-44d4-9de8-58b29fe9da27-dbus-socket\") pod \"nmstate-handler-dkw8b\" (UID: \"333350be-c9ef-44d4-9de8-58b29fe9da27\") " pod="openshift-nmstate/nmstate-handler-dkw8b" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.793627 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jrc7\" (UniqueName: \"kubernetes.io/projected/9011efad-d324-473e-bb95-cb0ee8905fc1-kube-api-access-5jrc7\") pod \"nmstate-webhook-f8fb84555-fz94s\" (UID: \"9011efad-d324-473e-bb95-cb0ee8905fc1\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fz94s" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.793681 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/333350be-c9ef-44d4-9de8-58b29fe9da27-dbus-socket\") pod \"nmstate-handler-dkw8b\" (UID: \"333350be-c9ef-44d4-9de8-58b29fe9da27\") " pod="openshift-nmstate/nmstate-handler-dkw8b" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.793719 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-phj4q\" (UID: \"4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-phj4q" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.793861 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/333350be-c9ef-44d4-9de8-58b29fe9da27-nmstate-lock\") pod \"nmstate-handler-dkw8b\" (UID: \"333350be-c9ef-44d4-9de8-58b29fe9da27\") " pod="openshift-nmstate/nmstate-handler-dkw8b" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.793899 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/333350be-c9ef-44d4-9de8-58b29fe9da27-ovs-socket\") pod \"nmstate-handler-dkw8b\" (UID: \"333350be-c9ef-44d4-9de8-58b29fe9da27\") " pod="openshift-nmstate/nmstate-handler-dkw8b" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.793928 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9011efad-d324-473e-bb95-cb0ee8905fc1-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-fz94s\" (UID: \"9011efad-d324-473e-bb95-cb0ee8905fc1\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fz94s" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.794022 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/333350be-c9ef-44d4-9de8-58b29fe9da27-nmstate-lock\") pod \"nmstate-handler-dkw8b\" (UID: \"333350be-c9ef-44d4-9de8-58b29fe9da27\") " pod="openshift-nmstate/nmstate-handler-dkw8b" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.794081 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/333350be-c9ef-44d4-9de8-58b29fe9da27-ovs-socket\") pod \"nmstate-handler-dkw8b\" (UID: \"333350be-c9ef-44d4-9de8-58b29fe9da27\") " pod="openshift-nmstate/nmstate-handler-dkw8b" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.804694 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9011efad-d324-473e-bb95-cb0ee8905fc1-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-fz94s\" (UID: \"9011efad-d324-473e-bb95-cb0ee8905fc1\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fz94s" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.812184 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-wpxks" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.813274 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jrc7\" (UniqueName: \"kubernetes.io/projected/9011efad-d324-473e-bb95-cb0ee8905fc1-kube-api-access-5jrc7\") pod \"nmstate-webhook-f8fb84555-fz94s\" (UID: \"9011efad-d324-473e-bb95-cb0ee8905fc1\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-fz94s" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.815865 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ndlz\" (UniqueName: \"kubernetes.io/projected/333350be-c9ef-44d4-9de8-58b29fe9da27-kube-api-access-8ndlz\") pod \"nmstate-handler-dkw8b\" (UID: \"333350be-c9ef-44d4-9de8-58b29fe9da27\") " pod="openshift-nmstate/nmstate-handler-dkw8b" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.843652 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6fd7864499-skqrz"] Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.844760 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.851176 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fz94s" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.852414 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fd7864499-skqrz"] Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.870223 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-dkw8b" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.895010 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdcpg\" (UniqueName: \"kubernetes.io/projected/4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c-kube-api-access-zdcpg\") pod \"nmstate-console-plugin-6ff7998486-phj4q\" (UID: \"4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-phj4q" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.895044 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-phj4q\" (UID: \"4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-phj4q" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.895092 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-phj4q\" (UID: \"4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-phj4q" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.896127 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-phj4q\" (UID: \"4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-phj4q" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.899916 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-phj4q\" (UID: \"4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-phj4q" Jan 08 23:30:42 crc kubenswrapper[4945]: W0108 23:30:42.905897 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod333350be_c9ef_44d4_9de8_58b29fe9da27.slice/crio-24ccc972d19e087cb7b06ec9345ea9c4c344c9aad78eb2c8fe4e73d5ae978fcd WatchSource:0}: Error finding container 24ccc972d19e087cb7b06ec9345ea9c4c344c9aad78eb2c8fe4e73d5ae978fcd: Status 404 returned error can't find the container with id 24ccc972d19e087cb7b06ec9345ea9c4c344c9aad78eb2c8fe4e73d5ae978fcd Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.927973 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdcpg\" (UniqueName: \"kubernetes.io/projected/4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c-kube-api-access-zdcpg\") pod 
\"nmstate-console-plugin-6ff7998486-phj4q\" (UID: \"4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-phj4q" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.983819 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-phj4q" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.998377 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6z5f\" (UniqueName: \"kubernetes.io/projected/8245712c-0171-416f-b3fc-6872b428a7e6-kube-api-access-f6z5f\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.998447 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8245712c-0171-416f-b3fc-6872b428a7e6-trusted-ca-bundle\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.998500 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8245712c-0171-416f-b3fc-6872b428a7e6-oauth-serving-cert\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.998523 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8245712c-0171-416f-b3fc-6872b428a7e6-console-oauth-config\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.998575 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8245712c-0171-416f-b3fc-6872b428a7e6-service-ca\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.998597 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8245712c-0171-416f-b3fc-6872b428a7e6-console-serving-cert\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:42 crc kubenswrapper[4945]: I0108 23:30:42.998612 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8245712c-0171-416f-b3fc-6872b428a7e6-console-config\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.099961 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8245712c-0171-416f-b3fc-6872b428a7e6-service-ca\") pod \"console-6fd7864499-skqrz\" (UID: 
\"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.100062 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8245712c-0171-416f-b3fc-6872b428a7e6-console-serving-cert\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.100093 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8245712c-0171-416f-b3fc-6872b428a7e6-console-config\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.102092 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6z5f\" (UniqueName: \"kubernetes.io/projected/8245712c-0171-416f-b3fc-6872b428a7e6-kube-api-access-f6z5f\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.102423 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8245712c-0171-416f-b3fc-6872b428a7e6-trusted-ca-bundle\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.102467 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8245712c-0171-416f-b3fc-6872b428a7e6-oauth-serving-cert\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.102503 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8245712c-0171-416f-b3fc-6872b428a7e6-console-oauth-config\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.101222 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8245712c-0171-416f-b3fc-6872b428a7e6-console-config\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.100979 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8245712c-0171-416f-b3fc-6872b428a7e6-service-ca\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.104007 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8245712c-0171-416f-b3fc-6872b428a7e6-oauth-serving-cert\") pod \"console-6fd7864499-skqrz\" (UID: 
\"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.104097 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8245712c-0171-416f-b3fc-6872b428a7e6-trusted-ca-bundle\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.106908 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8245712c-0171-416f-b3fc-6872b428a7e6-console-oauth-config\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.110092 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8245712c-0171-416f-b3fc-6872b428a7e6-console-serving-cert\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.119604 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6z5f\" (UniqueName: \"kubernetes.io/projected/8245712c-0171-416f-b3fc-6872b428a7e6-kube-api-access-f6z5f\") pod \"console-6fd7864499-skqrz\" (UID: \"8245712c-0171-416f-b3fc-6872b428a7e6\") " pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.160085 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-fz94s"] Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.177164 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.212754 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-phj4q"] Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.305025 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-wpxks"] Jan 08 23:30:43 crc kubenswrapper[4945]: W0108 23:30:43.311399 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b83df9c_7230_4d13_922b_de7ea806d98d.slice/crio-2840ec1cb95f691a269f89223b42d700ace7a1921a8182d9db0c3336b28ef19d WatchSource:0}: Error finding container 2840ec1cb95f691a269f89223b42d700ace7a1921a8182d9db0c3336b28ef19d: Status 404 returned error can't find the container with id 2840ec1cb95f691a269f89223b42d700ace7a1921a8182d9db0c3336b28ef19d Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.366283 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fd7864499-skqrz"] Jan 08 23:30:43 crc kubenswrapper[4945]: W0108 23:30:43.370110 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8245712c_0171_416f_b3fc_6872b428a7e6.slice/crio-c32ab128997eae2761d405a7f5732ac84b8e9ac16d70a1b4386f761b9430b254 WatchSource:0}: Error finding container c32ab128997eae2761d405a7f5732ac84b8e9ac16d70a1b4386f761b9430b254: Status 404 returned error can't find the container with id c32ab128997eae2761d405a7f5732ac84b8e9ac16d70a1b4386f761b9430b254 Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.440003 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fz94s" event={"ID":"9011efad-d324-473e-bb95-cb0ee8905fc1","Type":"ContainerStarted","Data":"af395482e5dd8ed153e3c782a419de47577c9395a4bdf3c0ded672ffa327fd14"} Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.440954 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-wpxks" event={"ID":"9b83df9c-7230-4d13-922b-de7ea806d98d","Type":"ContainerStarted","Data":"2840ec1cb95f691a269f89223b42d700ace7a1921a8182d9db0c3336b28ef19d"} Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.441709 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-phj4q" event={"ID":"4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c","Type":"ContainerStarted","Data":"8a911bcb425994eb2bc0745e370b16e8ab4ebbc187ffb6fa426808c21bb55e2c"} Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.443138 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fd7864499-skqrz" event={"ID":"8245712c-0171-416f-b3fc-6872b428a7e6","Type":"ContainerStarted","Data":"c32ab128997eae2761d405a7f5732ac84b8e9ac16d70a1b4386f761b9430b254"} Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.444006 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-dkw8b" event={"ID":"333350be-c9ef-44d4-9de8-58b29fe9da27","Type":"ContainerStarted","Data":"24ccc972d19e087cb7b06ec9345ea9c4c344c9aad78eb2c8fe4e73d5ae978fcd"} Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.578068 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:30:43 crc kubenswrapper[4945]: I0108 23:30:43.578326 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:30:44 crc kubenswrapper[4945]: I0108 23:30:44.450121 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fd7864499-skqrz" event={"ID":"8245712c-0171-416f-b3fc-6872b428a7e6","Type":"ContainerStarted","Data":"6cafd1056f1a0350c381f49db87a8e03238cf440f07d7a0c81b5a8f27fdf9c71"} Jan 08 23:30:44 crc kubenswrapper[4945]: I0108 23:30:44.471387 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6fd7864499-skqrz" podStartSLOduration=2.471349404 podStartE2EDuration="2.471349404s" podCreationTimestamp="2026-01-08 23:30:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:30:44.467122171 +0000 UTC m=+914.778281117" watchObservedRunningTime="2026-01-08 23:30:44.471349404 +0000 UTC m=+914.782508350" Jan 08 23:30:48 crc kubenswrapper[4945]: I0108 23:30:48.476155 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fz94s" event={"ID":"9011efad-d324-473e-bb95-cb0ee8905fc1","Type":"ContainerStarted","Data":"53293e200b58cd5d88e7c8ff9783bb484bbc4df9342955c18493ee5d720cf343"} Jan 08 23:30:48 crc kubenswrapper[4945]: I0108 23:30:48.479032 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fz94s" Jan 08 23:30:48 crc kubenswrapper[4945]: I0108 23:30:48.480207 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-wpxks" event={"ID":"9b83df9c-7230-4d13-922b-de7ea806d98d","Type":"ContainerStarted","Data":"15a72f71ac0a025443ca39297d72f6916f73fe5c517b57d3f440454d299a85a1"} Jan 08 23:30:48 crc kubenswrapper[4945]: I0108 23:30:48.481279 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-phj4q" event={"ID":"4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c","Type":"ContainerStarted","Data":"2cec1c9bb921e02014017d778ce3bb038745992877c3a843844506efdc16703e"} Jan 08 23:30:48 crc kubenswrapper[4945]: I0108 23:30:48.482785 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-dkw8b" event={"ID":"333350be-c9ef-44d4-9de8-58b29fe9da27","Type":"ContainerStarted","Data":"8798609fec82edc59581799b532298c3e7a4d6027f7f5b5c720780d514bc7ac9"} Jan 08 23:30:48 crc kubenswrapper[4945]: I0108 23:30:48.483278 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-dkw8b" Jan 08 23:30:48 crc kubenswrapper[4945]: I0108 23:30:48.527702 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fz94s" podStartSLOduration=2.215950711 podStartE2EDuration="6.527687575s" podCreationTimestamp="2026-01-08 23:30:42 +0000 UTC" firstStartedPulling="2026-01-08 23:30:43.168352545 +0000 UTC m=+913.479511491" lastFinishedPulling="2026-01-08 23:30:47.480089409 +0000 UTC m=+917.791248355" 
observedRunningTime="2026-01-08 23:30:48.525716427 +0000 UTC m=+918.836875413" watchObservedRunningTime="2026-01-08 23:30:48.527687575 +0000 UTC m=+918.838846521" Jan 08 23:30:48 crc kubenswrapper[4945]: I0108 23:30:48.544575 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-phj4q" podStartSLOduration=2.310143428 podStartE2EDuration="6.544553824s" podCreationTimestamp="2026-01-08 23:30:42 +0000 UTC" firstStartedPulling="2026-01-08 23:30:43.230208627 +0000 UTC m=+913.541367573" lastFinishedPulling="2026-01-08 23:30:47.464619023 +0000 UTC m=+917.775777969" observedRunningTime="2026-01-08 23:30:48.538916267 +0000 UTC m=+918.850075223" watchObservedRunningTime="2026-01-08 23:30:48.544553824 +0000 UTC m=+918.855712770" Jan 08 23:30:48 crc kubenswrapper[4945]: I0108 23:30:48.557896 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-dkw8b" podStartSLOduration=2.017407941 podStartE2EDuration="6.557876608s" podCreationTimestamp="2026-01-08 23:30:42 +0000 UTC" firstStartedPulling="2026-01-08 23:30:42.923164172 +0000 UTC m=+913.234323118" lastFinishedPulling="2026-01-08 23:30:47.463632839 +0000 UTC m=+917.774791785" observedRunningTime="2026-01-08 23:30:48.557867577 +0000 UTC m=+918.869026523" watchObservedRunningTime="2026-01-08 23:30:48.557876608 +0000 UTC m=+918.869035554" Jan 08 23:30:50 crc kubenswrapper[4945]: I0108 23:30:50.493627 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-wpxks" event={"ID":"9b83df9c-7230-4d13-922b-de7ea806d98d","Type":"ContainerStarted","Data":"bfc1e3e4736f74c8f481dde74f4ba0643e8c93629cdea2449b74319917215024"} Jan 08 23:30:50 crc kubenswrapper[4945]: I0108 23:30:50.516675 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-wpxks" podStartSLOduration=2.304128573 podStartE2EDuration="8.516661809s" podCreationTimestamp="2026-01-08 23:30:42 +0000 UTC" firstStartedPulling="2026-01-08 23:30:43.313396077 +0000 UTC m=+913.624555023" lastFinishedPulling="2026-01-08 23:30:49.525929303 +0000 UTC m=+919.837088259" observedRunningTime="2026-01-08 23:30:50.513601045 +0000 UTC m=+920.824759991" watchObservedRunningTime="2026-01-08 23:30:50.516661809 +0000 UTC m=+920.827820755" Jan 08 23:30:52 crc kubenswrapper[4945]: I0108 23:30:52.897876 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-dkw8b" Jan 08 23:30:53 crc kubenswrapper[4945]: I0108 23:30:53.178265 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:53 crc kubenswrapper[4945]: I0108 23:30:53.178312 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:53 crc kubenswrapper[4945]: I0108 23:30:53.183298 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:53 crc kubenswrapper[4945]: I0108 23:30:53.518971 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6fd7864499-skqrz" Jan 08 23:30:53 crc kubenswrapper[4945]: I0108 23:30:53.609163 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-pmfct"] Jan 08 23:31:02 crc kubenswrapper[4945]: I0108 23:31:02.860102 4945 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-fz94s" Jan 08 23:31:13 crc kubenswrapper[4945]: I0108 23:31:13.577968 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:31:13 crc kubenswrapper[4945]: I0108 23:31:13.578778 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:31:15 crc kubenswrapper[4945]: I0108 23:31:15.313292 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc"] Jan 08 23:31:15 crc kubenswrapper[4945]: I0108 23:31:15.316197 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" Jan 08 23:31:15 crc kubenswrapper[4945]: I0108 23:31:15.322343 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 08 23:31:15 crc kubenswrapper[4945]: I0108 23:31:15.324403 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc"] Jan 08 23:31:15 crc kubenswrapper[4945]: I0108 23:31:15.478912 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwpg4\" (UniqueName: \"kubernetes.io/projected/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-kube-api-access-lwpg4\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc\" (UID: \"57ccdf78-e21f-4952-a6a3-2a66d509b1bb\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" Jan 08 23:31:15 crc kubenswrapper[4945]: I0108 23:31:15.478966 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc\" (UID: \"57ccdf78-e21f-4952-a6a3-2a66d509b1bb\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" Jan 08 23:31:15 crc kubenswrapper[4945]: I0108 23:31:15.479078 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc\" (UID: \"57ccdf78-e21f-4952-a6a3-2a66d509b1bb\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" Jan 08 23:31:15 crc kubenswrapper[4945]: I0108 23:31:15.580080 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwpg4\" (UniqueName: \"kubernetes.io/projected/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-kube-api-access-lwpg4\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc\" (UID: \"57ccdf78-e21f-4952-a6a3-2a66d509b1bb\") " 
pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" Jan 08 23:31:15 crc kubenswrapper[4945]: I0108 23:31:15.580149 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc\" (UID: \"57ccdf78-e21f-4952-a6a3-2a66d509b1bb\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" Jan 08 23:31:15 crc kubenswrapper[4945]: I0108 23:31:15.580212 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc\" (UID: \"57ccdf78-e21f-4952-a6a3-2a66d509b1bb\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" Jan 08 23:31:15 crc kubenswrapper[4945]: I0108 23:31:15.580796 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc\" (UID: \"57ccdf78-e21f-4952-a6a3-2a66d509b1bb\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" Jan 08 23:31:15 crc kubenswrapper[4945]: I0108 23:31:15.580823 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc\" (UID: \"57ccdf78-e21f-4952-a6a3-2a66d509b1bb\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" Jan 08 23:31:15 crc kubenswrapper[4945]: I0108 23:31:15.605713 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwpg4\" (UniqueName: \"kubernetes.io/projected/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-kube-api-access-lwpg4\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc\" (UID: \"57ccdf78-e21f-4952-a6a3-2a66d509b1bb\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" Jan 08 23:31:15 crc kubenswrapper[4945]: I0108 23:31:15.633626 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" Jan 08 23:31:15 crc kubenswrapper[4945]: I0108 23:31:15.840522 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc"] Jan 08 23:31:16 crc kubenswrapper[4945]: I0108 23:31:16.667750 4945 generic.go:334] "Generic (PLEG): container finished" podID="57ccdf78-e21f-4952-a6a3-2a66d509b1bb" containerID="fb1d60a0a8680e63eb844bf2de1e8d6e70b7cc22659f66f9b981e869508cc570" exitCode=0 Jan 08 23:31:16 crc kubenswrapper[4945]: I0108 23:31:16.667923 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" event={"ID":"57ccdf78-e21f-4952-a6a3-2a66d509b1bb","Type":"ContainerDied","Data":"fb1d60a0a8680e63eb844bf2de1e8d6e70b7cc22659f66f9b981e869508cc570"} Jan 08 23:31:16 crc kubenswrapper[4945]: I0108 23:31:16.669956 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" event={"ID":"57ccdf78-e21f-4952-a6a3-2a66d509b1bb","Type":"ContainerStarted","Data":"7cfbc95836fa827827f7c9965b2c16018d605be104d3987aa463536aa6e3a4b8"} Jan 08 23:31:18 crc kubenswrapper[4945]: I0108 23:31:18.659894 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-pmfct" podUID="490f6a3a-21e2-4264-8a92-75202ba3db64" containerName="console" containerID="cri-o://c58b8617832b1b3bf09789f40bf2a968daf251c0ae96b79d3f162e946481a8e5" gracePeriod=15 Jan 08 23:31:18 crc kubenswrapper[4945]: I0108 23:31:18.687704 4945 generic.go:334] "Generic (PLEG): container finished" podID="57ccdf78-e21f-4952-a6a3-2a66d509b1bb" containerID="7602b0bd9dd5fe3b969872982afe7b52aee2c5621deff7b7d551cd244835bf5e" exitCode=0 Jan 08 23:31:18 crc kubenswrapper[4945]: I0108 23:31:18.687745 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" event={"ID":"57ccdf78-e21f-4952-a6a3-2a66d509b1bb","Type":"ContainerDied","Data":"7602b0bd9dd5fe3b969872982afe7b52aee2c5621deff7b7d551cd244835bf5e"} Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.142954 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-pmfct_490f6a3a-21e2-4264-8a92-75202ba3db64/console/0.log" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.143031 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.236810 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-console-config\") pod \"490f6a3a-21e2-4264-8a92-75202ba3db64\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.236898 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsdw8\" (UniqueName: \"kubernetes.io/projected/490f6a3a-21e2-4264-8a92-75202ba3db64-kube-api-access-zsdw8\") pod \"490f6a3a-21e2-4264-8a92-75202ba3db64\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.236984 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-oauth-serving-cert\") pod \"490f6a3a-21e2-4264-8a92-75202ba3db64\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.237733 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-service-ca\") pod \"490f6a3a-21e2-4264-8a92-75202ba3db64\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.237575 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-console-config" (OuterVolumeSpecName: "console-config") pod "490f6a3a-21e2-4264-8a92-75202ba3db64" (UID: "490f6a3a-21e2-4264-8a92-75202ba3db64"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.237666 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "490f6a3a-21e2-4264-8a92-75202ba3db64" (UID: "490f6a3a-21e2-4264-8a92-75202ba3db64"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.238351 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-service-ca" (OuterVolumeSpecName: "service-ca") pod "490f6a3a-21e2-4264-8a92-75202ba3db64" (UID: "490f6a3a-21e2-4264-8a92-75202ba3db64"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.238539 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/490f6a3a-21e2-4264-8a92-75202ba3db64-console-oauth-config\") pod \"490f6a3a-21e2-4264-8a92-75202ba3db64\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.238583 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-trusted-ca-bundle\") pod \"490f6a3a-21e2-4264-8a92-75202ba3db64\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.239177 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "490f6a3a-21e2-4264-8a92-75202ba3db64" (UID: "490f6a3a-21e2-4264-8a92-75202ba3db64"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.239218 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/490f6a3a-21e2-4264-8a92-75202ba3db64-console-serving-cert\") pod \"490f6a3a-21e2-4264-8a92-75202ba3db64\" (UID: \"490f6a3a-21e2-4264-8a92-75202ba3db64\") " Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.239674 4945 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-console-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.239696 4945 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.239711 4945 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-service-ca\") on node \"crc\" DevicePath \"\"" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.239749 4945 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/490f6a3a-21e2-4264-8a92-75202ba3db64-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.242947 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/490f6a3a-21e2-4264-8a92-75202ba3db64-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "490f6a3a-21e2-4264-8a92-75202ba3db64" (UID: "490f6a3a-21e2-4264-8a92-75202ba3db64"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.243298 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/490f6a3a-21e2-4264-8a92-75202ba3db64-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "490f6a3a-21e2-4264-8a92-75202ba3db64" (UID: "490f6a3a-21e2-4264-8a92-75202ba3db64"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.243709 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/490f6a3a-21e2-4264-8a92-75202ba3db64-kube-api-access-zsdw8" (OuterVolumeSpecName: "kube-api-access-zsdw8") pod "490f6a3a-21e2-4264-8a92-75202ba3db64" (UID: "490f6a3a-21e2-4264-8a92-75202ba3db64"). InnerVolumeSpecName "kube-api-access-zsdw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.341386 4945 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/490f6a3a-21e2-4264-8a92-75202ba3db64-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.341418 4945 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/490f6a3a-21e2-4264-8a92-75202ba3db64-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.341427 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsdw8\" (UniqueName: \"kubernetes.io/projected/490f6a3a-21e2-4264-8a92-75202ba3db64-kube-api-access-zsdw8\") on node \"crc\" DevicePath \"\"" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.697190 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-pmfct_490f6a3a-21e2-4264-8a92-75202ba3db64/console/0.log" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.697249 4945 generic.go:334] "Generic (PLEG): container finished" podID="490f6a3a-21e2-4264-8a92-75202ba3db64" containerID="c58b8617832b1b3bf09789f40bf2a968daf251c0ae96b79d3f162e946481a8e5" exitCode=2 Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.697318 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pmfct" event={"ID":"490f6a3a-21e2-4264-8a92-75202ba3db64","Type":"ContainerDied","Data":"c58b8617832b1b3bf09789f40bf2a968daf251c0ae96b79d3f162e946481a8e5"} Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.697350 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pmfct" event={"ID":"490f6a3a-21e2-4264-8a92-75202ba3db64","Type":"ContainerDied","Data":"46fe5819e68033243c5f61ea9954be646947bd6f99c5f158ddbdd1cfaef47667"} Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.697372 4945 scope.go:117] "RemoveContainer" containerID="c58b8617832b1b3bf09789f40bf2a968daf251c0ae96b79d3f162e946481a8e5" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.697692 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-pmfct" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.701853 4945 generic.go:334] "Generic (PLEG): container finished" podID="57ccdf78-e21f-4952-a6a3-2a66d509b1bb" containerID="a2374a603045c52f145db08023dbc8e02e7dfe2f6f72885bc3ee25d40a225f01" exitCode=0 Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.701903 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" event={"ID":"57ccdf78-e21f-4952-a6a3-2a66d509b1bb","Type":"ContainerDied","Data":"a2374a603045c52f145db08023dbc8e02e7dfe2f6f72885bc3ee25d40a225f01"} Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.725160 4945 scope.go:117] "RemoveContainer" containerID="c58b8617832b1b3bf09789f40bf2a968daf251c0ae96b79d3f162e946481a8e5" Jan 08 23:31:19 crc kubenswrapper[4945]: E0108 23:31:19.725527 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c58b8617832b1b3bf09789f40bf2a968daf251c0ae96b79d3f162e946481a8e5\": container with ID starting with c58b8617832b1b3bf09789f40bf2a968daf251c0ae96b79d3f162e946481a8e5 not found: ID does not exist" containerID="c58b8617832b1b3bf09789f40bf2a968daf251c0ae96b79d3f162e946481a8e5" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.725551 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c58b8617832b1b3bf09789f40bf2a968daf251c0ae96b79d3f162e946481a8e5"} err="failed to get container status \"c58b8617832b1b3bf09789f40bf2a968daf251c0ae96b79d3f162e946481a8e5\": rpc error: code = NotFound desc = could not find container \"c58b8617832b1b3bf09789f40bf2a968daf251c0ae96b79d3f162e946481a8e5\": container with ID starting with c58b8617832b1b3bf09789f40bf2a968daf251c0ae96b79d3f162e946481a8e5 not found: ID does not exist" Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.733475 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-pmfct"] Jan 08 23:31:19 crc kubenswrapper[4945]: I0108 23:31:19.736808 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-pmfct"] Jan 08 23:31:20 crc kubenswrapper[4945]: I0108 23:31:20.008005 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="490f6a3a-21e2-4264-8a92-75202ba3db64" path="/var/lib/kubelet/pods/490f6a3a-21e2-4264-8a92-75202ba3db64/volumes" Jan 08 23:31:20 crc kubenswrapper[4945]: I0108 23:31:20.967442 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" Jan 08 23:31:21 crc kubenswrapper[4945]: I0108 23:31:21.056831 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-util\") pod \"57ccdf78-e21f-4952-a6a3-2a66d509b1bb\" (UID: \"57ccdf78-e21f-4952-a6a3-2a66d509b1bb\") " Jan 08 23:31:21 crc kubenswrapper[4945]: I0108 23:31:21.057220 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-bundle\") pod \"57ccdf78-e21f-4952-a6a3-2a66d509b1bb\" (UID: \"57ccdf78-e21f-4952-a6a3-2a66d509b1bb\") " Jan 08 23:31:21 crc kubenswrapper[4945]: I0108 23:31:21.057278 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwpg4\" (UniqueName: \"kubernetes.io/projected/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-kube-api-access-lwpg4\") pod \"57ccdf78-e21f-4952-a6a3-2a66d509b1bb\" (UID: \"57ccdf78-e21f-4952-a6a3-2a66d509b1bb\") " Jan 08 23:31:21 crc kubenswrapper[4945]: I0108 23:31:21.058121 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-bundle" (OuterVolumeSpecName: "bundle") pod "57ccdf78-e21f-4952-a6a3-2a66d509b1bb" (UID: "57ccdf78-e21f-4952-a6a3-2a66d509b1bb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:31:21 crc kubenswrapper[4945]: I0108 23:31:21.060599 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-kube-api-access-lwpg4" (OuterVolumeSpecName: "kube-api-access-lwpg4") pod "57ccdf78-e21f-4952-a6a3-2a66d509b1bb" (UID: "57ccdf78-e21f-4952-a6a3-2a66d509b1bb"). InnerVolumeSpecName "kube-api-access-lwpg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:31:21 crc kubenswrapper[4945]: I0108 23:31:21.070141 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-util" (OuterVolumeSpecName: "util") pod "57ccdf78-e21f-4952-a6a3-2a66d509b1bb" (UID: "57ccdf78-e21f-4952-a6a3-2a66d509b1bb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:31:21 crc kubenswrapper[4945]: I0108 23:31:21.158829 4945 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-util\") on node \"crc\" DevicePath \"\"" Jan 08 23:31:21 crc kubenswrapper[4945]: I0108 23:31:21.158865 4945 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:31:21 crc kubenswrapper[4945]: I0108 23:31:21.158874 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwpg4\" (UniqueName: \"kubernetes.io/projected/57ccdf78-e21f-4952-a6a3-2a66d509b1bb-kube-api-access-lwpg4\") on node \"crc\" DevicePath \"\"" Jan 08 23:31:21 crc kubenswrapper[4945]: I0108 23:31:21.717686 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" event={"ID":"57ccdf78-e21f-4952-a6a3-2a66d509b1bb","Type":"ContainerDied","Data":"7cfbc95836fa827827f7c9965b2c16018d605be104d3987aa463536aa6e3a4b8"} Jan 08 23:31:21 crc kubenswrapper[4945]: I0108 23:31:21.717730 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cfbc95836fa827827f7c9965b2c16018d605be104d3987aa463536aa6e3a4b8" Jan 08 23:31:21 crc kubenswrapper[4945]: I0108 23:31:21.717805 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.241889 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m"] Jan 08 23:31:30 crc kubenswrapper[4945]: E0108 23:31:30.242888 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57ccdf78-e21f-4952-a6a3-2a66d509b1bb" containerName="util" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.242900 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ccdf78-e21f-4952-a6a3-2a66d509b1bb" containerName="util" Jan 08 23:31:30 crc kubenswrapper[4945]: E0108 23:31:30.242911 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57ccdf78-e21f-4952-a6a3-2a66d509b1bb" containerName="extract" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.242917 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ccdf78-e21f-4952-a6a3-2a66d509b1bb" containerName="extract" Jan 08 23:31:30 crc kubenswrapper[4945]: E0108 23:31:30.242927 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57ccdf78-e21f-4952-a6a3-2a66d509b1bb" containerName="pull" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.242933 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ccdf78-e21f-4952-a6a3-2a66d509b1bb" containerName="pull" Jan 08 23:31:30 crc kubenswrapper[4945]: E0108 23:31:30.242958 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="490f6a3a-21e2-4264-8a92-75202ba3db64" containerName="console" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.242964 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="490f6a3a-21e2-4264-8a92-75202ba3db64" containerName="console" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.243147 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="490f6a3a-21e2-4264-8a92-75202ba3db64" containerName="console" Jan 08 
23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.243184 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="57ccdf78-e21f-4952-a6a3-2a66d509b1bb" containerName="extract" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.243730 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.247060 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m"] Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.250283 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.250535 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.250668 4945 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.250852 4945 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.251109 4945 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xb2b7" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.371063 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcpxn\" (UniqueName: \"kubernetes.io/projected/b278de9b-64f4-4686-811f-bee3eff92638-kube-api-access-mcpxn\") pod \"metallb-operator-controller-manager-7748b8f8-75z5m\" (UID: \"b278de9b-64f4-4686-811f-bee3eff92638\") " pod="metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.371277 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b278de9b-64f4-4686-811f-bee3eff92638-apiservice-cert\") pod \"metallb-operator-controller-manager-7748b8f8-75z5m\" (UID: \"b278de9b-64f4-4686-811f-bee3eff92638\") " pod="metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.371394 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b278de9b-64f4-4686-811f-bee3eff92638-webhook-cert\") pod \"metallb-operator-controller-manager-7748b8f8-75z5m\" (UID: \"b278de9b-64f4-4686-811f-bee3eff92638\") " pod="metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.468537 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9"] Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.469213 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.472445 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b278de9b-64f4-4686-811f-bee3eff92638-apiservice-cert\") pod \"metallb-operator-controller-manager-7748b8f8-75z5m\" (UID: \"b278de9b-64f4-4686-811f-bee3eff92638\") " pod="metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.472500 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b278de9b-64f4-4686-811f-bee3eff92638-webhook-cert\") pod \"metallb-operator-controller-manager-7748b8f8-75z5m\" (UID: \"b278de9b-64f4-4686-811f-bee3eff92638\") " pod="metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.472543 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcpxn\" (UniqueName: \"kubernetes.io/projected/b278de9b-64f4-4686-811f-bee3eff92638-kube-api-access-mcpxn\") pod \"metallb-operator-controller-manager-7748b8f8-75z5m\" (UID: \"b278de9b-64f4-4686-811f-bee3eff92638\") " pod="metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.472993 4945 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.472988 4945 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.475174 4945 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-smhh6" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.477937 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b278de9b-64f4-4686-811f-bee3eff92638-webhook-cert\") pod \"metallb-operator-controller-manager-7748b8f8-75z5m\" (UID: \"b278de9b-64f4-4686-811f-bee3eff92638\") " pod="metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.487349 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9"] Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.490596 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b278de9b-64f4-4686-811f-bee3eff92638-apiservice-cert\") pod \"metallb-operator-controller-manager-7748b8f8-75z5m\" (UID: \"b278de9b-64f4-4686-811f-bee3eff92638\") " pod="metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.495758 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcpxn\" (UniqueName: \"kubernetes.io/projected/b278de9b-64f4-4686-811f-bee3eff92638-kube-api-access-mcpxn\") pod \"metallb-operator-controller-manager-7748b8f8-75z5m\" (UID: \"b278de9b-64f4-4686-811f-bee3eff92638\") " pod="metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.564461 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.573763 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce1c5efe-289d-4419-9089-bb1fc5761690-webhook-cert\") pod \"metallb-operator-webhook-server-759575b9df-pnhm9\" (UID: \"ce1c5efe-289d-4419-9089-bb1fc5761690\") " pod="metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.573815 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzqzh\" (UniqueName: \"kubernetes.io/projected/ce1c5efe-289d-4419-9089-bb1fc5761690-kube-api-access-tzqzh\") pod \"metallb-operator-webhook-server-759575b9df-pnhm9\" (UID: \"ce1c5efe-289d-4419-9089-bb1fc5761690\") " pod="metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.573838 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce1c5efe-289d-4419-9089-bb1fc5761690-apiservice-cert\") pod \"metallb-operator-webhook-server-759575b9df-pnhm9\" (UID: \"ce1c5efe-289d-4419-9089-bb1fc5761690\") " pod="metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.675394 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce1c5efe-289d-4419-9089-bb1fc5761690-webhook-cert\") pod \"metallb-operator-webhook-server-759575b9df-pnhm9\" (UID: \"ce1c5efe-289d-4419-9089-bb1fc5761690\") " pod="metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.675448 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzqzh\" (UniqueName: \"kubernetes.io/projected/ce1c5efe-289d-4419-9089-bb1fc5761690-kube-api-access-tzqzh\") pod \"metallb-operator-webhook-server-759575b9df-pnhm9\" (UID: \"ce1c5efe-289d-4419-9089-bb1fc5761690\") " pod="metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.675481 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce1c5efe-289d-4419-9089-bb1fc5761690-apiservice-cert\") pod \"metallb-operator-webhook-server-759575b9df-pnhm9\" (UID: \"ce1c5efe-289d-4419-9089-bb1fc5761690\") " pod="metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.679597 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce1c5efe-289d-4419-9089-bb1fc5761690-apiservice-cert\") pod \"metallb-operator-webhook-server-759575b9df-pnhm9\" (UID: \"ce1c5efe-289d-4419-9089-bb1fc5761690\") " pod="metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.680113 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce1c5efe-289d-4419-9089-bb1fc5761690-webhook-cert\") pod \"metallb-operator-webhook-server-759575b9df-pnhm9\" (UID: \"ce1c5efe-289d-4419-9089-bb1fc5761690\") " 
pod="metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.694948 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzqzh\" (UniqueName: \"kubernetes.io/projected/ce1c5efe-289d-4419-9089-bb1fc5761690-kube-api-access-tzqzh\") pod \"metallb-operator-webhook-server-759575b9df-pnhm9\" (UID: \"ce1c5efe-289d-4419-9089-bb1fc5761690\") " pod="metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.833284 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9" Jan 08 23:31:30 crc kubenswrapper[4945]: I0108 23:31:30.965785 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m"] Jan 08 23:31:31 crc kubenswrapper[4945]: I0108 23:31:31.149686 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9"] Jan 08 23:31:31 crc kubenswrapper[4945]: W0108 23:31:31.156469 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce1c5efe_289d_4419_9089_bb1fc5761690.slice/crio-1b1e01856e690d675a0bce99ac852600f0900876d64b01420e92304420d3771f WatchSource:0}: Error finding container 1b1e01856e690d675a0bce99ac852600f0900876d64b01420e92304420d3771f: Status 404 returned error can't find the container with id 1b1e01856e690d675a0bce99ac852600f0900876d64b01420e92304420d3771f Jan 08 23:31:31 crc kubenswrapper[4945]: I0108 23:31:31.778055 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9" event={"ID":"ce1c5efe-289d-4419-9089-bb1fc5761690","Type":"ContainerStarted","Data":"1b1e01856e690d675a0bce99ac852600f0900876d64b01420e92304420d3771f"} Jan 08 23:31:31 crc kubenswrapper[4945]: I0108 23:31:31.779435 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m" event={"ID":"b278de9b-64f4-4686-811f-bee3eff92638","Type":"ContainerStarted","Data":"9b89435604ac59d1f846bef04c38cda59dce1ba9cbee1e7b98ecb4eeebc495c2"} Jan 08 23:31:36 crc kubenswrapper[4945]: I0108 23:31:36.821360 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m" event={"ID":"b278de9b-64f4-4686-811f-bee3eff92638","Type":"ContainerStarted","Data":"223696f62f1eb5b19c75eb4ac091af49066d871ffd8a4ddfab21f9d669690e35"} Jan 08 23:31:36 crc kubenswrapper[4945]: I0108 23:31:36.822030 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m" Jan 08 23:31:36 crc kubenswrapper[4945]: I0108 23:31:36.823770 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9" event={"ID":"ce1c5efe-289d-4419-9089-bb1fc5761690","Type":"ContainerStarted","Data":"971172b92d3279860fce34a46fbe0898865b800da40bb5faf1ab5dd5884571fa"} Jan 08 23:31:36 crc kubenswrapper[4945]: I0108 23:31:36.824054 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9" Jan 08 23:31:36 crc kubenswrapper[4945]: I0108 23:31:36.887345 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m" podStartSLOduration=1.689263312 podStartE2EDuration="6.887316975s" podCreationTimestamp="2026-01-08 23:31:30 +0000 UTC" firstStartedPulling="2026-01-08 23:31:30.983994777 +0000 UTC m=+961.295153723" lastFinishedPulling="2026-01-08 23:31:36.18204844 +0000 UTC m=+966.493207386" observedRunningTime="2026-01-08 23:31:36.85826757 +0000 UTC m=+967.169426556" watchObservedRunningTime="2026-01-08 23:31:36.887316975 +0000 UTC m=+967.198475931" Jan 08 23:31:36 crc kubenswrapper[4945]: I0108 23:31:36.890362 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9" podStartSLOduration=1.81312757 podStartE2EDuration="6.890354829s" podCreationTimestamp="2026-01-08 23:31:30 +0000 UTC" firstStartedPulling="2026-01-08 23:31:31.159437017 +0000 UTC m=+961.470595953" lastFinishedPulling="2026-01-08 23:31:36.236664266 +0000 UTC m=+966.547823212" observedRunningTime="2026-01-08 23:31:36.888189736 +0000 UTC m=+967.199348702" watchObservedRunningTime="2026-01-08 23:31:36.890354829 +0000 UTC m=+967.201513785" Jan 08 23:31:43 crc kubenswrapper[4945]: I0108 23:31:43.577729 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:31:43 crc kubenswrapper[4945]: I0108 23:31:43.578264 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:31:43 crc kubenswrapper[4945]: I0108 23:31:43.578315 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:31:43 crc kubenswrapper[4945]: I0108 23:31:43.578951 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bb86089d7fa453c2e2295e7a4532a4489dac612ea805bc40de7f57ca4589bf0f"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 08 23:31:43 crc kubenswrapper[4945]: I0108 23:31:43.579015 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://bb86089d7fa453c2e2295e7a4532a4489dac612ea805bc40de7f57ca4589bf0f" gracePeriod=600 Jan 08 23:31:43 crc kubenswrapper[4945]: I0108 23:31:43.876981 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="bb86089d7fa453c2e2295e7a4532a4489dac612ea805bc40de7f57ca4589bf0f" exitCode=0 Jan 08 23:31:43 crc kubenswrapper[4945]: I0108 23:31:43.877027 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"bb86089d7fa453c2e2295e7a4532a4489dac612ea805bc40de7f57ca4589bf0f"} Jan 08 23:31:43 crc 
Jan 08 23:31:44 crc kubenswrapper[4945]: I0108 23:31:44.884629 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"e3f7c0fd5402fc991541e7265a64423cf96ba0034b54b94c9210237909eb4a91"}
Jan 08 23:31:50 crc kubenswrapper[4945]: I0108 23:31:50.838583 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-759575b9df-pnhm9"
Jan 08 23:32:10 crc kubenswrapper[4945]: I0108 23:32:10.567296 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7748b8f8-75z5m"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.383882 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-rfzl6"]
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.384684 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rfzl6"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.386547 4945 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-nhrfk"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.387614 4945 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.394800 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-tcq8p"]
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.397014 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.402075 4945 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.402935 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.438502 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-rfzl6"]
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.470100 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-nhjd2"]
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.470922 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.473871 4945 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-hd828"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.474783 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.474922 4945 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.475074 4945 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.488934 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-mdsjd"]
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.489809 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-mdsjd"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.495586 4945 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.509964 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw6l6\" (UniqueName: \"kubernetes.io/projected/8e87d1f4-a552-4809-9897-d28efa1967da-kube-api-access-jw6l6\") pod \"controller-5bddd4b946-mdsjd\" (UID: \"8e87d1f4-a552-4809-9897-d28efa1967da\") " pod="metallb-system/controller-5bddd4b946-mdsjd"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.510038 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/27da940b-5334-4559-8cf3-754a90037ef5-frr-sockets\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.510071 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/27da940b-5334-4559-8cf3-754a90037ef5-reloader\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.510092 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9be4e803-c45e-4443-b1f5-2ea89eed04e6-memberlist\") pod \"speaker-nhjd2\" (UID: \"9be4e803-c45e-4443-b1f5-2ea89eed04e6\") " pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.510112 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/27da940b-5334-4559-8cf3-754a90037ef5-metrics\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.510137 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7nkh\" (UniqueName: \"kubernetes.io/projected/4bcee75f-e456-41cb-bf09-b6cc6052c849-kube-api-access-m7nkh\") pod \"frr-k8s-webhook-server-7784b6fcf-rfzl6\" (UID: \"4bcee75f-e456-41cb-bf09-b6cc6052c849\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rfzl6"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.510164 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bcee75f-e456-41cb-bf09-b6cc6052c849-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-rfzl6\" (UID: \"4bcee75f-e456-41cb-bf09-b6cc6052c849\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rfzl6"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.510190 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27da940b-5334-4559-8cf3-754a90037ef5-metrics-certs\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.510207 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9be4e803-c45e-4443-b1f5-2ea89eed04e6-metrics-certs\") pod \"speaker-nhjd2\" (UID: \"9be4e803-c45e-4443-b1f5-2ea89eed04e6\") " pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.510227 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/27da940b-5334-4559-8cf3-754a90037ef5-frr-conf\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.510264 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwd4g\" (UniqueName: \"kubernetes.io/projected/27da940b-5334-4559-8cf3-754a90037ef5-kube-api-access-fwd4g\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.510290 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e87d1f4-a552-4809-9897-d28efa1967da-cert\") pod \"controller-5bddd4b946-mdsjd\" (UID: \"8e87d1f4-a552-4809-9897-d28efa1967da\") " pod="metallb-system/controller-5bddd4b946-mdsjd"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.510305 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9be4e803-c45e-4443-b1f5-2ea89eed04e6-metallb-excludel2\") pod \"speaker-nhjd2\" (UID: \"9be4e803-c45e-4443-b1f5-2ea89eed04e6\") " pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.510405 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/27da940b-5334-4559-8cf3-754a90037ef5-frr-startup\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.510425 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcn6k\" (UniqueName: \"kubernetes.io/projected/9be4e803-c45e-4443-b1f5-2ea89eed04e6-kube-api-access-hcn6k\") pod \"speaker-nhjd2\" (UID: \"9be4e803-c45e-4443-b1f5-2ea89eed04e6\") " pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.510449 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e87d1f4-a552-4809-9897-d28efa1967da-metrics-certs\") pod \"controller-5bddd4b946-mdsjd\" (UID: \"8e87d1f4-a552-4809-9897-d28efa1967da\") " pod="metallb-system/controller-5bddd4b946-mdsjd"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.511862 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-mdsjd"]
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.611933 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/27da940b-5334-4559-8cf3-754a90037ef5-frr-sockets\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.612332 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/27da940b-5334-4559-8cf3-754a90037ef5-frr-sockets\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.612386 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/27da940b-5334-4559-8cf3-754a90037ef5-reloader\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.612409 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9be4e803-c45e-4443-b1f5-2ea89eed04e6-memberlist\") pod \"speaker-nhjd2\" (UID: \"9be4e803-c45e-4443-b1f5-2ea89eed04e6\") " pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:11 crc kubenswrapper[4945]: E0108 23:32:11.612574 4945 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.612598 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/27da940b-5334-4559-8cf3-754a90037ef5-reloader\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.612624 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/27da940b-5334-4559-8cf3-754a90037ef5-metrics\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.612648 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7nkh\" (UniqueName: \"kubernetes.io/projected/4bcee75f-e456-41cb-bf09-b6cc6052c849-kube-api-access-m7nkh\") pod \"frr-k8s-webhook-server-7784b6fcf-rfzl6\" (UID: \"4bcee75f-e456-41cb-bf09-b6cc6052c849\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rfzl6"
Jan 08 23:32:11 crc kubenswrapper[4945]: E0108 23:32:11.612662 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9be4e803-c45e-4443-b1f5-2ea89eed04e6-memberlist podName:9be4e803-c45e-4443-b1f5-2ea89eed04e6 nodeName:}" failed. No retries permitted until 2026-01-08 23:32:12.112644922 +0000 UTC m=+1002.423803868 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9be4e803-c45e-4443-b1f5-2ea89eed04e6-memberlist") pod "speaker-nhjd2" (UID: "9be4e803-c45e-4443-b1f5-2ea89eed04e6") : secret "metallb-memberlist" not found
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.612690 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bcee75f-e456-41cb-bf09-b6cc6052c849-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-rfzl6\" (UID: \"4bcee75f-e456-41cb-bf09-b6cc6052c849\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rfzl6"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.612760 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9be4e803-c45e-4443-b1f5-2ea89eed04e6-metrics-certs\") pod \"speaker-nhjd2\" (UID: \"9be4e803-c45e-4443-b1f5-2ea89eed04e6\") " pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.612786 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27da940b-5334-4559-8cf3-754a90037ef5-metrics-certs\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.612813 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/27da940b-5334-4559-8cf3-754a90037ef5-frr-conf\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.612833 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwd4g\" (UniqueName: \"kubernetes.io/projected/27da940b-5334-4559-8cf3-754a90037ef5-kube-api-access-fwd4g\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.612889 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e87d1f4-a552-4809-9897-d28efa1967da-cert\") pod \"controller-5bddd4b946-mdsjd\" (UID: \"8e87d1f4-a552-4809-9897-d28efa1967da\") " pod="metallb-system/controller-5bddd4b946-mdsjd"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.612908 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9be4e803-c45e-4443-b1f5-2ea89eed04e6-metallb-excludel2\") pod \"speaker-nhjd2\" (UID: \"9be4e803-c45e-4443-b1f5-2ea89eed04e6\") " pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:11 crc kubenswrapper[4945]: E0108 23:32:11.612924 4945 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.612939 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/27da940b-5334-4559-8cf3-754a90037ef5-frr-startup\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: E0108 23:32:11.612961 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9be4e803-c45e-4443-b1f5-2ea89eed04e6-metrics-certs podName:9be4e803-c45e-4443-b1f5-2ea89eed04e6 nodeName:}" failed. No retries permitted until 2026-01-08 23:32:12.112947479 +0000 UTC m=+1002.424106425 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9be4e803-c45e-4443-b1f5-2ea89eed04e6-metrics-certs") pod "speaker-nhjd2" (UID: "9be4e803-c45e-4443-b1f5-2ea89eed04e6") : secret "speaker-certs-secret" not found
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.612980 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcn6k\" (UniqueName: \"kubernetes.io/projected/9be4e803-c45e-4443-b1f5-2ea89eed04e6-kube-api-access-hcn6k\") pod \"speaker-nhjd2\" (UID: \"9be4e803-c45e-4443-b1f5-2ea89eed04e6\") " pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.612980 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/27da940b-5334-4559-8cf3-754a90037ef5-metrics\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.613022 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e87d1f4-a552-4809-9897-d28efa1967da-metrics-certs\") pod \"controller-5bddd4b946-mdsjd\" (UID: \"8e87d1f4-a552-4809-9897-d28efa1967da\") " pod="metallb-system/controller-5bddd4b946-mdsjd"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.613049 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw6l6\" (UniqueName: \"kubernetes.io/projected/8e87d1f4-a552-4809-9897-d28efa1967da-kube-api-access-jw6l6\") pod \"controller-5bddd4b946-mdsjd\" (UID: \"8e87d1f4-a552-4809-9897-d28efa1967da\") " pod="metallb-system/controller-5bddd4b946-mdsjd"
Jan 08 23:32:11 crc kubenswrapper[4945]: E0108 23:32:11.613318 4945 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Jan 08 23:32:11 crc kubenswrapper[4945]: E0108 23:32:11.613357 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e87d1f4-a552-4809-9897-d28efa1967da-metrics-certs podName:8e87d1f4-a552-4809-9897-d28efa1967da nodeName:}" failed. No retries permitted until 2026-01-08 23:32:12.113341549 +0000 UTC m=+1002.424500495 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8e87d1f4-a552-4809-9897-d28efa1967da-metrics-certs") pod "controller-5bddd4b946-mdsjd" (UID: "8e87d1f4-a552-4809-9897-d28efa1967da") : secret "controller-certs-secret" not found
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.613829 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/27da940b-5334-4559-8cf3-754a90037ef5-frr-startup\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.614365 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/27da940b-5334-4559-8cf3-754a90037ef5-frr-conf\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.614969 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9be4e803-c45e-4443-b1f5-2ea89eed04e6-metallb-excludel2\") pod \"speaker-nhjd2\" (UID: \"9be4e803-c45e-4443-b1f5-2ea89eed04e6\") " pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.618716 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8e87d1f4-a552-4809-9897-d28efa1967da-cert\") pod \"controller-5bddd4b946-mdsjd\" (UID: \"8e87d1f4-a552-4809-9897-d28efa1967da\") " pod="metallb-system/controller-5bddd4b946-mdsjd"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.630254 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27da940b-5334-4559-8cf3-754a90037ef5-metrics-certs\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.635584 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4bcee75f-e456-41cb-bf09-b6cc6052c849-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-rfzl6\" (UID: \"4bcee75f-e456-41cb-bf09-b6cc6052c849\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rfzl6"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.655767 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw6l6\" (UniqueName: \"kubernetes.io/projected/8e87d1f4-a552-4809-9897-d28efa1967da-kube-api-access-jw6l6\") pod \"controller-5bddd4b946-mdsjd\" (UID: \"8e87d1f4-a552-4809-9897-d28efa1967da\") " pod="metallb-system/controller-5bddd4b946-mdsjd"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.659663 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7nkh\" (UniqueName: \"kubernetes.io/projected/4bcee75f-e456-41cb-bf09-b6cc6052c849-kube-api-access-m7nkh\") pod \"frr-k8s-webhook-server-7784b6fcf-rfzl6\" (UID: \"4bcee75f-e456-41cb-bf09-b6cc6052c849\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rfzl6"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.669654 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwd4g\" (UniqueName: \"kubernetes.io/projected/27da940b-5334-4559-8cf3-754a90037ef5-kube-api-access-fwd4g\") pod \"frr-k8s-tcq8p\" (UID: \"27da940b-5334-4559-8cf3-754a90037ef5\") " pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.670239 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcn6k\" (UniqueName: \"kubernetes.io/projected/9be4e803-c45e-4443-b1f5-2ea89eed04e6-kube-api-access-hcn6k\") pod \"speaker-nhjd2\" (UID: \"9be4e803-c45e-4443-b1f5-2ea89eed04e6\") " pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.699418 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rfzl6"
Jan 08 23:32:11 crc kubenswrapper[4945]: I0108 23:32:11.710340 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-tcq8p"
Jan 08 23:32:12 crc kubenswrapper[4945]: I0108 23:32:12.052118 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tcq8p" event={"ID":"27da940b-5334-4559-8cf3-754a90037ef5","Type":"ContainerStarted","Data":"0eeb0a96dc6e04c17426a42a6f9eab7635ac04b07b4cb7e2ba71558c08c61f75"}
Jan 08 23:32:12 crc kubenswrapper[4945]: I0108 23:32:12.129777 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9be4e803-c45e-4443-b1f5-2ea89eed04e6-memberlist\") pod \"speaker-nhjd2\" (UID: \"9be4e803-c45e-4443-b1f5-2ea89eed04e6\") " pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:12 crc kubenswrapper[4945]: I0108 23:32:12.129839 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9be4e803-c45e-4443-b1f5-2ea89eed04e6-metrics-certs\") pod \"speaker-nhjd2\" (UID: \"9be4e803-c45e-4443-b1f5-2ea89eed04e6\") " pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:12 crc kubenswrapper[4945]: I0108 23:32:12.129883 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e87d1f4-a552-4809-9897-d28efa1967da-metrics-certs\") pod \"controller-5bddd4b946-mdsjd\" (UID: \"8e87d1f4-a552-4809-9897-d28efa1967da\") " pod="metallb-system/controller-5bddd4b946-mdsjd"
Jan 08 23:32:12 crc kubenswrapper[4945]: E0108 23:32:12.129974 4945 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 08 23:32:12 crc kubenswrapper[4945]: E0108 23:32:12.130077 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9be4e803-c45e-4443-b1f5-2ea89eed04e6-memberlist podName:9be4e803-c45e-4443-b1f5-2ea89eed04e6 nodeName:}" failed. No retries permitted until 2026-01-08 23:32:13.130057451 +0000 UTC m=+1003.441216397 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9be4e803-c45e-4443-b1f5-2ea89eed04e6-memberlist") pod "speaker-nhjd2" (UID: "9be4e803-c45e-4443-b1f5-2ea89eed04e6") : secret "metallb-memberlist" not found
Jan 08 23:32:12 crc kubenswrapper[4945]: I0108 23:32:12.131474 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-rfzl6"]
Jan 08 23:32:12 crc kubenswrapper[4945]: I0108 23:32:12.134502 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e87d1f4-a552-4809-9897-d28efa1967da-metrics-certs\") pod \"controller-5bddd4b946-mdsjd\" (UID: \"8e87d1f4-a552-4809-9897-d28efa1967da\") " pod="metallb-system/controller-5bddd4b946-mdsjd"
Jan 08 23:32:12 crc kubenswrapper[4945]: I0108 23:32:12.135363 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9be4e803-c45e-4443-b1f5-2ea89eed04e6-metrics-certs\") pod \"speaker-nhjd2\" (UID: \"9be4e803-c45e-4443-b1f5-2ea89eed04e6\") " pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:12 crc kubenswrapper[4945]: W0108 23:32:12.138944 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bcee75f_e456_41cb_bf09_b6cc6052c849.slice/crio-8698c72cc1959e25cdcc64d1c2119d3c58eb428ecc869d25545aaf4908693761 WatchSource:0}: Error finding container 8698c72cc1959e25cdcc64d1c2119d3c58eb428ecc869d25545aaf4908693761: Status 404 returned error can't find the container with id 8698c72cc1959e25cdcc64d1c2119d3c58eb428ecc869d25545aaf4908693761
Jan 08 23:32:12 crc kubenswrapper[4945]: I0108 23:32:12.413172 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-mdsjd"
Jan 08 23:32:12 crc kubenswrapper[4945]: I0108 23:32:12.639796 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-mdsjd"]
Jan 08 23:32:12 crc kubenswrapper[4945]: W0108 23:32:12.656507 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e87d1f4_a552_4809_9897_d28efa1967da.slice/crio-6fbcd3b03e5fccddc5b731b3520a07c7cfec5d6c6745ba7bde8321cc8caf4249 WatchSource:0}: Error finding container 6fbcd3b03e5fccddc5b731b3520a07c7cfec5d6c6745ba7bde8321cc8caf4249: Status 404 returned error can't find the container with id 6fbcd3b03e5fccddc5b731b3520a07c7cfec5d6c6745ba7bde8321cc8caf4249
Jan 08 23:32:13 crc kubenswrapper[4945]: I0108 23:32:13.064631 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-mdsjd" event={"ID":"8e87d1f4-a552-4809-9897-d28efa1967da","Type":"ContainerStarted","Data":"0c6b72fad95a40f4589b763e65da564c971533a8b26a72724cb69a915f8e5101"}
Jan 08 23:32:13 crc kubenswrapper[4945]: I0108 23:32:13.065280 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-mdsjd" event={"ID":"8e87d1f4-a552-4809-9897-d28efa1967da","Type":"ContainerStarted","Data":"8bef72990ff0928252e25e81395c8cd90f0431fad7230fea14503b3973ae236e"}
Jan 08 23:32:13 crc kubenswrapper[4945]: I0108 23:32:13.065390 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-mdsjd" event={"ID":"8e87d1f4-a552-4809-9897-d28efa1967da","Type":"ContainerStarted","Data":"6fbcd3b03e5fccddc5b731b3520a07c7cfec5d6c6745ba7bde8321cc8caf4249"}
Jan 08 23:32:13 crc kubenswrapper[4945]: I0108 23:32:13.065507 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-mdsjd"
Jan 08 23:32:13 crc kubenswrapper[4945]: I0108 23:32:13.067821 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rfzl6" event={"ID":"4bcee75f-e456-41cb-bf09-b6cc6052c849","Type":"ContainerStarted","Data":"8698c72cc1959e25cdcc64d1c2119d3c58eb428ecc869d25545aaf4908693761"}
Jan 08 23:32:13 crc kubenswrapper[4945]: I0108 23:32:13.083801 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-mdsjd" podStartSLOduration=2.08378368 podStartE2EDuration="2.08378368s" podCreationTimestamp="2026-01-08 23:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:32:13.08133254 +0000 UTC m=+1003.392491486" watchObservedRunningTime="2026-01-08 23:32:13.08378368 +0000 UTC m=+1003.394942626"
Jan 08 23:32:13 crc kubenswrapper[4945]: I0108 23:32:13.142986 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9be4e803-c45e-4443-b1f5-2ea89eed04e6-memberlist\") pod \"speaker-nhjd2\" (UID: \"9be4e803-c45e-4443-b1f5-2ea89eed04e6\") " pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:13 crc kubenswrapper[4945]: I0108 23:32:13.150630 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9be4e803-c45e-4443-b1f5-2ea89eed04e6-memberlist\") pod \"speaker-nhjd2\" (UID: \"9be4e803-c45e-4443-b1f5-2ea89eed04e6\") " pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:13 crc kubenswrapper[4945]: I0108 23:32:13.283508 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:14 crc kubenswrapper[4945]: I0108 23:32:14.084645 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nhjd2" event={"ID":"9be4e803-c45e-4443-b1f5-2ea89eed04e6","Type":"ContainerStarted","Data":"c2a433f4877b5486c34f75a607d40c6448e67be428bfb700f59cc4e124b5d37b"}
Jan 08 23:32:14 crc kubenswrapper[4945]: I0108 23:32:14.085170 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nhjd2" event={"ID":"9be4e803-c45e-4443-b1f5-2ea89eed04e6","Type":"ContainerStarted","Data":"896e49e07ea67a420656816e10606f1034467bd0b1d44f368a00150fa9fb4f7c"}
Jan 08 23:32:14 crc kubenswrapper[4945]: I0108 23:32:14.085186 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nhjd2" event={"ID":"9be4e803-c45e-4443-b1f5-2ea89eed04e6","Type":"ContainerStarted","Data":"e295c71638642f829e29144598d246f6d70c752dbf9220caa26065e22a9145d5"}
Jan 08 23:32:14 crc kubenswrapper[4945]: I0108 23:32:14.085393 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:20 crc kubenswrapper[4945]: I0108 23:32:20.034094 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-nhjd2" podStartSLOduration=9.034071554 podStartE2EDuration="9.034071554s" podCreationTimestamp="2026-01-08 23:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:32:14.104094414 +0000 UTC m=+1004.415253370" watchObservedRunningTime="2026-01-08 23:32:20.034071554 +0000 UTC m=+1010.345230500"
Jan 08 23:32:21 crc kubenswrapper[4945]: I0108 23:32:21.134630 4945 generic.go:334] "Generic (PLEG): container finished" podID="27da940b-5334-4559-8cf3-754a90037ef5" containerID="12bcd9ae4b44ae1b0e617eb87971f72644a22e38b579d864a320ed310288ba0e" exitCode=0
Jan 08 23:32:21 crc kubenswrapper[4945]: I0108 23:32:21.135021 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tcq8p" event={"ID":"27da940b-5334-4559-8cf3-754a90037ef5","Type":"ContainerDied","Data":"12bcd9ae4b44ae1b0e617eb87971f72644a22e38b579d864a320ed310288ba0e"}
Jan 08 23:32:21 crc kubenswrapper[4945]: I0108 23:32:21.139665 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rfzl6" event={"ID":"4bcee75f-e456-41cb-bf09-b6cc6052c849","Type":"ContainerStarted","Data":"8e28bafac13ccb1ce1e35cf186179406570f1641ac04617cf76e608079e45c19"}
Jan 08 23:32:21 crc kubenswrapper[4945]: I0108 23:32:21.139890 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rfzl6"
Jan 08 23:32:21 crc kubenswrapper[4945]: I0108 23:32:21.184323 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rfzl6" podStartSLOduration=2.139706915 podStartE2EDuration="10.184304233s" podCreationTimestamp="2026-01-08 23:32:11 +0000 UTC" firstStartedPulling="2026-01-08 23:32:12.141566252 +0000 UTC m=+1002.452725198" lastFinishedPulling="2026-01-08 23:32:20.18616357 +0000 UTC m=+1010.497322516" observedRunningTime="2026-01-08 23:32:21.183565125 +0000 UTC m=+1011.494724071" watchObservedRunningTime="2026-01-08 23:32:21.184304233 +0000 UTC m=+1011.495463179"
Jan 08 23:32:22 crc kubenswrapper[4945]: I0108 23:32:22.146828 4945 generic.go:334] "Generic (PLEG): container finished" podID="27da940b-5334-4559-8cf3-754a90037ef5" containerID="dda89be3b7d75ffd3955f5f7d9c227b204ee56f6cc7812d2586defb24ff5825b" exitCode=0
Jan 08 23:32:22 crc kubenswrapper[4945]: I0108 23:32:22.146866 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tcq8p" event={"ID":"27da940b-5334-4559-8cf3-754a90037ef5","Type":"ContainerDied","Data":"dda89be3b7d75ffd3955f5f7d9c227b204ee56f6cc7812d2586defb24ff5825b"}
Jan 08 23:32:22 crc kubenswrapper[4945]: I0108 23:32:22.417341 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-mdsjd"
Jan 08 23:32:22 crc kubenswrapper[4945]: E0108 23:32:22.422404 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27da940b_5334_4559_8cf3_754a90037ef5.slice/crio-conmon-ff0f2b1e776b45e62d016fd7ae512e22548f66fb5f357d953aa1a9128d17c6b5.scope\": RecentStats: unable to find data in memory cache]"
Jan 08 23:32:23 crc kubenswrapper[4945]: I0108 23:32:23.155261 4945 generic.go:334] "Generic (PLEG): container finished" podID="27da940b-5334-4559-8cf3-754a90037ef5" containerID="ff0f2b1e776b45e62d016fd7ae512e22548f66fb5f357d953aa1a9128d17c6b5" exitCode=0
Jan 08 23:32:23 crc kubenswrapper[4945]: I0108 23:32:23.155312 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tcq8p" event={"ID":"27da940b-5334-4559-8cf3-754a90037ef5","Type":"ContainerDied","Data":"ff0f2b1e776b45e62d016fd7ae512e22548f66fb5f357d953aa1a9128d17c6b5"}
Jan 08 23:32:23 crc kubenswrapper[4945]: I0108 23:32:23.287136 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-nhjd2"
Jan 08 23:32:24 crc kubenswrapper[4945]: I0108 23:32:24.163954 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tcq8p" event={"ID":"27da940b-5334-4559-8cf3-754a90037ef5","Type":"ContainerStarted","Data":"c3b480e2cc7c022234005afa548d2418b6d29c8b13ec8079bab538d104fb4dab"}
Jan 08 23:32:24 crc kubenswrapper[4945]: I0108 23:32:24.164297 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tcq8p" event={"ID":"27da940b-5334-4559-8cf3-754a90037ef5","Type":"ContainerStarted","Data":"7f6a3e116f0912fb86b3f056e325fd584cff89f02f9ff88ddde12128a88c025c"}
Jan 08 23:32:24 crc kubenswrapper[4945]: I0108 23:32:24.164310 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tcq8p" event={"ID":"27da940b-5334-4559-8cf3-754a90037ef5","Type":"ContainerStarted","Data":"3bb4bfeea61ed6587adef554790479a8d2b6fdc9b6cfc30e7437804d55aa294b"}
Jan 08 23:32:24 crc kubenswrapper[4945]: I0108 23:32:24.164319 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tcq8p" event={"ID":"27da940b-5334-4559-8cf3-754a90037ef5","Type":"ContainerStarted","Data":"823a040d82d51585ab5b7b2f98bb5c14de9fd04fa30417642c707ccbde4f17a6"}
Jan 08 23:32:24 crc kubenswrapper[4945]: I0108 23:32:24.164330 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tcq8p" event={"ID":"27da940b-5334-4559-8cf3-754a90037ef5","Type":"ContainerStarted","Data":"00134ec85e6e7beea69cb03da2422a5bdb0c845dba450f69e6666f3540fe0cfb"}
Jan 08 23:32:24 crc kubenswrapper[4945]: I0108 23:32:24.164339 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tcq8p" event={"ID":"27da940b-5334-4559-8cf3-754a90037ef5","Type":"ContainerStarted","Data":"ae019ab3ccbd5ab56ea3bd57cc6dce2bbf05fcea8814e5ab57aefcb9d89f9739"}
event={"ID":"27da940b-5334-4559-8cf3-754a90037ef5","Type":"ContainerStarted","Data":"ae019ab3ccbd5ab56ea3bd57cc6dce2bbf05fcea8814e5ab57aefcb9d89f9739"} Jan 08 23:32:24 crc kubenswrapper[4945]: I0108 23:32:24.164393 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-tcq8p" Jan 08 23:32:24 crc kubenswrapper[4945]: I0108 23:32:24.183586 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-tcq8p" podStartSLOduration=4.842920301 podStartE2EDuration="13.18356704s" podCreationTimestamp="2026-01-08 23:32:11 +0000 UTC" firstStartedPulling="2026-01-08 23:32:11.831001016 +0000 UTC m=+1002.142159962" lastFinishedPulling="2026-01-08 23:32:20.171647755 +0000 UTC m=+1010.482806701" observedRunningTime="2026-01-08 23:32:24.183278983 +0000 UTC m=+1014.494437929" watchObservedRunningTime="2026-01-08 23:32:24.18356704 +0000 UTC m=+1014.494725976" Jan 08 23:32:24 crc kubenswrapper[4945]: I0108 23:32:24.996136 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs"] Jan 08 23:32:24 crc kubenswrapper[4945]: I0108 23:32:24.998318 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" Jan 08 23:32:25 crc kubenswrapper[4945]: I0108 23:32:25.004319 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 08 23:32:25 crc kubenswrapper[4945]: I0108 23:32:25.009546 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs"] Jan 08 23:32:25 crc kubenswrapper[4945]: I0108 23:32:25.018337 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75c30e97-a22c-489e-84d5-053809039e77-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs\" (UID: \"75c30e97-a22c-489e-84d5-053809039e77\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" Jan 08 23:32:25 crc kubenswrapper[4945]: I0108 23:32:25.018461 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75c30e97-a22c-489e-84d5-053809039e77-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs\" (UID: \"75c30e97-a22c-489e-84d5-053809039e77\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" Jan 08 23:32:25 crc kubenswrapper[4945]: I0108 23:32:25.018645 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w7xk\" (UniqueName: \"kubernetes.io/projected/75c30e97-a22c-489e-84d5-053809039e77-kube-api-access-9w7xk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs\" (UID: \"75c30e97-a22c-489e-84d5-053809039e77\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" Jan 08 23:32:25 crc kubenswrapper[4945]: I0108 23:32:25.120306 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75c30e97-a22c-489e-84d5-053809039e77-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs\" (UID: 
\"75c30e97-a22c-489e-84d5-053809039e77\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" Jan 08 23:32:25 crc kubenswrapper[4945]: I0108 23:32:25.120371 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75c30e97-a22c-489e-84d5-053809039e77-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs\" (UID: \"75c30e97-a22c-489e-84d5-053809039e77\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" Jan 08 23:32:25 crc kubenswrapper[4945]: I0108 23:32:25.120422 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w7xk\" (UniqueName: \"kubernetes.io/projected/75c30e97-a22c-489e-84d5-053809039e77-kube-api-access-9w7xk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs\" (UID: \"75c30e97-a22c-489e-84d5-053809039e77\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" Jan 08 23:32:25 crc kubenswrapper[4945]: I0108 23:32:25.121361 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75c30e97-a22c-489e-84d5-053809039e77-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs\" (UID: \"75c30e97-a22c-489e-84d5-053809039e77\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" Jan 08 23:32:25 crc kubenswrapper[4945]: I0108 23:32:25.121724 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75c30e97-a22c-489e-84d5-053809039e77-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs\" (UID: \"75c30e97-a22c-489e-84d5-053809039e77\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" Jan 08 23:32:25 crc kubenswrapper[4945]: I0108 23:32:25.146207 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w7xk\" (UniqueName: \"kubernetes.io/projected/75c30e97-a22c-489e-84d5-053809039e77-kube-api-access-9w7xk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs\" (UID: \"75c30e97-a22c-489e-84d5-053809039e77\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" Jan 08 23:32:25 crc kubenswrapper[4945]: I0108 23:32:25.374394 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" Jan 08 23:32:25 crc kubenswrapper[4945]: I0108 23:32:25.802475 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs"] Jan 08 23:32:25 crc kubenswrapper[4945]: W0108 23:32:25.817821 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75c30e97_a22c_489e_84d5_053809039e77.slice/crio-3bc72dbde9a17a4cc9d75fa5252992ee6af5b671318d787cf692f49c6ab9fa5b WatchSource:0}: Error finding container 3bc72dbde9a17a4cc9d75fa5252992ee6af5b671318d787cf692f49c6ab9fa5b: Status 404 returned error can't find the container with id 3bc72dbde9a17a4cc9d75fa5252992ee6af5b671318d787cf692f49c6ab9fa5b Jan 08 23:32:26 crc kubenswrapper[4945]: I0108 23:32:26.176596 4945 generic.go:334] "Generic (PLEG): container finished" podID="75c30e97-a22c-489e-84d5-053809039e77" containerID="4feb59e8d9f7bf1ffd68016670386fa21917c95d3e52352de9b935b3c7fdb702" exitCode=0 Jan 08 23:32:26 crc kubenswrapper[4945]: I0108 23:32:26.176639 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" event={"ID":"75c30e97-a22c-489e-84d5-053809039e77","Type":"ContainerDied","Data":"4feb59e8d9f7bf1ffd68016670386fa21917c95d3e52352de9b935b3c7fdb702"} Jan 08 23:32:26 crc kubenswrapper[4945]: I0108 23:32:26.176663 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" event={"ID":"75c30e97-a22c-489e-84d5-053809039e77","Type":"ContainerStarted","Data":"3bc72dbde9a17a4cc9d75fa5252992ee6af5b671318d787cf692f49c6ab9fa5b"} Jan 08 23:32:26 crc kubenswrapper[4945]: I0108 23:32:26.711330 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-tcq8p" Jan 08 23:32:26 crc kubenswrapper[4945]: I0108 23:32:26.760793 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-tcq8p" Jan 08 23:32:31 crc kubenswrapper[4945]: I0108 23:32:31.703948 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rfzl6" Jan 08 23:32:41 crc kubenswrapper[4945]: I0108 23:32:41.716296 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-tcq8p" Jan 08 23:32:43 crc kubenswrapper[4945]: I0108 23:32:43.282867 4945 generic.go:334] "Generic (PLEG): container finished" podID="75c30e97-a22c-489e-84d5-053809039e77" containerID="51aebfe4b141fe52dc615e11cc3c35b99236124909d9609125c726bed2d2b164" exitCode=0 Jan 08 23:32:43 crc kubenswrapper[4945]: I0108 23:32:43.283051 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" event={"ID":"75c30e97-a22c-489e-84d5-053809039e77","Type":"ContainerDied","Data":"51aebfe4b141fe52dc615e11cc3c35b99236124909d9609125c726bed2d2b164"} Jan 08 23:32:44 crc kubenswrapper[4945]: I0108 23:32:44.293859 4945 generic.go:334] "Generic (PLEG): container finished" podID="75c30e97-a22c-489e-84d5-053809039e77" containerID="880c2e7168a7a0085d016cec81396001fc5174e3031bce6c75952ff4a5db6b01" exitCode=0 Jan 08 23:32:44 crc kubenswrapper[4945]: I0108 23:32:44.293969 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" event={"ID":"75c30e97-a22c-489e-84d5-053809039e77","Type":"ContainerDied","Data":"880c2e7168a7a0085d016cec81396001fc5174e3031bce6c75952ff4a5db6b01"} Jan 08 23:32:45 crc kubenswrapper[4945]: I0108 23:32:45.587714 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" Jan 08 23:32:45 crc kubenswrapper[4945]: I0108 23:32:45.602392 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w7xk\" (UniqueName: \"kubernetes.io/projected/75c30e97-a22c-489e-84d5-053809039e77-kube-api-access-9w7xk\") pod \"75c30e97-a22c-489e-84d5-053809039e77\" (UID: \"75c30e97-a22c-489e-84d5-053809039e77\") " Jan 08 23:32:45 crc kubenswrapper[4945]: I0108 23:32:45.602500 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75c30e97-a22c-489e-84d5-053809039e77-bundle\") pod \"75c30e97-a22c-489e-84d5-053809039e77\" (UID: \"75c30e97-a22c-489e-84d5-053809039e77\") " Jan 08 23:32:45 crc kubenswrapper[4945]: I0108 23:32:45.602604 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75c30e97-a22c-489e-84d5-053809039e77-util\") pod \"75c30e97-a22c-489e-84d5-053809039e77\" (UID: \"75c30e97-a22c-489e-84d5-053809039e77\") " Jan 08 23:32:45 crc kubenswrapper[4945]: I0108 23:32:45.603704 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75c30e97-a22c-489e-84d5-053809039e77-bundle" (OuterVolumeSpecName: "bundle") pod "75c30e97-a22c-489e-84d5-053809039e77" (UID: "75c30e97-a22c-489e-84d5-053809039e77"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:32:45 crc kubenswrapper[4945]: I0108 23:32:45.611472 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75c30e97-a22c-489e-84d5-053809039e77-kube-api-access-9w7xk" (OuterVolumeSpecName: "kube-api-access-9w7xk") pod "75c30e97-a22c-489e-84d5-053809039e77" (UID: "75c30e97-a22c-489e-84d5-053809039e77"). InnerVolumeSpecName "kube-api-access-9w7xk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:32:45 crc kubenswrapper[4945]: I0108 23:32:45.615495 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75c30e97-a22c-489e-84d5-053809039e77-util" (OuterVolumeSpecName: "util") pod "75c30e97-a22c-489e-84d5-053809039e77" (UID: "75c30e97-a22c-489e-84d5-053809039e77"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:32:45 crc kubenswrapper[4945]: I0108 23:32:45.703973 4945 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/75c30e97-a22c-489e-84d5-053809039e77-util\") on node \"crc\" DevicePath \"\"" Jan 08 23:32:45 crc kubenswrapper[4945]: I0108 23:32:45.704049 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w7xk\" (UniqueName: \"kubernetes.io/projected/75c30e97-a22c-489e-84d5-053809039e77-kube-api-access-9w7xk\") on node \"crc\" DevicePath \"\"" Jan 08 23:32:45 crc kubenswrapper[4945]: I0108 23:32:45.704062 4945 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/75c30e97-a22c-489e-84d5-053809039e77-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:32:46 crc kubenswrapper[4945]: I0108 23:32:46.309240 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" event={"ID":"75c30e97-a22c-489e-84d5-053809039e77","Type":"ContainerDied","Data":"3bc72dbde9a17a4cc9d75fa5252992ee6af5b671318d787cf692f49c6ab9fa5b"} Jan 08 23:32:46 crc kubenswrapper[4945]: I0108 23:32:46.309287 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bc72dbde9a17a4cc9d75fa5252992ee6af5b671318d787cf692f49c6ab9fa5b" Jan 08 23:32:46 crc kubenswrapper[4945]: I0108 23:32:46.309290 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs" Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.132278 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fsfdw"] Jan 08 23:32:53 crc kubenswrapper[4945]: E0108 23:32:53.133723 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c30e97-a22c-489e-84d5-053809039e77" containerName="util" Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.133792 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c30e97-a22c-489e-84d5-053809039e77" containerName="util" Jan 08 23:32:53 crc kubenswrapper[4945]: E0108 23:32:53.133821 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c30e97-a22c-489e-84d5-053809039e77" containerName="pull" Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.133836 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c30e97-a22c-489e-84d5-053809039e77" containerName="pull" Jan 08 23:32:53 crc kubenswrapper[4945]: E0108 23:32:53.133877 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c30e97-a22c-489e-84d5-053809039e77" containerName="extract" Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.133890 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c30e97-a22c-489e-84d5-053809039e77" containerName="extract" Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.140329 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="75c30e97-a22c-489e-84d5-053809039e77" containerName="extract" Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.141231 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fsfdw" Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.142675 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fsfdw"] Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.144956 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.145448 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.146148 4945 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-k7gn2" Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.297753 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/803318e5-5161-43d9-8a2a-fb4820b354d9-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fsfdw\" (UID: \"803318e5-5161-43d9-8a2a-fb4820b354d9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fsfdw" Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.297809 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw9md\" (UniqueName: \"kubernetes.io/projected/803318e5-5161-43d9-8a2a-fb4820b354d9-kube-api-access-xw9md\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fsfdw\" (UID: \"803318e5-5161-43d9-8a2a-fb4820b354d9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fsfdw" Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.399222 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/803318e5-5161-43d9-8a2a-fb4820b354d9-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fsfdw\" (UID: \"803318e5-5161-43d9-8a2a-fb4820b354d9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fsfdw" Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.399263 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw9md\" (UniqueName: \"kubernetes.io/projected/803318e5-5161-43d9-8a2a-fb4820b354d9-kube-api-access-xw9md\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fsfdw\" (UID: \"803318e5-5161-43d9-8a2a-fb4820b354d9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fsfdw" Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.400176 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/803318e5-5161-43d9-8a2a-fb4820b354d9-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fsfdw\" (UID: \"803318e5-5161-43d9-8a2a-fb4820b354d9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fsfdw" Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.426165 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw9md\" (UniqueName: \"kubernetes.io/projected/803318e5-5161-43d9-8a2a-fb4820b354d9-kube-api-access-xw9md\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fsfdw\" (UID: \"803318e5-5161-43d9-8a2a-fb4820b354d9\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fsfdw" Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.474598 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fsfdw" Jan 08 23:32:53 crc kubenswrapper[4945]: I0108 23:32:53.818614 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fsfdw"] Jan 08 23:32:54 crc kubenswrapper[4945]: I0108 23:32:54.391488 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fsfdw" event={"ID":"803318e5-5161-43d9-8a2a-fb4820b354d9","Type":"ContainerStarted","Data":"7511ad0544016142532fb970636fa3e2d471eb697fcf4d353bdd31e83b2bdcd1"} Jan 08 23:33:10 crc kubenswrapper[4945]: I0108 23:33:10.484196 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fsfdw" event={"ID":"803318e5-5161-43d9-8a2a-fb4820b354d9","Type":"ContainerStarted","Data":"527e7c2033bc4e64e430d6a0105264841207a94422f21147d390aa7e262a3155"} Jan 08 23:33:10 crc kubenswrapper[4945]: I0108 23:33:10.501838 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fsfdw" podStartSLOduration=1.271828829 podStartE2EDuration="17.501821103s" podCreationTimestamp="2026-01-08 23:32:53 +0000 UTC" firstStartedPulling="2026-01-08 23:32:53.82937672 +0000 UTC m=+1044.140535656" lastFinishedPulling="2026-01-08 23:33:10.059368984 +0000 UTC m=+1060.370527930" observedRunningTime="2026-01-08 23:33:10.500829089 +0000 UTC m=+1060.811988045" watchObservedRunningTime="2026-01-08 23:33:10.501821103 +0000 UTC m=+1060.812980069" Jan 08 23:33:12 crc kubenswrapper[4945]: I0108 23:33:12.708879 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-csm2r"] Jan 08 23:33:12 crc kubenswrapper[4945]: I0108 23:33:12.710168 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-csm2r" Jan 08 23:33:12 crc kubenswrapper[4945]: I0108 23:33:12.717433 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 08 23:33:12 crc kubenswrapper[4945]: I0108 23:33:12.718023 4945 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5m4hx" Jan 08 23:33:12 crc kubenswrapper[4945]: I0108 23:33:12.720986 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-csm2r"] Jan 08 23:33:12 crc kubenswrapper[4945]: I0108 23:33:12.724638 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 08 23:33:12 crc kubenswrapper[4945]: I0108 23:33:12.862345 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d143d502-c896-48f2-bbd3-f9fbc4e814fc-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-csm2r\" (UID: \"d143d502-c896-48f2-bbd3-f9fbc4e814fc\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-csm2r" Jan 08 23:33:12 crc kubenswrapper[4945]: I0108 23:33:12.862410 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpx9p\" (UniqueName: \"kubernetes.io/projected/d143d502-c896-48f2-bbd3-f9fbc4e814fc-kube-api-access-kpx9p\") pod \"cert-manager-webhook-f4fb5df64-csm2r\" (UID: \"d143d502-c896-48f2-bbd3-f9fbc4e814fc\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-csm2r" Jan 08 23:33:12 crc kubenswrapper[4945]: I0108 23:33:12.964111 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d143d502-c896-48f2-bbd3-f9fbc4e814fc-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-csm2r\" (UID: \"d143d502-c896-48f2-bbd3-f9fbc4e814fc\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-csm2r" Jan 08 23:33:12 crc kubenswrapper[4945]: I0108 23:33:12.964182 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpx9p\" (UniqueName: \"kubernetes.io/projected/d143d502-c896-48f2-bbd3-f9fbc4e814fc-kube-api-access-kpx9p\") pod \"cert-manager-webhook-f4fb5df64-csm2r\" (UID: \"d143d502-c896-48f2-bbd3-f9fbc4e814fc\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-csm2r" Jan 08 23:33:12 crc kubenswrapper[4945]: I0108 23:33:12.986765 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d143d502-c896-48f2-bbd3-f9fbc4e814fc-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-csm2r\" (UID: \"d143d502-c896-48f2-bbd3-f9fbc4e814fc\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-csm2r" Jan 08 23:33:12 crc kubenswrapper[4945]: I0108 23:33:12.989156 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpx9p\" (UniqueName: \"kubernetes.io/projected/d143d502-c896-48f2-bbd3-f9fbc4e814fc-kube-api-access-kpx9p\") pod \"cert-manager-webhook-f4fb5df64-csm2r\" (UID: \"d143d502-c896-48f2-bbd3-f9fbc4e814fc\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-csm2r" Jan 08 23:33:13 crc kubenswrapper[4945]: I0108 23:33:13.027853 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-csm2r" Jan 08 23:33:13 crc kubenswrapper[4945]: I0108 23:33:13.464018 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-csm2r"] Jan 08 23:33:13 crc kubenswrapper[4945]: I0108 23:33:13.501250 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-csm2r" event={"ID":"d143d502-c896-48f2-bbd3-f9fbc4e814fc","Type":"ContainerStarted","Data":"c33ade85500a1548807a7323cebb97800d675413dc0840baa7986d14b9991db4"} Jan 08 23:33:15 crc kubenswrapper[4945]: I0108 23:33:15.288833 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-vb49h"] Jan 08 23:33:15 crc kubenswrapper[4945]: I0108 23:33:15.289971 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-vb49h" Jan 08 23:33:15 crc kubenswrapper[4945]: I0108 23:33:15.292326 4945 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-d2l78" Jan 08 23:33:15 crc kubenswrapper[4945]: I0108 23:33:15.341699 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-vb49h"] Jan 08 23:33:15 crc kubenswrapper[4945]: I0108 23:33:15.409867 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2c1f140e-13b8-4135-bd0f-ac210dd8d3d1-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-vb49h\" (UID: \"2c1f140e-13b8-4135-bd0f-ac210dd8d3d1\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-vb49h" Jan 08 23:33:15 crc kubenswrapper[4945]: I0108 23:33:15.409940 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkrwn\" (UniqueName: \"kubernetes.io/projected/2c1f140e-13b8-4135-bd0f-ac210dd8d3d1-kube-api-access-bkrwn\") pod \"cert-manager-cainjector-855d9ccff4-vb49h\" (UID: \"2c1f140e-13b8-4135-bd0f-ac210dd8d3d1\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-vb49h" Jan 08 23:33:15 crc kubenswrapper[4945]: I0108 23:33:15.513024 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2c1f140e-13b8-4135-bd0f-ac210dd8d3d1-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-vb49h\" (UID: \"2c1f140e-13b8-4135-bd0f-ac210dd8d3d1\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-vb49h" Jan 08 23:33:15 crc kubenswrapper[4945]: I0108 23:33:15.513211 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkrwn\" (UniqueName: \"kubernetes.io/projected/2c1f140e-13b8-4135-bd0f-ac210dd8d3d1-kube-api-access-bkrwn\") pod \"cert-manager-cainjector-855d9ccff4-vb49h\" (UID: \"2c1f140e-13b8-4135-bd0f-ac210dd8d3d1\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-vb49h" Jan 08 23:33:15 crc kubenswrapper[4945]: I0108 23:33:15.530832 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkrwn\" (UniqueName: \"kubernetes.io/projected/2c1f140e-13b8-4135-bd0f-ac210dd8d3d1-kube-api-access-bkrwn\") pod \"cert-manager-cainjector-855d9ccff4-vb49h\" (UID: \"2c1f140e-13b8-4135-bd0f-ac210dd8d3d1\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-vb49h" Jan 08 23:33:15 crc kubenswrapper[4945]: I0108 23:33:15.535164 4945 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2c1f140e-13b8-4135-bd0f-ac210dd8d3d1-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-vb49h\" (UID: \"2c1f140e-13b8-4135-bd0f-ac210dd8d3d1\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-vb49h" Jan 08 23:33:15 crc kubenswrapper[4945]: I0108 23:33:15.607320 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-vb49h" Jan 08 23:33:16 crc kubenswrapper[4945]: I0108 23:33:16.130889 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-vb49h"] Jan 08 23:33:16 crc kubenswrapper[4945]: W0108 23:33:16.145071 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c1f140e_13b8_4135_bd0f_ac210dd8d3d1.slice/crio-5a44a70a250634e4505026e4cd5f67e75cd3f8464fa9a63279e164f0ab0c8f02 WatchSource:0}: Error finding container 5a44a70a250634e4505026e4cd5f67e75cd3f8464fa9a63279e164f0ab0c8f02: Status 404 returned error can't find the container with id 5a44a70a250634e4505026e4cd5f67e75cd3f8464fa9a63279e164f0ab0c8f02 Jan 08 23:33:16 crc kubenswrapper[4945]: I0108 23:33:16.517792 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-vb49h" event={"ID":"2c1f140e-13b8-4135-bd0f-ac210dd8d3d1","Type":"ContainerStarted","Data":"5a44a70a250634e4505026e4cd5f67e75cd3f8464fa9a63279e164f0ab0c8f02"} Jan 08 23:33:22 crc kubenswrapper[4945]: I0108 23:33:22.592328 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-csm2r" event={"ID":"d143d502-c896-48f2-bbd3-f9fbc4e814fc","Type":"ContainerStarted","Data":"462d1496aed915036ebd333c55343913473a0504a4b185c5cd52e698769547aa"} Jan 08 23:33:22 crc kubenswrapper[4945]: I0108 23:33:22.593127 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-csm2r" Jan 08 23:33:22 crc kubenswrapper[4945]: I0108 23:33:22.593783 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-vb49h" event={"ID":"2c1f140e-13b8-4135-bd0f-ac210dd8d3d1","Type":"ContainerStarted","Data":"9a93e8dbbd5989cbfaea0316e56c4664c6682cdab1e8b4e3ff793fa1c3c5b27e"} Jan 08 23:33:22 crc kubenswrapper[4945]: I0108 23:33:22.609445 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-csm2r" podStartSLOduration=1.728799592 podStartE2EDuration="10.609417372s" podCreationTimestamp="2026-01-08 23:33:12 +0000 UTC" firstStartedPulling="2026-01-08 23:33:13.46996477 +0000 UTC m=+1063.781123716" lastFinishedPulling="2026-01-08 23:33:22.35058255 +0000 UTC m=+1072.661741496" observedRunningTime="2026-01-08 23:33:22.609407762 +0000 UTC m=+1072.920566718" watchObservedRunningTime="2026-01-08 23:33:22.609417372 +0000 UTC m=+1072.920576358" Jan 08 23:33:28 crc kubenswrapper[4945]: I0108 23:33:28.030897 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-csm2r" Jan 08 23:33:28 crc kubenswrapper[4945]: I0108 23:33:28.065827 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-vb49h" podStartSLOduration=6.888628153 podStartE2EDuration="13.065799172s" 
podCreationTimestamp="2026-01-08 23:33:15 +0000 UTC" firstStartedPulling="2026-01-08 23:33:16.147281052 +0000 UTC m=+1066.458439998" lastFinishedPulling="2026-01-08 23:33:22.324452071 +0000 UTC m=+1072.635611017" observedRunningTime="2026-01-08 23:33:22.635385586 +0000 UTC m=+1072.946544542" watchObservedRunningTime="2026-01-08 23:33:28.065799172 +0000 UTC m=+1078.376958148" Jan 08 23:33:32 crc kubenswrapper[4945]: I0108 23:33:32.168768 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-rkvnx"] Jan 08 23:33:32 crc kubenswrapper[4945]: I0108 23:33:32.172351 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-rkvnx" Jan 08 23:33:32 crc kubenswrapper[4945]: I0108 23:33:32.175168 4945 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hcbmk" Jan 08 23:33:32 crc kubenswrapper[4945]: I0108 23:33:32.177303 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-rkvnx"] Jan 08 23:33:32 crc kubenswrapper[4945]: I0108 23:33:32.255078 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fksp7\" (UniqueName: \"kubernetes.io/projected/8b1362ec-1b4b-44f9-8f19-29f48e9d443e-kube-api-access-fksp7\") pod \"cert-manager-86cb77c54b-rkvnx\" (UID: \"8b1362ec-1b4b-44f9-8f19-29f48e9d443e\") " pod="cert-manager/cert-manager-86cb77c54b-rkvnx" Jan 08 23:33:32 crc kubenswrapper[4945]: I0108 23:33:32.255217 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b1362ec-1b4b-44f9-8f19-29f48e9d443e-bound-sa-token\") pod \"cert-manager-86cb77c54b-rkvnx\" (UID: \"8b1362ec-1b4b-44f9-8f19-29f48e9d443e\") " pod="cert-manager/cert-manager-86cb77c54b-rkvnx" Jan 08 23:33:32 crc kubenswrapper[4945]: I0108 23:33:32.357671 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fksp7\" (UniqueName: \"kubernetes.io/projected/8b1362ec-1b4b-44f9-8f19-29f48e9d443e-kube-api-access-fksp7\") pod \"cert-manager-86cb77c54b-rkvnx\" (UID: \"8b1362ec-1b4b-44f9-8f19-29f48e9d443e\") " pod="cert-manager/cert-manager-86cb77c54b-rkvnx" Jan 08 23:33:32 crc kubenswrapper[4945]: I0108 23:33:32.357736 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b1362ec-1b4b-44f9-8f19-29f48e9d443e-bound-sa-token\") pod \"cert-manager-86cb77c54b-rkvnx\" (UID: \"8b1362ec-1b4b-44f9-8f19-29f48e9d443e\") " pod="cert-manager/cert-manager-86cb77c54b-rkvnx" Jan 08 23:33:32 crc kubenswrapper[4945]: I0108 23:33:32.384181 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fksp7\" (UniqueName: \"kubernetes.io/projected/8b1362ec-1b4b-44f9-8f19-29f48e9d443e-kube-api-access-fksp7\") pod \"cert-manager-86cb77c54b-rkvnx\" (UID: \"8b1362ec-1b4b-44f9-8f19-29f48e9d443e\") " pod="cert-manager/cert-manager-86cb77c54b-rkvnx" Jan 08 23:33:32 crc kubenswrapper[4945]: I0108 23:33:32.386331 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b1362ec-1b4b-44f9-8f19-29f48e9d443e-bound-sa-token\") pod \"cert-manager-86cb77c54b-rkvnx\" (UID: \"8b1362ec-1b4b-44f9-8f19-29f48e9d443e\") " pod="cert-manager/cert-manager-86cb77c54b-rkvnx" Jan 08 23:33:32 crc kubenswrapper[4945]: I0108 
23:33:32.489604 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-rkvnx" Jan 08 23:33:33 crc kubenswrapper[4945]: I0108 23:33:33.049053 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-rkvnx"] Jan 08 23:33:33 crc kubenswrapper[4945]: W0108 23:33:33.055445 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b1362ec_1b4b_44f9_8f19_29f48e9d443e.slice/crio-ff7d513d9198aad248eef6719e2348c982f64122d76f1021b765f63f2c087154 WatchSource:0}: Error finding container ff7d513d9198aad248eef6719e2348c982f64122d76f1021b765f63f2c087154: Status 404 returned error can't find the container with id ff7d513d9198aad248eef6719e2348c982f64122d76f1021b765f63f2c087154 Jan 08 23:33:33 crc kubenswrapper[4945]: I0108 23:33:33.665038 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-rkvnx" event={"ID":"8b1362ec-1b4b-44f9-8f19-29f48e9d443e","Type":"ContainerStarted","Data":"ff7d513d9198aad248eef6719e2348c982f64122d76f1021b765f63f2c087154"} Jan 08 23:33:34 crc kubenswrapper[4945]: I0108 23:33:34.672062 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-rkvnx" event={"ID":"8b1362ec-1b4b-44f9-8f19-29f48e9d443e","Type":"ContainerStarted","Data":"2e5588c27c512dc104a9ed714cacfa1c8d182f793156c04930b541089f19daa2"} Jan 08 23:33:34 crc kubenswrapper[4945]: I0108 23:33:34.693172 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-rkvnx" podStartSLOduration=2.693147447 podStartE2EDuration="2.693147447s" podCreationTimestamp="2026-01-08 23:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:33:34.687557401 +0000 UTC m=+1084.998716367" watchObservedRunningTime="2026-01-08 23:33:34.693147447 +0000 UTC m=+1085.004306413" Jan 08 23:33:42 crc kubenswrapper[4945]: I0108 23:33:42.931630 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7fhtm"] Jan 08 23:33:42 crc kubenswrapper[4945]: I0108 23:33:42.932870 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7fhtm" Jan 08 23:33:42 crc kubenswrapper[4945]: I0108 23:33:42.935133 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 08 23:33:42 crc kubenswrapper[4945]: I0108 23:33:42.935723 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-v68qv" Jan 08 23:33:42 crc kubenswrapper[4945]: I0108 23:33:42.937736 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 08 23:33:42 crc kubenswrapper[4945]: I0108 23:33:42.961638 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7fhtm"] Jan 08 23:33:43 crc kubenswrapper[4945]: I0108 23:33:43.112468 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trvrg\" (UniqueName: \"kubernetes.io/projected/19e141c7-ad2e-4b78-9f96-0fd24fbd68e9-kube-api-access-trvrg\") pod \"openstack-operator-index-7fhtm\" (UID: \"19e141c7-ad2e-4b78-9f96-0fd24fbd68e9\") " pod="openstack-operators/openstack-operator-index-7fhtm" Jan 08 23:33:43 crc kubenswrapper[4945]: I0108 23:33:43.214392 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trvrg\" (UniqueName: \"kubernetes.io/projected/19e141c7-ad2e-4b78-9f96-0fd24fbd68e9-kube-api-access-trvrg\") pod \"openstack-operator-index-7fhtm\" (UID: \"19e141c7-ad2e-4b78-9f96-0fd24fbd68e9\") " pod="openstack-operators/openstack-operator-index-7fhtm" Jan 08 23:33:43 crc kubenswrapper[4945]: I0108 23:33:43.233742 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trvrg\" (UniqueName: \"kubernetes.io/projected/19e141c7-ad2e-4b78-9f96-0fd24fbd68e9-kube-api-access-trvrg\") pod \"openstack-operator-index-7fhtm\" (UID: \"19e141c7-ad2e-4b78-9f96-0fd24fbd68e9\") " pod="openstack-operators/openstack-operator-index-7fhtm" Jan 08 23:33:43 crc kubenswrapper[4945]: I0108 23:33:43.260015 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7fhtm" Jan 08 23:33:43 crc kubenswrapper[4945]: I0108 23:33:43.448555 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7fhtm"] Jan 08 23:33:43 crc kubenswrapper[4945]: I0108 23:33:43.459189 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 08 23:33:43 crc kubenswrapper[4945]: I0108 23:33:43.577757 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:33:43 crc kubenswrapper[4945]: I0108 23:33:43.577814 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:33:43 crc kubenswrapper[4945]: I0108 23:33:43.732963 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7fhtm" event={"ID":"19e141c7-ad2e-4b78-9f96-0fd24fbd68e9","Type":"ContainerStarted","Data":"0ef980d34825da4e17a931b670643340d1421e0b97b4c58dc73e554675108b16"} Jan 08 23:33:45 crc kubenswrapper[4945]: I0108 23:33:45.706327 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-7fhtm"] Jan 08 23:33:45 crc kubenswrapper[4945]: I0108 23:33:45.748153 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7fhtm" event={"ID":"19e141c7-ad2e-4b78-9f96-0fd24fbd68e9","Type":"ContainerStarted","Data":"b8c623f2238a5b1741a5c2140f4574062f018f74280b75db12335fadddc4ed61"} Jan 08 23:33:45 crc kubenswrapper[4945]: I0108 23:33:45.770066 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7fhtm" podStartSLOduration=1.709177515 podStartE2EDuration="3.770042388s" podCreationTimestamp="2026-01-08 23:33:42 +0000 UTC" firstStartedPulling="2026-01-08 23:33:43.458911371 +0000 UTC m=+1093.770070317" lastFinishedPulling="2026-01-08 23:33:45.519776234 +0000 UTC m=+1095.830935190" observedRunningTime="2026-01-08 23:33:45.766472661 +0000 UTC m=+1096.077631627" watchObservedRunningTime="2026-01-08 23:33:45.770042388 +0000 UTC m=+1096.081201344" Jan 08 23:33:46 crc kubenswrapper[4945]: I0108 23:33:46.315886 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-q987z"] Jan 08 23:33:46 crc kubenswrapper[4945]: I0108 23:33:46.316606 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-q987z" Jan 08 23:33:46 crc kubenswrapper[4945]: I0108 23:33:46.338417 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-q987z"] Jan 08 23:33:46 crc kubenswrapper[4945]: I0108 23:33:46.458076 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z8qg\" (UniqueName: \"kubernetes.io/projected/bd224d58-5cf6-4728-bf36-4676b288ec46-kube-api-access-6z8qg\") pod \"openstack-operator-index-q987z\" (UID: \"bd224d58-5cf6-4728-bf36-4676b288ec46\") " pod="openstack-operators/openstack-operator-index-q987z" Jan 08 23:33:46 crc kubenswrapper[4945]: I0108 23:33:46.559575 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z8qg\" (UniqueName: \"kubernetes.io/projected/bd224d58-5cf6-4728-bf36-4676b288ec46-kube-api-access-6z8qg\") pod \"openstack-operator-index-q987z\" (UID: \"bd224d58-5cf6-4728-bf36-4676b288ec46\") " pod="openstack-operators/openstack-operator-index-q987z" Jan 08 23:33:46 crc kubenswrapper[4945]: I0108 23:33:46.581825 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z8qg\" (UniqueName: \"kubernetes.io/projected/bd224d58-5cf6-4728-bf36-4676b288ec46-kube-api-access-6z8qg\") pod \"openstack-operator-index-q987z\" (UID: \"bd224d58-5cf6-4728-bf36-4676b288ec46\") " pod="openstack-operators/openstack-operator-index-q987z" Jan 08 23:33:46 crc kubenswrapper[4945]: I0108 23:33:46.645413 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-q987z" Jan 08 23:33:46 crc kubenswrapper[4945]: I0108 23:33:46.761391 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-7fhtm" podUID="19e141c7-ad2e-4b78-9f96-0fd24fbd68e9" containerName="registry-server" containerID="cri-o://b8c623f2238a5b1741a5c2140f4574062f018f74280b75db12335fadddc4ed61" gracePeriod=2 Jan 08 23:33:46 crc kubenswrapper[4945]: W0108 23:33:46.877908 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd224d58_5cf6_4728_bf36_4676b288ec46.slice/crio-188b6cb5a5f31f6c6f370aa13a28a3fb81c45a4fa29b4ff597177e8b43ce3508 WatchSource:0}: Error finding container 188b6cb5a5f31f6c6f370aa13a28a3fb81c45a4fa29b4ff597177e8b43ce3508: Status 404 returned error can't find the container with id 188b6cb5a5f31f6c6f370aa13a28a3fb81c45a4fa29b4ff597177e8b43ce3508 Jan 08 23:33:46 crc kubenswrapper[4945]: I0108 23:33:46.878476 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-q987z"] Jan 08 23:33:47 crc kubenswrapper[4945]: I0108 23:33:47.082151 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7fhtm" Jan 08 23:33:47 crc kubenswrapper[4945]: I0108 23:33:47.166865 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trvrg\" (UniqueName: \"kubernetes.io/projected/19e141c7-ad2e-4b78-9f96-0fd24fbd68e9-kube-api-access-trvrg\") pod \"19e141c7-ad2e-4b78-9f96-0fd24fbd68e9\" (UID: \"19e141c7-ad2e-4b78-9f96-0fd24fbd68e9\") " Jan 08 23:33:47 crc kubenswrapper[4945]: I0108 23:33:47.172646 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e141c7-ad2e-4b78-9f96-0fd24fbd68e9-kube-api-access-trvrg" (OuterVolumeSpecName: "kube-api-access-trvrg") pod "19e141c7-ad2e-4b78-9f96-0fd24fbd68e9" (UID: "19e141c7-ad2e-4b78-9f96-0fd24fbd68e9"). InnerVolumeSpecName "kube-api-access-trvrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:33:47 crc kubenswrapper[4945]: I0108 23:33:47.269310 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trvrg\" (UniqueName: \"kubernetes.io/projected/19e141c7-ad2e-4b78-9f96-0fd24fbd68e9-kube-api-access-trvrg\") on node \"crc\" DevicePath \"\"" Jan 08 23:33:47 crc kubenswrapper[4945]: I0108 23:33:47.768039 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q987z" event={"ID":"bd224d58-5cf6-4728-bf36-4676b288ec46","Type":"ContainerStarted","Data":"6b30ddafb99c35a4eabcfe35971daa00719d1b800ce067671868d4f973f6531a"} Jan 08 23:33:47 crc kubenswrapper[4945]: I0108 23:33:47.768092 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q987z" event={"ID":"bd224d58-5cf6-4728-bf36-4676b288ec46","Type":"ContainerStarted","Data":"188b6cb5a5f31f6c6f370aa13a28a3fb81c45a4fa29b4ff597177e8b43ce3508"} Jan 08 23:33:47 crc kubenswrapper[4945]: I0108 23:33:47.769231 4945 generic.go:334] "Generic (PLEG): container finished" podID="19e141c7-ad2e-4b78-9f96-0fd24fbd68e9" containerID="b8c623f2238a5b1741a5c2140f4574062f018f74280b75db12335fadddc4ed61" exitCode=0 Jan 08 23:33:47 crc kubenswrapper[4945]: I0108 23:33:47.769253 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7fhtm" event={"ID":"19e141c7-ad2e-4b78-9f96-0fd24fbd68e9","Type":"ContainerDied","Data":"b8c623f2238a5b1741a5c2140f4574062f018f74280b75db12335fadddc4ed61"} Jan 08 23:33:47 crc kubenswrapper[4945]: I0108 23:33:47.769268 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7fhtm" event={"ID":"19e141c7-ad2e-4b78-9f96-0fd24fbd68e9","Type":"ContainerDied","Data":"0ef980d34825da4e17a931b670643340d1421e0b97b4c58dc73e554675108b16"} Jan 08 23:33:47 crc kubenswrapper[4945]: I0108 23:33:47.769287 4945 scope.go:117] "RemoveContainer" containerID="b8c623f2238a5b1741a5c2140f4574062f018f74280b75db12335fadddc4ed61" Jan 08 23:33:47 crc kubenswrapper[4945]: I0108 23:33:47.769379 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7fhtm" Jan 08 23:33:47 crc kubenswrapper[4945]: I0108 23:33:47.790904 4945 scope.go:117] "RemoveContainer" containerID="b8c623f2238a5b1741a5c2140f4574062f018f74280b75db12335fadddc4ed61" Jan 08 23:33:47 crc kubenswrapper[4945]: E0108 23:33:47.791751 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8c623f2238a5b1741a5c2140f4574062f018f74280b75db12335fadddc4ed61\": container with ID starting with b8c623f2238a5b1741a5c2140f4574062f018f74280b75db12335fadddc4ed61 not found: ID does not exist" containerID="b8c623f2238a5b1741a5c2140f4574062f018f74280b75db12335fadddc4ed61" Jan 08 23:33:47 crc kubenswrapper[4945]: I0108 23:33:47.791799 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8c623f2238a5b1741a5c2140f4574062f018f74280b75db12335fadddc4ed61"} err="failed to get container status \"b8c623f2238a5b1741a5c2140f4574062f018f74280b75db12335fadddc4ed61\": rpc error: code = NotFound desc = could not find container \"b8c623f2238a5b1741a5c2140f4574062f018f74280b75db12335fadddc4ed61\": container with ID starting with b8c623f2238a5b1741a5c2140f4574062f018f74280b75db12335fadddc4ed61 not found: ID does not exist" Jan 08 23:33:47 crc kubenswrapper[4945]: I0108 23:33:47.800413 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-q987z" podStartSLOduration=1.746718365 podStartE2EDuration="1.800392236s" podCreationTimestamp="2026-01-08 23:33:46 +0000 UTC" firstStartedPulling="2026-01-08 23:33:46.881435978 +0000 UTC m=+1097.192594924" lastFinishedPulling="2026-01-08 23:33:46.935109849 +0000 UTC m=+1097.246268795" observedRunningTime="2026-01-08 23:33:47.786738262 +0000 UTC m=+1098.097897228" watchObservedRunningTime="2026-01-08 23:33:47.800392236 +0000 UTC m=+1098.111551192" Jan 08 23:33:47 crc kubenswrapper[4945]: I0108 23:33:47.802063 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-7fhtm"] Jan 08 23:33:47 crc kubenswrapper[4945]: I0108 23:33:47.806557 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-7fhtm"] Jan 08 23:33:48 crc kubenswrapper[4945]: I0108 23:33:48.006829 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19e141c7-ad2e-4b78-9f96-0fd24fbd68e9" path="/var/lib/kubelet/pods/19e141c7-ad2e-4b78-9f96-0fd24fbd68e9/volumes" Jan 08 23:33:56 crc kubenswrapper[4945]: I0108 23:33:56.646062 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-q987z" Jan 08 23:33:56 crc kubenswrapper[4945]: I0108 23:33:56.646801 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-q987z" Jan 08 23:33:56 crc kubenswrapper[4945]: I0108 23:33:56.683043 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-q987z" Jan 08 23:33:56 crc kubenswrapper[4945]: I0108 23:33:56.852929 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-q987z" Jan 08 23:33:59 crc kubenswrapper[4945]: I0108 23:33:59.150761 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2"] Jan 08 23:33:59 
crc kubenswrapper[4945]: E0108 23:33:59.151082 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e141c7-ad2e-4b78-9f96-0fd24fbd68e9" containerName="registry-server"
Jan 08 23:33:59 crc kubenswrapper[4945]: I0108 23:33:59.151100 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e141c7-ad2e-4b78-9f96-0fd24fbd68e9" containerName="registry-server"
Jan 08 23:33:59 crc kubenswrapper[4945]: I0108 23:33:59.151238 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="19e141c7-ad2e-4b78-9f96-0fd24fbd68e9" containerName="registry-server"
Jan 08 23:33:59 crc kubenswrapper[4945]: I0108 23:33:59.152327 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2"
Jan 08 23:33:59 crc kubenswrapper[4945]: I0108 23:33:59.155303 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-85kw5"
Jan 08 23:33:59 crc kubenswrapper[4945]: I0108 23:33:59.157508 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2"]
Jan 08 23:33:59 crc kubenswrapper[4945]: I0108 23:33:59.238400 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9200d52e-875b-4b26-b67b-7515bd99f30a-bundle\") pod \"af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2\" (UID: \"9200d52e-875b-4b26-b67b-7515bd99f30a\") " pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2"
Jan 08 23:33:59 crc kubenswrapper[4945]: I0108 23:33:59.238477 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9200d52e-875b-4b26-b67b-7515bd99f30a-util\") pod \"af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2\" (UID: \"9200d52e-875b-4b26-b67b-7515bd99f30a\") " pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2"
Jan 08 23:33:59 crc kubenswrapper[4945]: I0108 23:33:59.238618 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m74pp\" (UniqueName: \"kubernetes.io/projected/9200d52e-875b-4b26-b67b-7515bd99f30a-kube-api-access-m74pp\") pod \"af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2\" (UID: \"9200d52e-875b-4b26-b67b-7515bd99f30a\") " pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2"
Jan 08 23:33:59 crc kubenswrapper[4945]: I0108 23:33:59.339666 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9200d52e-875b-4b26-b67b-7515bd99f30a-bundle\") pod \"af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2\" (UID: \"9200d52e-875b-4b26-b67b-7515bd99f30a\") " pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2"
Jan 08 23:33:59 crc kubenswrapper[4945]: I0108 23:33:59.339766 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9200d52e-875b-4b26-b67b-7515bd99f30a-util\") pod \"af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2\" (UID: \"9200d52e-875b-4b26-b67b-7515bd99f30a\") " pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2"
Jan 08 23:33:59 crc kubenswrapper[4945]: I0108 23:33:59.339796 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m74pp\" (UniqueName: \"kubernetes.io/projected/9200d52e-875b-4b26-b67b-7515bd99f30a-kube-api-access-m74pp\") pod \"af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2\" (UID: \"9200d52e-875b-4b26-b67b-7515bd99f30a\") " pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2"
Jan 08 23:33:59 crc kubenswrapper[4945]: I0108 23:33:59.340332 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9200d52e-875b-4b26-b67b-7515bd99f30a-bundle\") pod \"af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2\" (UID: \"9200d52e-875b-4b26-b67b-7515bd99f30a\") " pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2"
Jan 08 23:33:59 crc kubenswrapper[4945]: I0108 23:33:59.340355 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9200d52e-875b-4b26-b67b-7515bd99f30a-util\") pod \"af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2\" (UID: \"9200d52e-875b-4b26-b67b-7515bd99f30a\") " pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2"
Jan 08 23:33:59 crc kubenswrapper[4945]: I0108 23:33:59.358961 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m74pp\" (UniqueName: \"kubernetes.io/projected/9200d52e-875b-4b26-b67b-7515bd99f30a-kube-api-access-m74pp\") pod \"af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2\" (UID: \"9200d52e-875b-4b26-b67b-7515bd99f30a\") " pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2"
Jan 08 23:33:59 crc kubenswrapper[4945]: I0108 23:33:59.469251 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2"
Jan 08 23:33:59 crc kubenswrapper[4945]: I0108 23:33:59.922827 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2"]
Jan 08 23:33:59 crc kubenswrapper[4945]: W0108 23:33:59.930322 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9200d52e_875b_4b26_b67b_7515bd99f30a.slice/crio-e32bcdb71ed5e85fdcd3fb5c933f81606a627f2597a3c179f44bf1dbe564a918 WatchSource:0}: Error finding container e32bcdb71ed5e85fdcd3fb5c933f81606a627f2597a3c179f44bf1dbe564a918: Status 404 returned error can't find the container with id e32bcdb71ed5e85fdcd3fb5c933f81606a627f2597a3c179f44bf1dbe564a918
Jan 08 23:34:00 crc kubenswrapper[4945]: I0108 23:34:00.853359 4945 generic.go:334] "Generic (PLEG): container finished" podID="9200d52e-875b-4b26-b67b-7515bd99f30a" containerID="661514d7a27e9a323878843a581041d6e1c4bdeed6e03b4b68d55bfaecff52f3" exitCode=0
Jan 08 23:34:00 crc kubenswrapper[4945]: I0108 23:34:00.853399 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2" event={"ID":"9200d52e-875b-4b26-b67b-7515bd99f30a","Type":"ContainerDied","Data":"661514d7a27e9a323878843a581041d6e1c4bdeed6e03b4b68d55bfaecff52f3"}
Jan 08 23:34:00 crc kubenswrapper[4945]: I0108 23:34:00.853425 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2" event={"ID":"9200d52e-875b-4b26-b67b-7515bd99f30a","Type":"ContainerStarted","Data":"e32bcdb71ed5e85fdcd3fb5c933f81606a627f2597a3c179f44bf1dbe564a918"}
Jan 08 23:34:01 crc kubenswrapper[4945]: I0108 23:34:01.861398 4945 generic.go:334] "Generic (PLEG): container finished" podID="9200d52e-875b-4b26-b67b-7515bd99f30a" containerID="93d37bf2277336e744b2194920557f6859d5e3e7df024bd46a29b27d56669968" exitCode=0
Jan 08 23:34:01 crc kubenswrapper[4945]: I0108 23:34:01.861497 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2" event={"ID":"9200d52e-875b-4b26-b67b-7515bd99f30a","Type":"ContainerDied","Data":"93d37bf2277336e744b2194920557f6859d5e3e7df024bd46a29b27d56669968"}
Jan 08 23:34:02 crc kubenswrapper[4945]: I0108 23:34:02.876355 4945 generic.go:334] "Generic (PLEG): container finished" podID="9200d52e-875b-4b26-b67b-7515bd99f30a" containerID="82cd1ecc6be38284ba584da3f2eff21f8c043688a51b5869943942756e87be17" exitCode=0
Jan 08 23:34:02 crc kubenswrapper[4945]: I0108 23:34:02.876400 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2" event={"ID":"9200d52e-875b-4b26-b67b-7515bd99f30a","Type":"ContainerDied","Data":"82cd1ecc6be38284ba584da3f2eff21f8c043688a51b5869943942756e87be17"}
Jan 08 23:34:04 crc kubenswrapper[4945]: I0108 23:34:04.103324 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2"
Jan 08 23:34:04 crc kubenswrapper[4945]: I0108 23:34:04.208547 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9200d52e-875b-4b26-b67b-7515bd99f30a-util\") pod \"9200d52e-875b-4b26-b67b-7515bd99f30a\" (UID: \"9200d52e-875b-4b26-b67b-7515bd99f30a\") "
Jan 08 23:34:04 crc kubenswrapper[4945]: I0108 23:34:04.208612 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9200d52e-875b-4b26-b67b-7515bd99f30a-bundle\") pod \"9200d52e-875b-4b26-b67b-7515bd99f30a\" (UID: \"9200d52e-875b-4b26-b67b-7515bd99f30a\") "
Jan 08 23:34:04 crc kubenswrapper[4945]: I0108 23:34:04.208670 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m74pp\" (UniqueName: \"kubernetes.io/projected/9200d52e-875b-4b26-b67b-7515bd99f30a-kube-api-access-m74pp\") pod \"9200d52e-875b-4b26-b67b-7515bd99f30a\" (UID: \"9200d52e-875b-4b26-b67b-7515bd99f30a\") "
Jan 08 23:34:04 crc kubenswrapper[4945]: I0108 23:34:04.209770 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9200d52e-875b-4b26-b67b-7515bd99f30a-bundle" (OuterVolumeSpecName: "bundle") pod "9200d52e-875b-4b26-b67b-7515bd99f30a" (UID: "9200d52e-875b-4b26-b67b-7515bd99f30a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:34:04 crc kubenswrapper[4945]: I0108 23:34:04.213452 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9200d52e-875b-4b26-b67b-7515bd99f30a-kube-api-access-m74pp" (OuterVolumeSpecName: "kube-api-access-m74pp") pod "9200d52e-875b-4b26-b67b-7515bd99f30a" (UID: "9200d52e-875b-4b26-b67b-7515bd99f30a"). InnerVolumeSpecName "kube-api-access-m74pp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:34:04 crc kubenswrapper[4945]: I0108 23:34:04.232697 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9200d52e-875b-4b26-b67b-7515bd99f30a-util" (OuterVolumeSpecName: "util") pod "9200d52e-875b-4b26-b67b-7515bd99f30a" (UID: "9200d52e-875b-4b26-b67b-7515bd99f30a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:34:04 crc kubenswrapper[4945]: I0108 23:34:04.310585 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m74pp\" (UniqueName: \"kubernetes.io/projected/9200d52e-875b-4b26-b67b-7515bd99f30a-kube-api-access-m74pp\") on node \"crc\" DevicePath \"\""
Jan 08 23:34:04 crc kubenswrapper[4945]: I0108 23:34:04.310623 4945 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9200d52e-875b-4b26-b67b-7515bd99f30a-util\") on node \"crc\" DevicePath \"\""
Jan 08 23:34:04 crc kubenswrapper[4945]: I0108 23:34:04.310632 4945 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9200d52e-875b-4b26-b67b-7515bd99f30a-bundle\") on node \"crc\" DevicePath \"\""
Jan 08 23:34:04 crc kubenswrapper[4945]: I0108 23:34:04.895643 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2" event={"ID":"9200d52e-875b-4b26-b67b-7515bd99f30a","Type":"ContainerDied","Data":"e32bcdb71ed5e85fdcd3fb5c933f81606a627f2597a3c179f44bf1dbe564a918"}
Jan 08 23:34:04 crc kubenswrapper[4945]: I0108 23:34:04.895688 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e32bcdb71ed5e85fdcd3fb5c933f81606a627f2597a3c179f44bf1dbe564a918"
Jan 08 23:34:04 crc kubenswrapper[4945]: I0108 23:34:04.895803 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2"
Jan 08 23:34:11 crc kubenswrapper[4945]: I0108 23:34:11.293057 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b756b7698-j2d2b"]
Jan 08 23:34:11 crc kubenswrapper[4945]: E0108 23:34:11.293827 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9200d52e-875b-4b26-b67b-7515bd99f30a" containerName="pull"
Jan 08 23:34:11 crc kubenswrapper[4945]: I0108 23:34:11.293839 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9200d52e-875b-4b26-b67b-7515bd99f30a" containerName="pull"
Jan 08 23:34:11 crc kubenswrapper[4945]: E0108 23:34:11.293853 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9200d52e-875b-4b26-b67b-7515bd99f30a" containerName="extract"
Jan 08 23:34:11 crc kubenswrapper[4945]: I0108 23:34:11.293859 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9200d52e-875b-4b26-b67b-7515bd99f30a" containerName="extract"
Jan 08 23:34:11 crc kubenswrapper[4945]: E0108 23:34:11.293871 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9200d52e-875b-4b26-b67b-7515bd99f30a" containerName="util"
Jan 08 23:34:11 crc kubenswrapper[4945]: I0108 23:34:11.293877 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9200d52e-875b-4b26-b67b-7515bd99f30a" containerName="util"
Jan 08 23:34:11 crc kubenswrapper[4945]: I0108 23:34:11.294011 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="9200d52e-875b-4b26-b67b-7515bd99f30a" containerName="extract"
Jan 08 23:34:11 crc kubenswrapper[4945]: I0108 23:34:11.294470 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b756b7698-j2d2b"
Jan 08 23:34:11 crc kubenswrapper[4945]: I0108 23:34:11.297050 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-jft4q"
Jan 08 23:34:11 crc kubenswrapper[4945]: I0108 23:34:11.342848 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b756b7698-j2d2b"]
Jan 08 23:34:11 crc kubenswrapper[4945]: I0108 23:34:11.425960 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgkjp\" (UniqueName: \"kubernetes.io/projected/1ccf42cb-84a0-47f7-b9ec-c9d777e1ca31-kube-api-access-sgkjp\") pod \"openstack-operator-controller-operator-7b756b7698-j2d2b\" (UID: \"1ccf42cb-84a0-47f7-b9ec-c9d777e1ca31\") " pod="openstack-operators/openstack-operator-controller-operator-7b756b7698-j2d2b"
Jan 08 23:34:11 crc kubenswrapper[4945]: I0108 23:34:11.527264 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgkjp\" (UniqueName: \"kubernetes.io/projected/1ccf42cb-84a0-47f7-b9ec-c9d777e1ca31-kube-api-access-sgkjp\") pod \"openstack-operator-controller-operator-7b756b7698-j2d2b\" (UID: \"1ccf42cb-84a0-47f7-b9ec-c9d777e1ca31\") " pod="openstack-operators/openstack-operator-controller-operator-7b756b7698-j2d2b"
Jan 08 23:34:11 crc kubenswrapper[4945]: I0108 23:34:11.551728 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgkjp\" (UniqueName: \"kubernetes.io/projected/1ccf42cb-84a0-47f7-b9ec-c9d777e1ca31-kube-api-access-sgkjp\") pod \"openstack-operator-controller-operator-7b756b7698-j2d2b\" (UID: \"1ccf42cb-84a0-47f7-b9ec-c9d777e1ca31\") " pod="openstack-operators/openstack-operator-controller-operator-7b756b7698-j2d2b"
Jan 08 23:34:11 crc kubenswrapper[4945]: I0108 23:34:11.610752 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b756b7698-j2d2b"
Jan 08 23:34:12 crc kubenswrapper[4945]: I0108 23:34:12.111873 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b756b7698-j2d2b"]
Jan 08 23:34:12 crc kubenswrapper[4945]: I0108 23:34:12.944805 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b756b7698-j2d2b" event={"ID":"1ccf42cb-84a0-47f7-b9ec-c9d777e1ca31","Type":"ContainerStarted","Data":"67fc3c4c388ae1ec91d4d94ec4b5cf7eb1c3893836d9990e3952cdd3e9b51178"}
Jan 08 23:34:13 crc kubenswrapper[4945]: I0108 23:34:13.578465 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 08 23:34:13 crc kubenswrapper[4945]: I0108 23:34:13.578813 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 08 23:34:17 crc kubenswrapper[4945]: I0108 23:34:17.973858 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b756b7698-j2d2b" event={"ID":"1ccf42cb-84a0-47f7-b9ec-c9d777e1ca31","Type":"ContainerStarted","Data":"ae295129e13e16959db9e454752527bb52663fb9be2f82fb54257dcf98ac4348"}
Jan 08 23:34:17 crc kubenswrapper[4945]: I0108 23:34:17.974605 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7b756b7698-j2d2b"
Jan 08 23:34:18 crc kubenswrapper[4945]: I0108 23:34:18.023457 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7b756b7698-j2d2b" podStartSLOduration=2.290302612 podStartE2EDuration="7.023437208s" podCreationTimestamp="2026-01-08 23:34:11 +0000 UTC" firstStartedPulling="2026-01-08 23:34:12.105717871 +0000 UTC m=+1122.416876817" lastFinishedPulling="2026-01-08 23:34:16.838852467 +0000 UTC m=+1127.150011413" observedRunningTime="2026-01-08 23:34:18.014562962 +0000 UTC m=+1128.325721908" watchObservedRunningTime="2026-01-08 23:34:18.023437208 +0000 UTC m=+1128.334596174"
Jan 08 23:34:21 crc kubenswrapper[4945]: I0108 23:34:21.614439 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7b756b7698-j2d2b"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.142404 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-xvn4t"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.145026 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-xvn4t"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.153306 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-qwj2j"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.154270 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qwj2j"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.158012 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-k72k4"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.158746 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-k72k4"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.165669 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zv48x"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.167688 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-8962p"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.167892 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-dnv6t"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.171157 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-xvn4t"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.182509 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-qwj2j"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.189905 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-k72k4"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.224062 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-9t9tj"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.224758 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-9t9tj"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.227549 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-n7lcf"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.227743 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnns5\" (UniqueName: \"kubernetes.io/projected/18520dae-623d-499b-95c6-96b4de8d9bf4-kube-api-access-nnns5\") pod \"barbican-operator-controller-manager-f6f74d6db-xvn4t\" (UID: \"18520dae-623d-499b-95c6-96b4de8d9bf4\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-xvn4t"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.227793 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sk8p\" (UniqueName: \"kubernetes.io/projected/1604a21a-39a8-4c27-886f-dd74a9c6ed92-kube-api-access-6sk8p\") pod \"cinder-operator-controller-manager-78979fc445-qwj2j\" (UID: \"1604a21a-39a8-4c27-886f-dd74a9c6ed92\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qwj2j"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.227819 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s8f6\" (UniqueName: \"kubernetes.io/projected/90070b95-ae61-4ae4-b6b4-a4436fe457ef-kube-api-access-5s8f6\") pod \"designate-operator-controller-manager-66f8b87655-k72k4\" (UID: \"90070b95-ae61-4ae4-b6b4-a4436fe457ef\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-k72k4"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.239620 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-9t9tj"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.262615 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-hnk29"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.264609 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hnk29"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.266948 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-r2hpl"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.271190 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-gh5xr"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.272174 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-gh5xr"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.278643 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-zt92s"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.283481 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-gh5xr"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.287517 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-hnk29"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.296856 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.297779 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.304217 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.304496 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2grqc"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.320388 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.328647 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sk8p\" (UniqueName: \"kubernetes.io/projected/1604a21a-39a8-4c27-886f-dd74a9c6ed92-kube-api-access-6sk8p\") pod \"cinder-operator-controller-manager-78979fc445-qwj2j\" (UID: \"1604a21a-39a8-4c27-886f-dd74a9c6ed92\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qwj2j"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.334034 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s8f6\" (UniqueName: \"kubernetes.io/projected/90070b95-ae61-4ae4-b6b4-a4436fe457ef-kube-api-access-5s8f6\") pod \"designate-operator-controller-manager-66f8b87655-k72k4\" (UID: \"90070b95-ae61-4ae4-b6b4-a4436fe457ef\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-k72k4"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.342142 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw4bz\" (UniqueName: \"kubernetes.io/projected/f9869afd-afb1-4974-9b35-61c60e107d86-kube-api-access-nw4bz\") pod \"glance-operator-controller-manager-7b549fc966-9t9tj\" (UID: \"f9869afd-afb1-4974-9b35-61c60e107d86\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-9t9tj"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.342357 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9g58\" (UniqueName: \"kubernetes.io/projected/c32027c9-9810-4f56-afaf-b680d5baed3c-kube-api-access-k9g58\") pod \"heat-operator-controller-manager-658dd65b86-hnk29\" (UID: \"c32027c9-9810-4f56-afaf-b680d5baed3c\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hnk29"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.342495 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnns5\" (UniqueName: \"kubernetes.io/projected/18520dae-623d-499b-95c6-96b4de8d9bf4-kube-api-access-nnns5\") pod \"barbican-operator-controller-manager-f6f74d6db-xvn4t\" (UID: \"18520dae-623d-499b-95c6-96b4de8d9bf4\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-xvn4t"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.334319 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-wbhqw"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.343583 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-fsjwc"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.343793 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-wbhqw"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.344211 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-fsjwc"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.349481 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-mlx4r"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.349686 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-pxfgh"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.357487 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-wbhqw"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.357730 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s8f6\" (UniqueName: \"kubernetes.io/projected/90070b95-ae61-4ae4-b6b4-a4436fe457ef-kube-api-access-5s8f6\") pod \"designate-operator-controller-manager-66f8b87655-k72k4\" (UID: \"90070b95-ae61-4ae4-b6b4-a4436fe457ef\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-k72k4"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.358045 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sk8p\" (UniqueName: \"kubernetes.io/projected/1604a21a-39a8-4c27-886f-dd74a9c6ed92-kube-api-access-6sk8p\") pod \"cinder-operator-controller-manager-78979fc445-qwj2j\" (UID: \"1604a21a-39a8-4c27-886f-dd74a9c6ed92\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qwj2j"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.386182 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-npd8w"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.387784 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-npd8w"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.391469 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnns5\" (UniqueName: \"kubernetes.io/projected/18520dae-623d-499b-95c6-96b4de8d9bf4-kube-api-access-nnns5\") pod \"barbican-operator-controller-manager-f6f74d6db-xvn4t\" (UID: \"18520dae-623d-499b-95c6-96b4de8d9bf4\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-xvn4t"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.391810 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-kjvqr"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.409663 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-48rg6"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.412916 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-48rg6"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.417098 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-npd8w"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.417585 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-7wck8"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.440061 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-fsjwc"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.443982 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfcfj\" (UniqueName: \"kubernetes.io/projected/750dd75e-3bcc-4490-a77b-0e759b74b760-kube-api-access-lfcfj\") pod \"keystone-operator-controller-manager-568985c78-wbhqw\" (UID: \"750dd75e-3bcc-4490-a77b-0e759b74b760\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-wbhqw"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.444054 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9g58\" (UniqueName: \"kubernetes.io/projected/c32027c9-9810-4f56-afaf-b680d5baed3c-kube-api-access-k9g58\") pod \"heat-operator-controller-manager-658dd65b86-hnk29\" (UID: \"c32027c9-9810-4f56-afaf-b680d5baed3c\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hnk29"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.444107 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxvnw\" (UniqueName: \"kubernetes.io/projected/b9560bc6-7245-4df3-8daa-19f79d9d3d12-kube-api-access-jxvnw\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-gh5xr\" (UID: \"b9560bc6-7245-4df3-8daa-19f79d9d3d12\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-gh5xr"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.444138 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw4bz\" (UniqueName: \"kubernetes.io/projected/f9869afd-afb1-4974-9b35-61c60e107d86-kube-api-access-nw4bz\") pod \"glance-operator-controller-manager-7b549fc966-9t9tj\" (UID: \"f9869afd-afb1-4974-9b35-61c60e107d86\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-9t9tj"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.444180 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vdz2\" (UniqueName: \"kubernetes.io/projected/18eb8b42-2595-4104-aaf4-005fda7ded69-kube-api-access-8vdz2\") pod \"ironic-operator-controller-manager-f99f54bc8-fsjwc\" (UID: \"18eb8b42-2595-4104-aaf4-005fda7ded69\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-fsjwc"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.444208 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww29l\" (UniqueName: \"kubernetes.io/projected/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-kube-api-access-ww29l\") pod \"infra-operator-controller-manager-6d99759cf-v5nq9\" (UID: \"f50a75b2-71ab-49c6-b184-f630dbfd9cc0\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.444230 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert\") pod \"infra-operator-controller-manager-6d99759cf-v5nq9\" (UID: \"f50a75b2-71ab-49c6-b184-f630dbfd9cc0\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.454692 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-48rg6"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.469380 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-q7lmc"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.472357 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9g58\" (UniqueName: \"kubernetes.io/projected/c32027c9-9810-4f56-afaf-b680d5baed3c-kube-api-access-k9g58\") pod \"heat-operator-controller-manager-658dd65b86-hnk29\" (UID: \"c32027c9-9810-4f56-afaf-b680d5baed3c\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hnk29"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.472537 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-xvn4t"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.488049 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qwj2j"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.498582 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-f6n8j"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.498921 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-k72k4"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.499145 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-f6n8j"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.499404 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q7lmc"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.502628 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-q7lmc"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.503856 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-trmx9"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.503862 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-t2vqb"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.508588 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw4bz\" (UniqueName: \"kubernetes.io/projected/f9869afd-afb1-4974-9b35-61c60e107d86-kube-api-access-nw4bz\") pod \"glance-operator-controller-manager-7b549fc966-9t9tj\" (UID: \"f9869afd-afb1-4974-9b35-61c60e107d86\") " pod="openstack-operators/glance-operator-controller-manager-7b549fc966-9t9tj"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.513971 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-f6n8j"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.540975 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-xnlp6"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.548568 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-xnlp6"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.558479 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxvnw\" (UniqueName: \"kubernetes.io/projected/b9560bc6-7245-4df3-8daa-19f79d9d3d12-kube-api-access-jxvnw\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-gh5xr\" (UID: \"b9560bc6-7245-4df3-8daa-19f79d9d3d12\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-gh5xr"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.558712 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vdz2\" (UniqueName: \"kubernetes.io/projected/18eb8b42-2595-4104-aaf4-005fda7ded69-kube-api-access-8vdz2\") pod \"ironic-operator-controller-manager-f99f54bc8-fsjwc\" (UID: \"18eb8b42-2595-4104-aaf4-005fda7ded69\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-fsjwc"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.558893 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww29l\" (UniqueName: \"kubernetes.io/projected/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-kube-api-access-ww29l\") pod \"infra-operator-controller-manager-6d99759cf-v5nq9\" (UID: \"f50a75b2-71ab-49c6-b184-f630dbfd9cc0\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.559113 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert\") pod \"infra-operator-controller-manager-6d99759cf-v5nq9\" (UID: \"f50a75b2-71ab-49c6-b184-f630dbfd9cc0\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.559277 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfcfj\" (UniqueName: \"kubernetes.io/projected/750dd75e-3bcc-4490-a77b-0e759b74b760-kube-api-access-lfcfj\") pod \"keystone-operator-controller-manager-568985c78-wbhqw\" (UID: \"750dd75e-3bcc-4490-a77b-0e759b74b760\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-wbhqw"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.559437 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn27d\" (UniqueName: \"kubernetes.io/projected/3c233910-81d5-42fb-8fe9-af3f45260c72-kube-api-access-rn27d\") pod \"mariadb-operator-controller-manager-7b88bfc995-48rg6\" (UID: \"3c233910-81d5-42fb-8fe9-af3f45260c72\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-48rg6"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.559636 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stccg\" (UniqueName: \"kubernetes.io/projected/7c55b3d4-07b6-45cd-8216-90e88ffe9e59-kube-api-access-stccg\") pod \"manila-operator-controller-manager-598945d5b8-npd8w\" (UID: \"7c55b3d4-07b6-45cd-8216-90e88ffe9e59\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-npd8w"
Jan 08 23:34:40 crc kubenswrapper[4945]: E0108 23:34:40.563262 4945 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 08 23:34:40 crc kubenswrapper[4945]: E0108 23:34:40.563386 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert podName:f50a75b2-71ab-49c6-b184-f630dbfd9cc0 nodeName:}" failed. No retries permitted until 2026-01-08 23:34:41.063315398 +0000 UTC m=+1151.374474344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert") pod "infra-operator-controller-manager-6d99759cf-v5nq9" (UID: "f50a75b2-71ab-49c6-b184-f630dbfd9cc0") : secret "infra-operator-webhook-server-cert" not found
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.569160 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-jmmn9"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.570608 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-9t9tj"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.588268 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hnk29"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.594925 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-xnlp6"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.603263 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vdz2\" (UniqueName: \"kubernetes.io/projected/18eb8b42-2595-4104-aaf4-005fda7ded69-kube-api-access-8vdz2\") pod \"ironic-operator-controller-manager-f99f54bc8-fsjwc\" (UID: \"18eb8b42-2595-4104-aaf4-005fda7ded69\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-fsjwc"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.603716 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfcfj\" (UniqueName: \"kubernetes.io/projected/750dd75e-3bcc-4490-a77b-0e759b74b760-kube-api-access-lfcfj\") pod \"keystone-operator-controller-manager-568985c78-wbhqw\" (UID: \"750dd75e-3bcc-4490-a77b-0e759b74b760\") " pod="openstack-operators/keystone-operator-controller-manager-568985c78-wbhqw"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.608567 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww29l\" (UniqueName: \"kubernetes.io/projected/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-kube-api-access-ww29l\") pod \"infra-operator-controller-manager-6d99759cf-v5nq9\" (UID: \"f50a75b2-71ab-49c6-b184-f630dbfd9cc0\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.611778 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxvnw\" (UniqueName: \"kubernetes.io/projected/b9560bc6-7245-4df3-8daa-19f79d9d3d12-kube-api-access-jxvnw\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-gh5xr\" (UID: \"b9560bc6-7245-4df3-8daa-19f79d9d3d12\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-gh5xr"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.619592 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.620702 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.627113 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-4qb9v"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.628459 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4qb9v"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.631960 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.632247 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-v79ff"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.633259 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-hg6ln"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.652647 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.657711 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-4qb9v"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.662017 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jbw6\" (UniqueName: \"kubernetes.io/projected/77d96cd2-5140-418c-8c4b-0789ccd534e1-kube-api-access-7jbw6\") pod \"neutron-operator-controller-manager-7cd87b778f-q7lmc\" (UID: \"77d96cd2-5140-418c-8c4b-0789ccd534e1\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q7lmc"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.662050 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5dlf\" (UniqueName: \"kubernetes.io/projected/1dfe9903-3eb9-4c02-a453-fe2a8314d79b-kube-api-access-b5dlf\") pod \"octavia-operator-controller-manager-68c649d9d-xnlp6\" (UID: \"1dfe9903-3eb9-4c02-a453-fe2a8314d79b\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-xnlp6"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.662079 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn27d\" (UniqueName: \"kubernetes.io/projected/3c233910-81d5-42fb-8fe9-af3f45260c72-kube-api-access-rn27d\") pod \"mariadb-operator-controller-manager-7b88bfc995-48rg6\" (UID: \"3c233910-81d5-42fb-8fe9-af3f45260c72\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-48rg6"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.662105 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stccg\" (UniqueName: \"kubernetes.io/projected/7c55b3d4-07b6-45cd-8216-90e88ffe9e59-kube-api-access-stccg\") pod \"manila-operator-controller-manager-598945d5b8-npd8w\" (UID: \"7c55b3d4-07b6-45cd-8216-90e88ffe9e59\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-npd8w"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.662153 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcltb\" (UniqueName: \"kubernetes.io/projected/f1e9bdc2-26c3-4304-8af5-5423dccf220e-kube-api-access-kcltb\") pod \"nova-operator-controller-manager-5fbbf8b6cc-f6n8j\" (UID: \"f1e9bdc2-26c3-4304-8af5-5423dccf220e\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-f6n8j"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.663889 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-vcpxm"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.664640 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-vcpxm"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.669467 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-wv28k"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.688851 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stccg\" (UniqueName: \"kubernetes.io/projected/7c55b3d4-07b6-45cd-8216-90e88ffe9e59-kube-api-access-stccg\") pod \"manila-operator-controller-manager-598945d5b8-npd8w\" (UID: \"7c55b3d4-07b6-45cd-8216-90e88ffe9e59\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-npd8w"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.688860 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-5twfl"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.695308 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5twfl"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.700673 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn27d\" (UniqueName: \"kubernetes.io/projected/3c233910-81d5-42fb-8fe9-af3f45260c72-kube-api-access-rn27d\") pod \"mariadb-operator-controller-manager-7b88bfc995-48rg6\" (UID: \"3c233910-81d5-42fb-8fe9-af3f45260c72\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-48rg6"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.710518 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-xvlwh"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.746288 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-568985c78-wbhqw"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.751314 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-vcpxm"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.751572 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-fsjwc"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.760762 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-5twfl"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.764506 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jbw6\" (UniqueName: \"kubernetes.io/projected/77d96cd2-5140-418c-8c4b-0789ccd534e1-kube-api-access-7jbw6\") pod \"neutron-operator-controller-manager-7cd87b778f-q7lmc\" (UID: \"77d96cd2-5140-418c-8c4b-0789ccd534e1\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q7lmc"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.764557 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5dlf\" (UniqueName: \"kubernetes.io/projected/1dfe9903-3eb9-4c02-a453-fe2a8314d79b-kube-api-access-b5dlf\") pod \"octavia-operator-controller-manager-68c649d9d-xnlp6\" (UID: \"1dfe9903-3eb9-4c02-a453-fe2a8314d79b\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-xnlp6"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.764589 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g\" (UID: \"cd1e4942-f561-478c-b456-93d8886d0a31\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.764620 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps8w9\" (UniqueName: \"kubernetes.io/projected/cd1e4942-f561-478c-b456-93d8886d0a31-kube-api-access-ps8w9\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g\" (UID: \"cd1e4942-f561-478c-b456-93d8886d0a31\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.764653 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsjkm\" (UniqueName: \"kubernetes.io/projected/0b0d4e49-ab97-4808-96da-642983c3bafa-kube-api-access-zsjkm\") pod \"ovn-operator-controller-manager-bf6d4f946-4qb9v\" (UID: \"0b0d4e49-ab97-4808-96da-642983c3bafa\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4qb9v"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.764719 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcltb\" (UniqueName: \"kubernetes.io/projected/f1e9bdc2-26c3-4304-8af5-5423dccf220e-kube-api-access-kcltb\") pod \"nova-operator-controller-manager-5fbbf8b6cc-f6n8j\" (UID: \"f1e9bdc2-26c3-4304-8af5-5423dccf220e\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-f6n8j"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.764759 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5clgm\" (UniqueName: \"kubernetes.io/projected/0383d95b-a4e8-401c-92f9-7564a1aa286a-kube-api-access-5clgm\") pod \"placement-operator-controller-manager-9b6f8f78c-vcpxm\" (UID: \"0383d95b-a4e8-401c-92f9-7564a1aa286a\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-vcpxm"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.785334 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-h6wf5"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.786260 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-h6wf5"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.786802 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-npd8w"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.790031 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-h6wf5"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.797380 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-psmkn"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.803144 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jbw6\" (UniqueName: \"kubernetes.io/projected/77d96cd2-5140-418c-8c4b-0789ccd534e1-kube-api-access-7jbw6\") pod \"neutron-operator-controller-manager-7cd87b778f-q7lmc\" (UID: \"77d96cd2-5140-418c-8c4b-0789ccd534e1\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q7lmc"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.809650 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-dg8tj"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.810816 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dg8tj"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.811527 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5dlf\" (UniqueName: \"kubernetes.io/projected/1dfe9903-3eb9-4c02-a453-fe2a8314d79b-kube-api-access-b5dlf\") pod \"octavia-operator-controller-manager-68c649d9d-xnlp6\" (UID: \"1dfe9903-3eb9-4c02-a453-fe2a8314d79b\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-xnlp6"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.813941 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-hfq9f"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.821700 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcltb\" (UniqueName: \"kubernetes.io/projected/f1e9bdc2-26c3-4304-8af5-5423dccf220e-kube-api-access-kcltb\") pod \"nova-operator-controller-manager-5fbbf8b6cc-f6n8j\" (UID: \"f1e9bdc2-26c3-4304-8af5-5423dccf220e\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-f6n8j"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.822210 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-dg8tj"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.860043 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-m49gl"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.860929 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-m49gl"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.863076 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-x5vxr"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.872632 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g\" (UID: \"cd1e4942-f561-478c-b456-93d8886d0a31\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.872683 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps8w9\" (UniqueName: \"kubernetes.io/projected/cd1e4942-f561-478c-b456-93d8886d0a31-kube-api-access-ps8w9\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g\" (UID: \"cd1e4942-f561-478c-b456-93d8886d0a31\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.872711 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rt2x\" (UniqueName: \"kubernetes.io/projected/eed07e92-bbf8-4845-ac9a-3ab8eafadb58-kube-api-access-2rt2x\") pod \"telemetry-operator-controller-manager-68d988df55-h6wf5\" (UID: \"eed07e92-bbf8-4845-ac9a-3ab8eafadb58\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-h6wf5"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.872735 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsjkm\" (UniqueName: \"kubernetes.io/projected/0b0d4e49-ab97-4808-96da-642983c3bafa-kube-api-access-zsjkm\") pod \"ovn-operator-controller-manager-bf6d4f946-4qb9v\" (UID: \"0b0d4e49-ab97-4808-96da-642983c3bafa\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4qb9v"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.872762 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fvrt\" (UniqueName: \"kubernetes.io/projected/aed9ac7f-f2e8-4729-8caa-d6c81f09a392-kube-api-access-9fvrt\") pod \"swift-operator-controller-manager-bb586bbf4-5twfl\" (UID: \"aed9ac7f-f2e8-4729-8caa-d6c81f09a392\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5twfl"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.872816 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5clgm\" (UniqueName: \"kubernetes.io/projected/0383d95b-a4e8-401c-92f9-7564a1aa286a-kube-api-access-5clgm\") pod \"placement-operator-controller-manager-9b6f8f78c-vcpxm\" (UID: \"0383d95b-a4e8-401c-92f9-7564a1aa286a\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-vcpxm"
Jan 08 23:34:40 crc kubenswrapper[4945]: E0108 23:34:40.873112 4945 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 08 23:34:40 crc kubenswrapper[4945]: E0108 23:34:40.873168 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert podName:cd1e4942-f561-478c-b456-93d8886d0a31 nodeName:}" failed. No retries permitted until 2026-01-08 23:34:41.373147596 +0000 UTC m=+1151.684306542 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" (UID: "cd1e4942-f561-478c-b456-93d8886d0a31") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.878351 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-m49gl"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.892701 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-48rg6"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.904901 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5clgm\" (UniqueName: \"kubernetes.io/projected/0383d95b-a4e8-401c-92f9-7564a1aa286a-kube-api-access-5clgm\") pod \"placement-operator-controller-manager-9b6f8f78c-vcpxm\" (UID: \"0383d95b-a4e8-401c-92f9-7564a1aa286a\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-vcpxm"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.909614 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-gh5xr"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.909810 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsjkm\" (UniqueName: \"kubernetes.io/projected/0b0d4e49-ab97-4808-96da-642983c3bafa-kube-api-access-zsjkm\") pod \"ovn-operator-controller-manager-bf6d4f946-4qb9v\" (UID: \"0b0d4e49-ab97-4808-96da-642983c3bafa\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4qb9v"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.913298 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps8w9\" (UniqueName: \"kubernetes.io/projected/cd1e4942-f561-478c-b456-93d8886d0a31-kube-api-access-ps8w9\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g\" (UID: \"cd1e4942-f561-478c-b456-93d8886d0a31\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.914099 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-f6n8j"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.928143 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-xvn4t"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.946566 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q7lmc"
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.954833 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9"]
Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.955618 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.962850 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9"] Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.967222 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.967921 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.980703 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5frj\" (UniqueName: \"kubernetes.io/projected/a62c3d41-f859-4eea-8e77-468d06e687bd-kube-api-access-h5frj\") pod \"watcher-operator-controller-manager-9dbdf6486-m49gl\" (UID: \"a62c3d41-f859-4eea-8e77-468d06e687bd\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-m49gl" Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.980764 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mxmr\" (UniqueName: \"kubernetes.io/projected/37d70ea8-8db9-4194-a697-5e8f77c89be0-kube-api-access-9mxmr\") pod \"test-operator-controller-manager-6c866cfdcb-dg8tj\" (UID: \"37d70ea8-8db9-4194-a697-5e8f77c89be0\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dg8tj" Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.980907 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rt2x\" (UniqueName: \"kubernetes.io/projected/eed07e92-bbf8-4845-ac9a-3ab8eafadb58-kube-api-access-2rt2x\") pod \"telemetry-operator-controller-manager-68d988df55-h6wf5\" (UID: \"eed07e92-bbf8-4845-ac9a-3ab8eafadb58\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-h6wf5" Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.980952 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fvrt\" (UniqueName: \"kubernetes.io/projected/aed9ac7f-f2e8-4729-8caa-d6c81f09a392-kube-api-access-9fvrt\") pod \"swift-operator-controller-manager-bb586bbf4-5twfl\" (UID: \"aed9ac7f-f2e8-4729-8caa-d6c81f09a392\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5twfl" Jan 08 23:34:40 crc kubenswrapper[4945]: I0108 23:34:40.985186 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-wdzlw" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:40.990595 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-xnlp6" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.021299 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rt2x\" (UniqueName: \"kubernetes.io/projected/eed07e92-bbf8-4845-ac9a-3ab8eafadb58-kube-api-access-2rt2x\") pod \"telemetry-operator-controller-manager-68d988df55-h6wf5\" (UID: \"eed07e92-bbf8-4845-ac9a-3ab8eafadb58\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-h6wf5" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.043099 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fvrt\" (UniqueName: \"kubernetes.io/projected/aed9ac7f-f2e8-4729-8caa-d6c81f09a392-kube-api-access-9fvrt\") pod \"swift-operator-controller-manager-bb586bbf4-5twfl\" (UID: \"aed9ac7f-f2e8-4729-8caa-d6c81f09a392\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5twfl" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.055068 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-vcpxm" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.060364 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4qb9v" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.061904 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5twfl" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.089936 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v7fnt"] Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.094181 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v7fnt" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.100308 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-m9mlr" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.104052 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59kkp\" (UniqueName: \"kubernetes.io/projected/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-kube-api-access-59kkp\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.104108 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.104135 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert\") pod \"infra-operator-controller-manager-6d99759cf-v5nq9\" (UID: \"f50a75b2-71ab-49c6-b184-f630dbfd9cc0\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.104175 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5frj\" (UniqueName: \"kubernetes.io/projected/a62c3d41-f859-4eea-8e77-468d06e687bd-kube-api-access-h5frj\") pod \"watcher-operator-controller-manager-9dbdf6486-m49gl\" (UID: \"a62c3d41-f859-4eea-8e77-468d06e687bd\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-m49gl" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.104206 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mxmr\" (UniqueName: \"kubernetes.io/projected/37d70ea8-8db9-4194-a697-5e8f77c89be0-kube-api-access-9mxmr\") pod \"test-operator-controller-manager-6c866cfdcb-dg8tj\" (UID: \"37d70ea8-8db9-4194-a697-5e8f77c89be0\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dg8tj" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.104222 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.104239 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgrpr\" (UniqueName: \"kubernetes.io/projected/e16d1a8a-0404-4007-86fd-886fed232b0b-kube-api-access-rgrpr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v7fnt\" (UID: \"e16d1a8a-0404-4007-86fd-886fed232b0b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v7fnt" Jan 08 23:34:41 
crc kubenswrapper[4945]: E0108 23:34:41.109737 4945 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 08 23:34:41 crc kubenswrapper[4945]: E0108 23:34:41.109894 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert podName:f50a75b2-71ab-49c6-b184-f630dbfd9cc0 nodeName:}" failed. No retries permitted until 2026-01-08 23:34:42.109880717 +0000 UTC m=+1152.421039663 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert") pod "infra-operator-controller-manager-6d99759cf-v5nq9" (UID: "f50a75b2-71ab-49c6-b184-f630dbfd9cc0") : secret "infra-operator-webhook-server-cert" not found Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.125861 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v7fnt"] Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.130825 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mxmr\" (UniqueName: \"kubernetes.io/projected/37d70ea8-8db9-4194-a697-5e8f77c89be0-kube-api-access-9mxmr\") pod \"test-operator-controller-manager-6c866cfdcb-dg8tj\" (UID: \"37d70ea8-8db9-4194-a697-5e8f77c89be0\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dg8tj" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.147928 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-h6wf5" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.156609 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5frj\" (UniqueName: \"kubernetes.io/projected/a62c3d41-f859-4eea-8e77-468d06e687bd-kube-api-access-h5frj\") pod \"watcher-operator-controller-manager-9dbdf6486-m49gl\" (UID: \"a62c3d41-f859-4eea-8e77-468d06e687bd\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-m49gl" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.176921 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-xvn4t" event={"ID":"18520dae-623d-499b-95c6-96b4de8d9bf4","Type":"ContainerStarted","Data":"c714f3c8de9743b1f6b571f46f6c9f2cf84c7f868e3a1ad88c6efd958d87775e"} Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.186951 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-k72k4"] Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.199865 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dg8tj" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.205383 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59kkp\" (UniqueName: \"kubernetes.io/projected/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-kube-api-access-59kkp\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.205431 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.205483 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.205503 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgrpr\" (UniqueName: \"kubernetes.io/projected/e16d1a8a-0404-4007-86fd-886fed232b0b-kube-api-access-rgrpr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v7fnt\" (UID: \"e16d1a8a-0404-4007-86fd-886fed232b0b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v7fnt" Jan 08 23:34:41 crc kubenswrapper[4945]: E0108 23:34:41.205944 4945 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 08 23:34:41 crc kubenswrapper[4945]: E0108 23:34:41.205983 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs podName:640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d nodeName:}" failed. No retries permitted until 2026-01-08 23:34:41.705969851 +0000 UTC m=+1152.017128797 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs") pod "openstack-operator-controller-manager-57bd96d86c-vf2m9" (UID: "640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d") : secret "webhook-server-cert" not found Jan 08 23:34:41 crc kubenswrapper[4945]: E0108 23:34:41.206249 4945 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 08 23:34:41 crc kubenswrapper[4945]: E0108 23:34:41.206277 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs podName:640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d nodeName:}" failed. No retries permitted until 2026-01-08 23:34:41.706269458 +0000 UTC m=+1152.017428404 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs") pod "openstack-operator-controller-manager-57bd96d86c-vf2m9" (UID: "640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d") : secret "metrics-server-cert" not found Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.219307 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-m49gl" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.228139 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59kkp\" (UniqueName: \"kubernetes.io/projected/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-kube-api-access-59kkp\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.238086 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgrpr\" (UniqueName: \"kubernetes.io/projected/e16d1a8a-0404-4007-86fd-886fed232b0b-kube-api-access-rgrpr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v7fnt\" (UID: \"e16d1a8a-0404-4007-86fd-886fed232b0b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v7fnt" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.238521 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7b549fc966-9t9tj"] Jan 08 23:34:41 crc kubenswrapper[4945]: W0108 23:34:41.245045 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90070b95_ae61_4ae4_b6b4_a4436fe457ef.slice/crio-8abebe7f73677f57fa2fa264e66dc5b3d4f4143562ae9a7436058c10ff613d65 WatchSource:0}: Error finding container 8abebe7f73677f57fa2fa264e66dc5b3d4f4143562ae9a7436058c10ff613d65: Status 404 returned error can't find the container with id 8abebe7f73677f57fa2fa264e66dc5b3d4f4143562ae9a7436058c10ff613d65 Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.389577 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-qwj2j"] Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.413748 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g\" (UID: \"cd1e4942-f561-478c-b456-93d8886d0a31\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" Jan 08 23:34:41 crc kubenswrapper[4945]: E0108 23:34:41.413912 4945 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 08 23:34:41 crc kubenswrapper[4945]: E0108 23:34:41.413960 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert podName:cd1e4942-f561-478c-b456-93d8886d0a31 nodeName:}" failed. No retries permitted until 2026-01-08 23:34:42.413944434 +0000 UTC m=+1152.725103380 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" (UID: "cd1e4942-f561-478c-b456-93d8886d0a31") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.430111 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v7fnt" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.515313 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-568985c78-wbhqw"] Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.520707 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-hnk29"] Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.717980 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.718075 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:41 crc kubenswrapper[4945]: E0108 23:34:41.718211 4945 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 08 23:34:41 crc kubenswrapper[4945]: E0108 23:34:41.718263 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs podName:640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d nodeName:}" failed. No retries permitted until 2026-01-08 23:34:42.718249097 +0000 UTC m=+1153.029408043 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs") pod "openstack-operator-controller-manager-57bd96d86c-vf2m9" (UID: "640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d") : secret "metrics-server-cert" not found Jan 08 23:34:41 crc kubenswrapper[4945]: E0108 23:34:41.719156 4945 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 08 23:34:41 crc kubenswrapper[4945]: E0108 23:34:41.719277 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs podName:640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d nodeName:}" failed. No retries permitted until 2026-01-08 23:34:42.719243121 +0000 UTC m=+1153.030402157 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs") pod "openstack-operator-controller-manager-57bd96d86c-vf2m9" (UID: "640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d") : secret "webhook-server-cert" not found Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.842716 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-fsjwc"] Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.850172 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-npd8w"] Jan 08 23:34:41 crc kubenswrapper[4945]: W0108 23:34:41.857071 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c233910_81d5_42fb_8fe9_af3f45260c72.slice/crio-ca5c5dc133f787a0f9f11baf7ed1c4cc066f1d1152c1990dd39302c5821e55e2 WatchSource:0}: Error finding container ca5c5dc133f787a0f9f11baf7ed1c4cc066f1d1152c1990dd39302c5821e55e2: Status 404 returned error can't find the container with id ca5c5dc133f787a0f9f11baf7ed1c4cc066f1d1152c1990dd39302c5821e55e2 Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.858861 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-48rg6"] Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.956124 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-gh5xr"] Jan 08 23:34:41 crc kubenswrapper[4945]: W0108 23:34:41.963320 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9560bc6_7245_4df3_8daa_19f79d9d3d12.slice/crio-44845f371076142411519cfab501a4ae4f2d5284e5b1767948edfcdce917b612 WatchSource:0}: Error finding container 44845f371076142411519cfab501a4ae4f2d5284e5b1767948edfcdce917b612: Status 404 returned error can't find the container with id 44845f371076142411519cfab501a4ae4f2d5284e5b1767948edfcdce917b612 Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.964510 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-4qb9v"] Jan 08 23:34:41 crc kubenswrapper[4945]: W0108 23:34:41.976248 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b0d4e49_ab97_4808_96da_642983c3bafa.slice/crio-42e14ea682d95c5568d59652b9190e4d79e863617bee63a723a829257f3f1324 WatchSource:0}: Error finding container 42e14ea682d95c5568d59652b9190e4d79e863617bee63a723a829257f3f1324: Status 404 returned error can't find the container with id 42e14ea682d95c5568d59652b9190e4d79e863617bee63a723a829257f3f1324 Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.977151 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-f6n8j"] Jan 08 23:34:41 crc kubenswrapper[4945]: W0108 23:34:41.982263 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1e9bdc2_26c3_4304_8af5_5423dccf220e.slice/crio-6680def2f800e9603476d661231dbcbf18da34c6aafeb03466d1cf3152af2205 WatchSource:0}: Error finding container 6680def2f800e9603476d661231dbcbf18da34c6aafeb03466d1cf3152af2205: Status 404 returned error can't find the container 
Jan 08 23:34:41 crc kubenswrapper[4945]: I0108 23:34:41.982943 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-q7lmc"]
Jan 08 23:34:41 crc kubenswrapper[4945]: W0108 23:34:41.993749 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77d96cd2_5140_418c_8c4b_0789ccd534e1.slice/crio-8fe561fad5d08a9dc99c9ef8c0d80aee5b998f78f0f66a80288271ff391e9001 WatchSource:0}: Error finding container 8fe561fad5d08a9dc99c9ef8c0d80aee5b998f78f0f66a80288271ff391e9001: Status 404 returned error can't find the container with id 8fe561fad5d08a9dc99c9ef8c0d80aee5b998f78f0f66a80288271ff391e9001
Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.036506 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v7fnt"]
Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.078574 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-dg8tj"]
Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.082551 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-vcpxm"]
Jan 08 23:34:42 crc kubenswrapper[4945]: W0108 23:34:42.084086 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37d70ea8_8db9_4194_a697_5e8f77c89be0.slice/crio-710b0fbf9834dd543c18e8ec0596e2f114802f7e8729122b0977acd3a6f960db WatchSource:0}: Error finding container 710b0fbf9834dd543c18e8ec0596e2f114802f7e8729122b0977acd3a6f960db: Status 404 returned error can't find the container with id 710b0fbf9834dd543c18e8ec0596e2f114802f7e8729122b0977acd3a6f960db
Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.087618 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-5twfl"]
Jan 08 23:34:42 crc kubenswrapper[4945]: W0108 23:34:42.091303 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaed9ac7f_f2e8_4729_8caa_d6c81f09a392.slice/crio-026a533a674af573ac91f747f08a4dc0fdda830ce00e9aa5a145f90ee6009226 WatchSource:0}: Error finding container 026a533a674af573ac91f747f08a4dc0fdda830ce00e9aa5a145f90ee6009226: Status 404 returned error can't find the container with id 026a533a674af573ac91f747f08a4dc0fdda830ce00e9aa5a145f90ee6009226
Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.091808 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9mxmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-6c866cfdcb-dg8tj_openstack-operators(37d70ea8-8db9-4194-a697-5e8f77c89be0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.092935 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dg8tj" podUID="37d70ea8-8db9-4194-a697-5e8f77c89be0" Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.095947 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9fvrt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-bb586bbf4-5twfl_openstack-operators(aed9ac7f-f2e8-4729-8caa-d6c81f09a392): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.098334 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5twfl" podUID="aed9ac7f-f2e8-4729-8caa-d6c81f09a392" Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.127322 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert\") pod \"infra-operator-controller-manager-6d99759cf-v5nq9\" (UID: \"f50a75b2-71ab-49c6-b184-f630dbfd9cc0\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9" Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.127462 4945 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.127521 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert podName:f50a75b2-71ab-49c6-b184-f630dbfd9cc0 nodeName:}" failed. No retries permitted until 2026-01-08 23:34:44.127499481 +0000 UTC m=+1154.438658427 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert") pod "infra-operator-controller-manager-6d99759cf-v5nq9" (UID: "f50a75b2-71ab-49c6-b184-f630dbfd9cc0") : secret "infra-operator-webhook-server-cert" not found Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.147284 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-xnlp6"] Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.161793 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b5dlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-68c649d9d-xnlp6_openstack-operators(1dfe9903-3eb9-4c02-a453-fe2a8314d79b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.164064 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-xnlp6" podUID="1dfe9903-3eb9-4c02-a453-fe2a8314d79b" Jan 08 23:34:42 crc kubenswrapper[4945]: W0108 23:34:42.166869 4945 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeed07e92_bbf8_4845_ac9a_3ab8eafadb58.slice/crio-348fc377fe66d7678200292766908b865b95612f44a2ad7f81c5c0547b236bce WatchSource:0}: Error finding container 348fc377fe66d7678200292766908b865b95612f44a2ad7f81c5c0547b236bce: Status 404 returned error can't find the container with id 348fc377fe66d7678200292766908b865b95612f44a2ad7f81c5c0547b236bce Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.168502 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-m49gl"] Jan 08 23:34:42 crc kubenswrapper[4945]: W0108 23:34:42.169030 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda62c3d41_f859_4eea_8e77_468d06e687bd.slice/crio-a8f744d49baf64135332d5a43034480434ad1d99fbe6f8a5d24b6baf0891c6b5 WatchSource:0}: Error finding container a8f744d49baf64135332d5a43034480434ad1d99fbe6f8a5d24b6baf0891c6b5: Status 404 returned error can't find the container with id a8f744d49baf64135332d5a43034480434ad1d99fbe6f8a5d24b6baf0891c6b5 Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.169267 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2rt2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
telemetry-operator-controller-manager-68d988df55-h6wf5_openstack-operators(eed07e92-bbf8-4845-ac9a-3ab8eafadb58): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.170432 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-h6wf5" podUID="eed07e92-bbf8-4845-ac9a-3ab8eafadb58" Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.173805 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h5frj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-9dbdf6486-m49gl_openstack-operators(a62c3d41-f859-4eea-8e77-468d06e687bd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.175035 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-m49gl" podUID="a62c3d41-f859-4eea-8e77-468d06e687bd" Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.176127 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-h6wf5"] Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.195007 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v7fnt" event={"ID":"e16d1a8a-0404-4007-86fd-886fed232b0b","Type":"ContainerStarted","Data":"eb1429c4e96a8077cf12742c1ef4a98bc446cbe0c01ebd2852710347df66a4b9"} Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.197245 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dg8tj" event={"ID":"37d70ea8-8db9-4194-a697-5e8f77c89be0","Type":"ContainerStarted","Data":"710b0fbf9834dd543c18e8ec0596e2f114802f7e8729122b0977acd3a6f960db"} Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.198745 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hnk29" event={"ID":"c32027c9-9810-4f56-afaf-b680d5baed3c","Type":"ContainerStarted","Data":"794123d65aa5d931bf72d1060ed06547a2a8322e36fc1a9b67134438d8d6be2d"} Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.199779 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dg8tj" podUID="37d70ea8-8db9-4194-a697-5e8f77c89be0" Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.206877 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4qb9v" event={"ID":"0b0d4e49-ab97-4808-96da-642983c3bafa","Type":"ContainerStarted","Data":"42e14ea682d95c5568d59652b9190e4d79e863617bee63a723a829257f3f1324"} Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.210056 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-gh5xr" event={"ID":"b9560bc6-7245-4df3-8daa-19f79d9d3d12","Type":"ContainerStarted","Data":"44845f371076142411519cfab501a4ae4f2d5284e5b1767948edfcdce917b612"} Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.212128 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-48rg6" event={"ID":"3c233910-81d5-42fb-8fe9-af3f45260c72","Type":"ContainerStarted","Data":"ca5c5dc133f787a0f9f11baf7ed1c4cc066f1d1152c1990dd39302c5821e55e2"} Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.213096 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-m49gl" event={"ID":"a62c3d41-f859-4eea-8e77-468d06e687bd","Type":"ContainerStarted","Data":"a8f744d49baf64135332d5a43034480434ad1d99fbe6f8a5d24b6baf0891c6b5"} Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.214154 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-m49gl" podUID="a62c3d41-f859-4eea-8e77-468d06e687bd" Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.215213 4945 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-vcpxm" event={"ID":"0383d95b-a4e8-401c-92f9-7564a1aa286a","Type":"ContainerStarted","Data":"cd74a7b742d10449a0efd511ed35ebc44a3dbc582822d3dd5b5e33a3e4eb04d2"} Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.222788 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-9t9tj" event={"ID":"f9869afd-afb1-4974-9b35-61c60e107d86","Type":"ContainerStarted","Data":"dee09489e1688e2ce1b70fc6e8d04b729d17bbb0ad6b2822ddc9aa90e9e02360"} Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.224744 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-wbhqw" event={"ID":"750dd75e-3bcc-4490-a77b-0e759b74b760","Type":"ContainerStarted","Data":"6193b5fbda53829e414fb7cc0f17d25ea3525cde8f8be0ae9d7ef9cbda977541"} Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.226571 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qwj2j" event={"ID":"1604a21a-39a8-4c27-886f-dd74a9c6ed92","Type":"ContainerStarted","Data":"37c9b151bb30a158dbeb6413f8c5bd92569b7b6bc3ba60a1c2ba66e0cad4e840"} Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.227545 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q7lmc" event={"ID":"77d96cd2-5140-418c-8c4b-0789ccd534e1","Type":"ContainerStarted","Data":"8fe561fad5d08a9dc99c9ef8c0d80aee5b998f78f0f66a80288271ff391e9001"} Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.230595 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-h6wf5" event={"ID":"eed07e92-bbf8-4845-ac9a-3ab8eafadb58","Type":"ContainerStarted","Data":"348fc377fe66d7678200292766908b865b95612f44a2ad7f81c5c0547b236bce"} Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.232071 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-h6wf5" podUID="eed07e92-bbf8-4845-ac9a-3ab8eafadb58" Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.232688 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-xnlp6" event={"ID":"1dfe9903-3eb9-4c02-a453-fe2a8314d79b","Type":"ContainerStarted","Data":"8a7275f5c542acc17afcabdc0adda9ebc8152bb4ee600e1db2dcd3dccaa8189f"} Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.234346 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-xnlp6" podUID="1dfe9903-3eb9-4c02-a453-fe2a8314d79b" Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.235704 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-f6n8j" 
event={"ID":"f1e9bdc2-26c3-4304-8af5-5423dccf220e","Type":"ContainerStarted","Data":"6680def2f800e9603476d661231dbcbf18da34c6aafeb03466d1cf3152af2205"} Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.236614 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-fsjwc" event={"ID":"18eb8b42-2595-4104-aaf4-005fda7ded69","Type":"ContainerStarted","Data":"61c1b393475e3bd6c40a7d064c0d88f0827c11620d14e85cb08f3747fc4b8119"} Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.238944 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5twfl" event={"ID":"aed9ac7f-f2e8-4729-8caa-d6c81f09a392","Type":"ContainerStarted","Data":"026a533a674af573ac91f747f08a4dc0fdda830ce00e9aa5a145f90ee6009226"} Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.240560 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5twfl" podUID="aed9ac7f-f2e8-4729-8caa-d6c81f09a392" Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.244935 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-k72k4" event={"ID":"90070b95-ae61-4ae4-b6b4-a4436fe457ef","Type":"ContainerStarted","Data":"8abebe7f73677f57fa2fa264e66dc5b3d4f4143562ae9a7436058c10ff613d65"} Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.248838 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-npd8w" event={"ID":"7c55b3d4-07b6-45cd-8216-90e88ffe9e59","Type":"ContainerStarted","Data":"313e1d0276933a4eb0503b2fab6177c94523d5517d9a4a6af8dbabb87ef90b6c"} Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.433622 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g\" (UID: \"cd1e4942-f561-478c-b456-93d8886d0a31\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.433932 4945 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.434034 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert podName:cd1e4942-f561-478c-b456-93d8886d0a31 nodeName:}" failed. No retries permitted until 2026-01-08 23:34:44.434011007 +0000 UTC m=+1154.745169953 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" (UID: "cd1e4942-f561-478c-b456-93d8886d0a31") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.738896 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:42 crc kubenswrapper[4945]: I0108 23:34:42.739023 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.740593 4945 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.740667 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs podName:640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d nodeName:}" failed. No retries permitted until 2026-01-08 23:34:44.740652247 +0000 UTC m=+1155.051811193 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs") pod "openstack-operator-controller-manager-57bd96d86c-vf2m9" (UID: "640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d") : secret "webhook-server-cert" not found Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.741039 4945 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 08 23:34:42 crc kubenswrapper[4945]: E0108 23:34:42.741071 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs podName:640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d nodeName:}" failed. No retries permitted until 2026-01-08 23:34:44.741063857 +0000 UTC m=+1155.052222803 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs") pod "openstack-operator-controller-manager-57bd96d86c-vf2m9" (UID: "640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d") : secret "metrics-server-cert" not found Jan 08 23:34:43 crc kubenswrapper[4945]: E0108 23:34:43.274331 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dg8tj" podUID="37d70ea8-8db9-4194-a697-5e8f77c89be0" Jan 08 23:34:43 crc kubenswrapper[4945]: E0108 23:34:43.274399 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-xnlp6" podUID="1dfe9903-3eb9-4c02-a453-fe2a8314d79b" Jan 08 23:34:43 crc kubenswrapper[4945]: E0108 23:34:43.274662 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:3c1b2858c64110448d801905fbbf3ffe7f78d264cc46ab12ab2d724842dba309\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-h6wf5" podUID="eed07e92-bbf8-4845-ac9a-3ab8eafadb58" Jan 08 23:34:43 crc kubenswrapper[4945]: E0108 23:34:43.275046 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-m49gl" podUID="a62c3d41-f859-4eea-8e77-468d06e687bd" Jan 08 23:34:43 crc kubenswrapper[4945]: E0108 23:34:43.275126 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5twfl" podUID="aed9ac7f-f2e8-4729-8caa-d6c81f09a392" Jan 08 23:34:43 crc kubenswrapper[4945]: I0108 23:34:43.597110 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:34:43 crc kubenswrapper[4945]: I0108 23:34:43.597161 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:34:43 crc kubenswrapper[4945]: I0108 23:34:43.597202 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:34:43 crc kubenswrapper[4945]: I0108 23:34:43.597730 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e3f7c0fd5402fc991541e7265a64423cf96ba0034b54b94c9210237909eb4a91"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 08 23:34:43 crc kubenswrapper[4945]: I0108 23:34:43.597775 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://e3f7c0fd5402fc991541e7265a64423cf96ba0034b54b94c9210237909eb4a91" gracePeriod=600 Jan 08 23:34:44 crc kubenswrapper[4945]: I0108 23:34:44.184447 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert\") pod \"infra-operator-controller-manager-6d99759cf-v5nq9\" (UID: \"f50a75b2-71ab-49c6-b184-f630dbfd9cc0\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9" Jan 08 23:34:44 crc kubenswrapper[4945]: E0108 23:34:44.184607 4945 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 08 23:34:44 crc kubenswrapper[4945]: E0108 23:34:44.184655 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert podName:f50a75b2-71ab-49c6-b184-f630dbfd9cc0 nodeName:}" failed. No retries permitted until 2026-01-08 23:34:48.184641871 +0000 UTC m=+1158.495800817 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert") pod "infra-operator-controller-manager-6d99759cf-v5nq9" (UID: "f50a75b2-71ab-49c6-b184-f630dbfd9cc0") : secret "infra-operator-webhook-server-cert" not found Jan 08 23:34:44 crc kubenswrapper[4945]: I0108 23:34:44.306369 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="e3f7c0fd5402fc991541e7265a64423cf96ba0034b54b94c9210237909eb4a91" exitCode=0 Jan 08 23:34:44 crc kubenswrapper[4945]: I0108 23:34:44.306409 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"e3f7c0fd5402fc991541e7265a64423cf96ba0034b54b94c9210237909eb4a91"} Jan 08 23:34:44 crc kubenswrapper[4945]: I0108 23:34:44.306442 4945 scope.go:117] "RemoveContainer" containerID="bb86089d7fa453c2e2295e7a4532a4489dac612ea805bc40de7f57ca4589bf0f" Jan 08 23:34:44 crc kubenswrapper[4945]: I0108 23:34:44.487980 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g\" (UID: \"cd1e4942-f561-478c-b456-93d8886d0a31\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" Jan 08 23:34:44 crc kubenswrapper[4945]: E0108 23:34:44.488371 4945 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 08 23:34:44 crc kubenswrapper[4945]: E0108 23:34:44.488518 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert podName:cd1e4942-f561-478c-b456-93d8886d0a31 nodeName:}" failed. No retries permitted until 2026-01-08 23:34:48.488492293 +0000 UTC m=+1158.799651239 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" (UID: "cd1e4942-f561-478c-b456-93d8886d0a31") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 08 23:34:44 crc kubenswrapper[4945]: I0108 23:34:44.791732 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:44 crc kubenswrapper[4945]: I0108 23:34:44.791880 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:44 crc kubenswrapper[4945]: E0108 23:34:44.792438 4945 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 08 23:34:44 crc kubenswrapper[4945]: E0108 23:34:44.792532 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs podName:640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d nodeName:}" failed. No retries permitted until 2026-01-08 23:34:48.792491248 +0000 UTC m=+1159.103650194 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs") pod "openstack-operator-controller-manager-57bd96d86c-vf2m9" (UID: "640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d") : secret "webhook-server-cert" not found Jan 08 23:34:44 crc kubenswrapper[4945]: E0108 23:34:44.793028 4945 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 08 23:34:44 crc kubenswrapper[4945]: E0108 23:34:44.793068 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs podName:640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d nodeName:}" failed. No retries permitted until 2026-01-08 23:34:48.793057982 +0000 UTC m=+1159.104216938 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs") pod "openstack-operator-controller-manager-57bd96d86c-vf2m9" (UID: "640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d") : secret "metrics-server-cert" not found Jan 08 23:34:48 crc kubenswrapper[4945]: I0108 23:34:48.246833 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert\") pod \"infra-operator-controller-manager-6d99759cf-v5nq9\" (UID: \"f50a75b2-71ab-49c6-b184-f630dbfd9cc0\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9" Jan 08 23:34:48 crc kubenswrapper[4945]: E0108 23:34:48.247065 4945 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 08 23:34:48 crc kubenswrapper[4945]: E0108 23:34:48.247252 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert podName:f50a75b2-71ab-49c6-b184-f630dbfd9cc0 nodeName:}" failed. No retries permitted until 2026-01-08 23:34:56.247233924 +0000 UTC m=+1166.558392870 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert") pod "infra-operator-controller-manager-6d99759cf-v5nq9" (UID: "f50a75b2-71ab-49c6-b184-f630dbfd9cc0") : secret "infra-operator-webhook-server-cert" not found Jan 08 23:34:48 crc kubenswrapper[4945]: I0108 23:34:48.550320 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g\" (UID: \"cd1e4942-f561-478c-b456-93d8886d0a31\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" Jan 08 23:34:48 crc kubenswrapper[4945]: E0108 23:34:48.550492 4945 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 08 23:34:48 crc kubenswrapper[4945]: E0108 23:34:48.550540 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert podName:cd1e4942-f561-478c-b456-93d8886d0a31 nodeName:}" failed. No retries permitted until 2026-01-08 23:34:56.550525722 +0000 UTC m=+1166.861684668 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" (UID: "cd1e4942-f561-478c-b456-93d8886d0a31") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 08 23:34:48 crc kubenswrapper[4945]: I0108 23:34:48.855408 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:48 crc kubenswrapper[4945]: I0108 23:34:48.855504 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:48 crc kubenswrapper[4945]: E0108 23:34:48.855623 4945 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 08 23:34:48 crc kubenswrapper[4945]: E0108 23:34:48.855685 4945 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 08 23:34:48 crc kubenswrapper[4945]: E0108 23:34:48.855710 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs podName:640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d nodeName:}" failed. No retries permitted until 2026-01-08 23:34:56.855690637 +0000 UTC m=+1167.166849683 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs") pod "openstack-operator-controller-manager-57bd96d86c-vf2m9" (UID: "640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d") : secret "webhook-server-cert" not found Jan 08 23:34:48 crc kubenswrapper[4945]: E0108 23:34:48.855734 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs podName:640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d nodeName:}" failed. No retries permitted until 2026-01-08 23:34:56.855718477 +0000 UTC m=+1167.166877413 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs") pod "openstack-operator-controller-manager-57bd96d86c-vf2m9" (UID: "640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d") : secret "metrics-server-cert" not found Jan 08 23:34:53 crc kubenswrapper[4945]: E0108 23:34:53.482258 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:afb66a0f8e1aa057888f7c304cc34cfea711805d9d1f05798aceb4029fef2989" Jan 08 23:34:53 crc kubenswrapper[4945]: E0108 23:34:53.483085 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:afb66a0f8e1aa057888f7c304cc34cfea711805d9d1f05798aceb4029fef2989,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnns5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-f6f74d6db-xvn4t_openstack-operators(18520dae-623d-499b-95c6-96b4de8d9bf4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:34:53 crc kubenswrapper[4945]: E0108 23:34:53.484382 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-xvn4t" 
podUID="18520dae-623d-499b-95c6-96b4de8d9bf4" Jan 08 23:34:54 crc kubenswrapper[4945]: E0108 23:34:54.172280 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04" Jan 08 23:34:54 crc kubenswrapper[4945]: E0108 23:34:54.172551 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k9g58,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-658dd65b86-hnk29_openstack-operators(c32027c9-9810-4f56-afaf-b680d5baed3c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:34:54 crc kubenswrapper[4945]: E0108 23:34:54.173775 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hnk29" podUID="c32027c9-9810-4f56-afaf-b680d5baed3c" Jan 08 23:34:54 crc kubenswrapper[4945]: E0108 23:34:54.384850 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:afb66a0f8e1aa057888f7c304cc34cfea711805d9d1f05798aceb4029fef2989\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-xvn4t" podUID="18520dae-623d-499b-95c6-96b4de8d9bf4" Jan 08 23:34:54 crc kubenswrapper[4945]: E0108 23:34:54.385060 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:573d7dba212cbc32101496a7cbe01e391af9891bed3bec717f16bed4d6c23e04\\\"\"" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hnk29" podUID="c32027c9-9810-4f56-afaf-b680d5baed3c" Jan 08 23:34:55 crc kubenswrapper[4945]: E0108 23:34:55.607906 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:174acf70c084144827fb8f96c5401a0a8def953bf0ff8929dccd629a550491b7" Jan 08 23:34:55 crc kubenswrapper[4945]: E0108 23:34:55.608395 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:174acf70c084144827fb8f96c5401a0a8def953bf0ff8929dccd629a550491b7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6sk8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-78979fc445-qwj2j_openstack-operators(1604a21a-39a8-4c27-886f-dd74a9c6ed92): ErrImagePull: rpc 
error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:34:55 crc kubenswrapper[4945]: E0108 23:34:55.609600 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qwj2j" podUID="1604a21a-39a8-4c27-886f-dd74a9c6ed92" Jan 08 23:34:56 crc kubenswrapper[4945]: I0108 23:34:56.265269 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert\") pod \"infra-operator-controller-manager-6d99759cf-v5nq9\" (UID: \"f50a75b2-71ab-49c6-b184-f630dbfd9cc0\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9" Jan 08 23:34:56 crc kubenswrapper[4945]: I0108 23:34:56.271196 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f50a75b2-71ab-49c6-b184-f630dbfd9cc0-cert\") pod \"infra-operator-controller-manager-6d99759cf-v5nq9\" (UID: \"f50a75b2-71ab-49c6-b184-f630dbfd9cc0\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9" Jan 08 23:34:56 crc kubenswrapper[4945]: E0108 23:34:56.397756 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:174acf70c084144827fb8f96c5401a0a8def953bf0ff8929dccd629a550491b7\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qwj2j" podUID="1604a21a-39a8-4c27-886f-dd74a9c6ed92" Jan 08 23:34:56 crc kubenswrapper[4945]: E0108 23:34:56.433512 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Jan 08 23:34:56 crc kubenswrapper[4945]: E0108 23:34:56.433904 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7jbw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7cd87b778f-q7lmc_openstack-operators(77d96cd2-5140-418c-8c4b-0789ccd534e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:34:56 crc kubenswrapper[4945]: E0108 23:34:56.435295 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q7lmc" podUID="77d96cd2-5140-418c-8c4b-0789ccd534e1" Jan 08 23:34:56 crc kubenswrapper[4945]: I0108 23:34:56.537264 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9" Jan 08 23:34:56 crc kubenswrapper[4945]: I0108 23:34:56.569334 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g\" (UID: \"cd1e4942-f561-478c-b456-93d8886d0a31\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" Jan 08 23:34:56 crc kubenswrapper[4945]: E0108 23:34:56.569495 4945 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 08 23:34:56 crc kubenswrapper[4945]: E0108 23:34:56.569551 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert podName:cd1e4942-f561-478c-b456-93d8886d0a31 nodeName:}" failed. No retries permitted until 2026-01-08 23:35:12.569534933 +0000 UTC m=+1182.880693879 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" (UID: "cd1e4942-f561-478c-b456-93d8886d0a31") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 08 23:34:56 crc kubenswrapper[4945]: I0108 23:34:56.891567 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:56 crc kubenswrapper[4945]: I0108 23:34:56.891698 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:34:56 crc kubenswrapper[4945]: E0108 23:34:56.891722 4945 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 08 23:34:56 crc kubenswrapper[4945]: E0108 23:34:56.891783 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs podName:640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d nodeName:}" failed. No retries permitted until 2026-01-08 23:35:12.891767072 +0000 UTC m=+1183.202926018 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs") pod "openstack-operator-controller-manager-57bd96d86c-vf2m9" (UID: "640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d") : secret "webhook-server-cert" not found Jan 08 23:34:56 crc kubenswrapper[4945]: E0108 23:34:56.891901 4945 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 08 23:34:56 crc kubenswrapper[4945]: E0108 23:34:56.891954 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs podName:640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d nodeName:}" failed. No retries permitted until 2026-01-08 23:35:12.891940576 +0000 UTC m=+1183.203099522 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs") pod "openstack-operator-controller-manager-57bd96d86c-vf2m9" (UID: "640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d") : secret "metrics-server-cert" not found Jan 08 23:34:57 crc kubenswrapper[4945]: E0108 23:34:57.204137 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Jan 08 23:34:57 crc kubenswrapper[4945]: E0108 23:34:57.204317 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zsjkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-bf6d4f946-4qb9v_openstack-operators(0b0d4e49-ab97-4808-96da-642983c3bafa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:34:57 crc kubenswrapper[4945]: E0108 23:34:57.206411 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4qb9v" 
podUID="0b0d4e49-ab97-4808-96da-642983c3bafa" Jan 08 23:34:57 crc kubenswrapper[4945]: E0108 23:34:57.403792 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4qb9v" podUID="0b0d4e49-ab97-4808-96da-642983c3bafa" Jan 08 23:34:57 crc kubenswrapper[4945]: E0108 23:34:57.404140 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q7lmc" podUID="77d96cd2-5140-418c-8c4b-0789ccd534e1" Jan 08 23:34:57 crc kubenswrapper[4945]: E0108 23:34:57.884753 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848" Jan 08 23:34:57 crc kubenswrapper[4945]: E0108 23:34:57.884927 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8vdz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-f99f54bc8-fsjwc_openstack-operators(18eb8b42-2595-4104-aaf4-005fda7ded69): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:34:57 crc kubenswrapper[4945]: E0108 23:34:57.886336 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-fsjwc" podUID="18eb8b42-2595-4104-aaf4-005fda7ded69" Jan 08 23:34:58 crc kubenswrapper[4945]: E0108 23:34:58.410095 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:202756538820b5fa874d07a71ece4f048f41ccca8228d359c8cd25a00e9c0848\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-fsjwc" podUID="18eb8b42-2595-4104-aaf4-005fda7ded69" Jan 08 23:34:58 crc kubenswrapper[4945]: E0108 23:34:58.493066 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 08 23:34:58 crc kubenswrapper[4945]: E0108 23:34:58.496453 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rgrpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-v7fnt_openstack-operators(e16d1a8a-0404-4007-86fd-886fed232b0b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:34:58 crc kubenswrapper[4945]: E0108 23:34:58.497751 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v7fnt" podUID="e16d1a8a-0404-4007-86fd-886fed232b0b" Jan 08 23:34:59 crc kubenswrapper[4945]: E0108 23:34:59.414579 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v7fnt" podUID="e16d1a8a-0404-4007-86fd-886fed232b0b" Jan 08 23:35:00 crc kubenswrapper[4945]: E0108 23:35:00.045956 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c" Jan 08 23:35:00 crc kubenswrapper[4945]: E0108 23:35:00.046213 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lfcfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 
8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-568985c78-wbhqw_openstack-operators(750dd75e-3bcc-4490-a77b-0e759b74b760): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:35:00 crc kubenswrapper[4945]: E0108 23:35:00.047425 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-wbhqw" podUID="750dd75e-3bcc-4490-a77b-0e759b74b760" Jan 08 23:35:00 crc kubenswrapper[4945]: E0108 23:35:00.419807 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:879d3d679b58ae84419b7907ad092ad4d24bcc9222ce621ce464fd0fea347b0c\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-568985c78-wbhqw" podUID="750dd75e-3bcc-4490-a77b-0e759b74b760" Jan 08 23:35:00 crc kubenswrapper[4945]: E0108 23:35:00.710754 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Jan 08 23:35:00 crc kubenswrapper[4945]: E0108 23:35:00.710963 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kcltb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5fbbf8b6cc-f6n8j_openstack-operators(f1e9bdc2-26c3-4304-8af5-5423dccf220e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:35:00 crc kubenswrapper[4945]: E0108 23:35:00.712094 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-f6n8j" podUID="f1e9bdc2-26c3-4304-8af5-5423dccf220e" Jan 08 23:35:01 crc kubenswrapper[4945]: E0108 23:35:01.426564 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-f6n8j" podUID="f1e9bdc2-26c3-4304-8af5-5423dccf220e" Jan 08 23:35:03 crc kubenswrapper[4945]: I0108 23:35:03.729064 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9"] Jan 08 23:35:03 crc kubenswrapper[4945]: W0108 23:35:03.756810 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf50a75b2_71ab_49c6_b184_f630dbfd9cc0.slice/crio-d9e0e8a621cf5d9ce4ed1218021aaf3b7f55c6a752a3f57e65434ceac72e7e99 WatchSource:0}: Error finding container d9e0e8a621cf5d9ce4ed1218021aaf3b7f55c6a752a3f57e65434ceac72e7e99: Status 404 returned error can't find the container with id d9e0e8a621cf5d9ce4ed1218021aaf3b7f55c6a752a3f57e65434ceac72e7e99 Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.453504 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-m49gl" 
event={"ID":"a62c3d41-f859-4eea-8e77-468d06e687bd","Type":"ContainerStarted","Data":"35c2a251d6224d48b41dcd8d43c7ea8c1bd590dcf419def05358f5b39c3f4bca"} Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.453946 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-m49gl" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.454935 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dg8tj" event={"ID":"37d70ea8-8db9-4194-a697-5e8f77c89be0","Type":"ContainerStarted","Data":"7281369611abc3752d946e7ae576cb97649b3d63cf387d26e9c310f7008010c7"} Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.455136 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dg8tj" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.456218 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9" event={"ID":"f50a75b2-71ab-49c6-b184-f630dbfd9cc0","Type":"ContainerStarted","Data":"d9e0e8a621cf5d9ce4ed1218021aaf3b7f55c6a752a3f57e65434ceac72e7e99"} Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.481853 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-k72k4" event={"ID":"90070b95-ae61-4ae4-b6b4-a4436fe457ef","Type":"ContainerStarted","Data":"92ff8d1b267a279b27c0c24a4e8de232e282e42067dd5a86e4bf9b7d4de5a554"} Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.482423 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-k72k4" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.507736 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-npd8w" event={"ID":"7c55b3d4-07b6-45cd-8216-90e88ffe9e59","Type":"ContainerStarted","Data":"8f0bb2fb0ffeaf6e0f4dfe2bb878be4c5bbaba5bc5dea53562692b9c4b7c4806"} Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.508399 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-npd8w" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.519649 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-m49gl" podStartSLOduration=3.512399259 podStartE2EDuration="24.519635789s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:42.173643482 +0000 UTC m=+1152.484802428" lastFinishedPulling="2026-01-08 23:35:03.180880012 +0000 UTC m=+1173.492038958" observedRunningTime="2026-01-08 23:35:04.516680567 +0000 UTC m=+1174.827839513" watchObservedRunningTime="2026-01-08 23:35:04.519635789 +0000 UTC m=+1174.830794735" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.523748 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-gh5xr" event={"ID":"b9560bc6-7245-4df3-8daa-19f79d9d3d12","Type":"ContainerStarted","Data":"168acd235085a965a2754d4de6039df6f2ee8fb4bf4538678657dcedff147d24"} Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.524183 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-gh5xr" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.538215 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-48rg6" event={"ID":"3c233910-81d5-42fb-8fe9-af3f45260c72","Type":"ContainerStarted","Data":"a8776fc50637b081e9984f45eef65475a14aa7b17cac0c6725bd5be7e42df6b2"} Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.538902 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-48rg6" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.545397 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dg8tj" podStartSLOduration=3.314223584 podStartE2EDuration="24.545372865s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:42.09168455 +0000 UTC m=+1152.402843496" lastFinishedPulling="2026-01-08 23:35:03.322833831 +0000 UTC m=+1173.633992777" observedRunningTime="2026-01-08 23:35:04.535899284 +0000 UTC m=+1174.847058230" watchObservedRunningTime="2026-01-08 23:35:04.545372865 +0000 UTC m=+1174.856531811" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.553378 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-vcpxm" event={"ID":"0383d95b-a4e8-401c-92f9-7564a1aa286a","Type":"ContainerStarted","Data":"5f9c271b4ad08f26032816df5b1c33dc8787615c611572fd36c60debfef210d4"} Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.553981 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-vcpxm" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.581584 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5twfl" event={"ID":"aed9ac7f-f2e8-4729-8caa-d6c81f09a392","Type":"ContainerStarted","Data":"ee21fb1c62bca241632b0d3936dcfa7624cde8189b5ba7ba255e39fbf84faf57"} Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.582399 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5twfl" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.595634 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-9t9tj" event={"ID":"f9869afd-afb1-4974-9b35-61c60e107d86","Type":"ContainerStarted","Data":"aaf6e9653037b7ffff02dc2d8dac1be03790091e0e9299bec17d5a1ffe9f16c1"} Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.595766 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-9t9tj" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.598417 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-k72k4" podStartSLOduration=5.141290934 podStartE2EDuration="24.598404823s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:41.256109359 +0000 UTC m=+1151.567268305" lastFinishedPulling="2026-01-08 23:35:00.713223248 +0000 UTC m=+1171.024382194" observedRunningTime="2026-01-08 23:35:04.598349832 +0000 UTC m=+1174.909508778" 
watchObservedRunningTime="2026-01-08 23:35:04.598404823 +0000 UTC m=+1174.909563769" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.599030 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-npd8w" podStartSLOduration=5.7404303599999995 podStartE2EDuration="24.599025648s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:41.85459843 +0000 UTC m=+1152.165757376" lastFinishedPulling="2026-01-08 23:35:00.713193718 +0000 UTC m=+1171.024352664" observedRunningTime="2026-01-08 23:35:04.572507374 +0000 UTC m=+1174.883666320" watchObservedRunningTime="2026-01-08 23:35:04.599025648 +0000 UTC m=+1174.910184594" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.608546 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"6ea29ffcc641534adace455f20d68f37c7c8da0950e832af522e2661b455a0c2"} Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.616057 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-h6wf5" event={"ID":"eed07e92-bbf8-4845-ac9a-3ab8eafadb58","Type":"ContainerStarted","Data":"7bd333119162eab8c29da8b6970824480b4b5ad279b4feef10f6f70af2a1bf45"} Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.616269 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-h6wf5" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.624493 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-xnlp6" event={"ID":"1dfe9903-3eb9-4c02-a453-fe2a8314d79b","Type":"ContainerStarted","Data":"49d71d2093e5efbf0b156b6a76e0085b963ad15ffff61a91e87b82571f44c3d3"} Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.624724 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-xnlp6" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.632806 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-48rg6" podStartSLOduration=5.778963576 podStartE2EDuration="24.632787278s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:41.860151415 +0000 UTC m=+1152.171310361" lastFinishedPulling="2026-01-08 23:35:00.713975117 +0000 UTC m=+1171.025134063" observedRunningTime="2026-01-08 23:35:04.632640175 +0000 UTC m=+1174.943799121" watchObservedRunningTime="2026-01-08 23:35:04.632787278 +0000 UTC m=+1174.943946224" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.660196 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-vcpxm" podStartSLOduration=4.97030572 podStartE2EDuration="24.660178164s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:42.091369863 +0000 UTC m=+1152.402528809" lastFinishedPulling="2026-01-08 23:35:01.781242307 +0000 UTC m=+1172.092401253" observedRunningTime="2026-01-08 23:35:04.657676393 +0000 UTC m=+1174.968835339" watchObservedRunningTime="2026-01-08 23:35:04.660178164 +0000 UTC m=+1174.971337110" Jan 08 23:35:04 crc 
kubenswrapper[4945]: I0108 23:35:04.718615 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5twfl" podStartSLOduration=3.55276425 podStartE2EDuration="24.718600583s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:42.095811171 +0000 UTC m=+1152.406970117" lastFinishedPulling="2026-01-08 23:35:03.261647504 +0000 UTC m=+1173.572806450" observedRunningTime="2026-01-08 23:35:04.689828134 +0000 UTC m=+1175.000987080" watchObservedRunningTime="2026-01-08 23:35:04.718600583 +0000 UTC m=+1175.029759529" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.762959 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-gh5xr" podStartSLOduration=4.945444396 podStartE2EDuration="24.762940231s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:41.966587011 +0000 UTC m=+1152.277745957" lastFinishedPulling="2026-01-08 23:35:01.784082846 +0000 UTC m=+1172.095241792" observedRunningTime="2026-01-08 23:35:04.719201528 +0000 UTC m=+1175.030360474" watchObservedRunningTime="2026-01-08 23:35:04.762940231 +0000 UTC m=+1175.074099177" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.797630 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-9t9tj" podStartSLOduration=4.320116243 podStartE2EDuration="24.797613903s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:41.303746937 +0000 UTC m=+1151.614905883" lastFinishedPulling="2026-01-08 23:35:01.781244597 +0000 UTC m=+1172.092403543" observedRunningTime="2026-01-08 23:35:04.764609291 +0000 UTC m=+1175.075768237" watchObservedRunningTime="2026-01-08 23:35:04.797613903 +0000 UTC m=+1175.108772849" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.801978 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-xnlp6" podStartSLOduration=3.659098983 podStartE2EDuration="24.801964579s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:42.161568318 +0000 UTC m=+1152.472727264" lastFinishedPulling="2026-01-08 23:35:03.304433914 +0000 UTC m=+1173.615592860" observedRunningTime="2026-01-08 23:35:04.795060951 +0000 UTC m=+1175.106219897" watchObservedRunningTime="2026-01-08 23:35:04.801964579 +0000 UTC m=+1175.113123525" Jan 08 23:35:04 crc kubenswrapper[4945]: I0108 23:35:04.828067 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-h6wf5" podStartSLOduration=3.741400214 podStartE2EDuration="24.828047653s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:42.169110132 +0000 UTC m=+1152.480269078" lastFinishedPulling="2026-01-08 23:35:03.255757571 +0000 UTC m=+1173.566916517" observedRunningTime="2026-01-08 23:35:04.825972732 +0000 UTC m=+1175.137131678" watchObservedRunningTime="2026-01-08 23:35:04.828047653 +0000 UTC m=+1175.139206599" Jan 08 23:35:07 crc kubenswrapper[4945]: I0108 23:35:07.643720 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9" 
event={"ID":"f50a75b2-71ab-49c6-b184-f630dbfd9cc0","Type":"ContainerStarted","Data":"d4045461b9277e35c9f9cc6ec62ff7460241c4e15c6caff6c9bf23c4c4a8349c"} Jan 08 23:35:07 crc kubenswrapper[4945]: I0108 23:35:07.644373 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9" Jan 08 23:35:07 crc kubenswrapper[4945]: I0108 23:35:07.644751 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-xvn4t" event={"ID":"18520dae-623d-499b-95c6-96b4de8d9bf4","Type":"ContainerStarted","Data":"4997635e5714df82d5ad1154a11349e12cf77a4325de3d3f45429835565423c3"} Jan 08 23:35:07 crc kubenswrapper[4945]: I0108 23:35:07.645407 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-xvn4t" Jan 08 23:35:07 crc kubenswrapper[4945]: I0108 23:35:07.667097 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9" podStartSLOduration=24.607167775 podStartE2EDuration="27.667051119s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:35:03.761593901 +0000 UTC m=+1174.072752837" lastFinishedPulling="2026-01-08 23:35:06.821477235 +0000 UTC m=+1177.132636181" observedRunningTime="2026-01-08 23:35:07.666370002 +0000 UTC m=+1177.977528958" watchObservedRunningTime="2026-01-08 23:35:07.667051119 +0000 UTC m=+1177.978210075" Jan 08 23:35:07 crc kubenswrapper[4945]: I0108 23:35:07.692871 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-xvn4t" podStartSLOduration=1.780914281 podStartE2EDuration="27.692853056s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:40.906546947 +0000 UTC m=+1151.217705893" lastFinishedPulling="2026-01-08 23:35:06.818485732 +0000 UTC m=+1177.129644668" observedRunningTime="2026-01-08 23:35:07.688684605 +0000 UTC m=+1177.999843561" watchObservedRunningTime="2026-01-08 23:35:07.692853056 +0000 UTC m=+1178.004011992" Jan 08 23:35:08 crc kubenswrapper[4945]: I0108 23:35:08.655820 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qwj2j" event={"ID":"1604a21a-39a8-4c27-886f-dd74a9c6ed92","Type":"ContainerStarted","Data":"dcaf12b8fe450cfb96a4cc06d2efb0292e2efe9e4893850d5699b8e35f814266"} Jan 08 23:35:08 crc kubenswrapper[4945]: I0108 23:35:08.657476 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qwj2j" Jan 08 23:35:08 crc kubenswrapper[4945]: I0108 23:35:08.659510 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q7lmc" event={"ID":"77d96cd2-5140-418c-8c4b-0789ccd534e1","Type":"ContainerStarted","Data":"e813e21a0d3a240fa8f3cc96a349386169de89c85cbf815687b6f38fc0dc2fa5"} Jan 08 23:35:08 crc kubenswrapper[4945]: I0108 23:35:08.659683 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q7lmc" Jan 08 23:35:08 crc kubenswrapper[4945]: I0108 23:35:08.666306 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hnk29" 
event={"ID":"c32027c9-9810-4f56-afaf-b680d5baed3c","Type":"ContainerStarted","Data":"67a88189e4e3c0fd35a00ad968c06175d08e8fbfd86d6d024f17ff58f4034b4f"} Jan 08 23:35:08 crc kubenswrapper[4945]: I0108 23:35:08.706894 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hnk29" podStartSLOduration=2.701668531 podStartE2EDuration="28.706880283s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:41.624611082 +0000 UTC m=+1151.935770028" lastFinishedPulling="2026-01-08 23:35:07.629822834 +0000 UTC m=+1177.940981780" observedRunningTime="2026-01-08 23:35:08.70468614 +0000 UTC m=+1179.015845086" watchObservedRunningTime="2026-01-08 23:35:08.706880283 +0000 UTC m=+1179.018039229" Jan 08 23:35:08 crc kubenswrapper[4945]: I0108 23:35:08.710157 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qwj2j" podStartSLOduration=1.6932121690000002 podStartE2EDuration="28.710150092s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:41.422129033 +0000 UTC m=+1151.733287979" lastFinishedPulling="2026-01-08 23:35:08.439066956 +0000 UTC m=+1178.750225902" observedRunningTime="2026-01-08 23:35:08.690795162 +0000 UTC m=+1179.001954128" watchObservedRunningTime="2026-01-08 23:35:08.710150092 +0000 UTC m=+1179.021309038" Jan 08 23:35:08 crc kubenswrapper[4945]: I0108 23:35:08.726203 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q7lmc" podStartSLOduration=2.2882412260000002 podStartE2EDuration="28.726181422s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:42.000589667 +0000 UTC m=+1152.311748633" lastFinishedPulling="2026-01-08 23:35:08.438529883 +0000 UTC m=+1178.749688829" observedRunningTime="2026-01-08 23:35:08.720898633 +0000 UTC m=+1179.032057579" watchObservedRunningTime="2026-01-08 23:35:08.726181422 +0000 UTC m=+1179.037340368" Jan 08 23:35:10 crc kubenswrapper[4945]: I0108 23:35:10.503105 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-k72k4" Jan 08 23:35:10 crc kubenswrapper[4945]: I0108 23:35:10.573310 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7b549fc966-9t9tj" Jan 08 23:35:10 crc kubenswrapper[4945]: I0108 23:35:10.593948 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hnk29" Jan 08 23:35:10 crc kubenswrapper[4945]: I0108 23:35:10.793575 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-npd8w" Jan 08 23:35:10 crc kubenswrapper[4945]: I0108 23:35:10.897914 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-48rg6" Jan 08 23:35:10 crc kubenswrapper[4945]: I0108 23:35:10.920802 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-gh5xr" Jan 08 23:35:11 crc kubenswrapper[4945]: I0108 23:35:11.000661 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-xnlp6" Jan 08 23:35:11 crc kubenswrapper[4945]: I0108 23:35:11.063823 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-vcpxm" Jan 08 23:35:11 crc kubenswrapper[4945]: I0108 23:35:11.065587 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5twfl" Jan 08 23:35:11 crc kubenswrapper[4945]: I0108 23:35:11.153156 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-h6wf5" Jan 08 23:35:11 crc kubenswrapper[4945]: I0108 23:35:11.205755 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-dg8tj" Jan 08 23:35:11 crc kubenswrapper[4945]: I0108 23:35:11.227143 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-m49gl" Jan 08 23:35:12 crc kubenswrapper[4945]: I0108 23:35:12.584426 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g\" (UID: \"cd1e4942-f561-478c-b456-93d8886d0a31\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" Jan 08 23:35:12 crc kubenswrapper[4945]: I0108 23:35:12.593707 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd1e4942-f561-478c-b456-93d8886d0a31-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g\" (UID: \"cd1e4942-f561-478c-b456-93d8886d0a31\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" Jan 08 23:35:12 crc kubenswrapper[4945]: I0108 23:35:12.700303 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-568985c78-wbhqw" event={"ID":"750dd75e-3bcc-4490-a77b-0e759b74b760","Type":"ContainerStarted","Data":"ddd2c8751394a02215cb1429fb87482e23360503a30b8f4285e7906cb8971b08"} Jan 08 23:35:12 crc kubenswrapper[4945]: I0108 23:35:12.701204 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-568985c78-wbhqw" Jan 08 23:35:12 crc kubenswrapper[4945]: I0108 23:35:12.702595 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4qb9v" event={"ID":"0b0d4e49-ab97-4808-96da-642983c3bafa","Type":"ContainerStarted","Data":"6f42a00ad48daaa1e23037b4980a4f9e565a8089827165f6bb1e018d361073de"} Jan 08 23:35:12 crc kubenswrapper[4945]: I0108 23:35:12.702905 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4qb9v" Jan 08 23:35:12 crc kubenswrapper[4945]: I0108 23:35:12.742681 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-568985c78-wbhqw" podStartSLOduration=2.424574798 podStartE2EDuration="32.742648116s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:41.604635637 
+0000 UTC m=+1151.915794583" lastFinishedPulling="2026-01-08 23:35:11.922708955 +0000 UTC m=+1182.233867901" observedRunningTime="2026-01-08 23:35:12.736351593 +0000 UTC m=+1183.047510539" watchObservedRunningTime="2026-01-08 23:35:12.742648116 +0000 UTC m=+1183.053807062" Jan 08 23:35:12 crc kubenswrapper[4945]: I0108 23:35:12.759636 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4qb9v" podStartSLOduration=2.737280597 podStartE2EDuration="32.759613159s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:41.979220948 +0000 UTC m=+1152.290379904" lastFinishedPulling="2026-01-08 23:35:12.00155352 +0000 UTC m=+1182.312712466" observedRunningTime="2026-01-08 23:35:12.753056879 +0000 UTC m=+1183.064215825" watchObservedRunningTime="2026-01-08 23:35:12.759613159 +0000 UTC m=+1183.070772105" Jan 08 23:35:12 crc kubenswrapper[4945]: I0108 23:35:12.813722 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" Jan 08 23:35:12 crc kubenswrapper[4945]: I0108 23:35:12.892066 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:35:12 crc kubenswrapper[4945]: I0108 23:35:12.892139 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:35:12 crc kubenswrapper[4945]: I0108 23:35:12.897390 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-metrics-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:35:12 crc kubenswrapper[4945]: I0108 23:35:12.897602 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d-webhook-certs\") pod \"openstack-operator-controller-manager-57bd96d86c-vf2m9\" (UID: \"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d\") " pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:35:13 crc kubenswrapper[4945]: I0108 23:35:13.144782 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:35:13 crc kubenswrapper[4945]: I0108 23:35:13.162482 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g"] Jan 08 23:35:13 crc kubenswrapper[4945]: I0108 23:35:13.433892 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9"] Jan 08 23:35:13 crc kubenswrapper[4945]: W0108 23:35:13.440941 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod640c2f26_25d6_4aa3_a5ed_fb6a0890bc3d.slice/crio-1598adb1d0e3f55860066dfb645d4b328a2f24d83e966c6478002d99a2ced295 WatchSource:0}: Error finding container 1598adb1d0e3f55860066dfb645d4b328a2f24d83e966c6478002d99a2ced295: Status 404 returned error can't find the container with id 1598adb1d0e3f55860066dfb645d4b328a2f24d83e966c6478002d99a2ced295 Jan 08 23:35:13 crc kubenswrapper[4945]: I0108 23:35:13.711138 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" event={"ID":"cd1e4942-f561-478c-b456-93d8886d0a31","Type":"ContainerStarted","Data":"83e52e1f974226cb8684a8600becef8dd8805144af66ffeb522377a101c9cea6"} Jan 08 23:35:13 crc kubenswrapper[4945]: I0108 23:35:13.712458 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v7fnt" event={"ID":"e16d1a8a-0404-4007-86fd-886fed232b0b","Type":"ContainerStarted","Data":"3b2895ec62760de59e9ef7b7fb501f3dc7e79573c768206b68605b1d747c39b7"} Jan 08 23:35:13 crc kubenswrapper[4945]: I0108 23:35:13.714651 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-f6n8j" event={"ID":"f1e9bdc2-26c3-4304-8af5-5423dccf220e","Type":"ContainerStarted","Data":"e7fe1c1f43b19c88bb7e62f8173a89eadf8744af15fe390588d426a3d82b62fc"} Jan 08 23:35:13 crc kubenswrapper[4945]: I0108 23:35:13.714878 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-f6n8j" Jan 08 23:35:13 crc kubenswrapper[4945]: I0108 23:35:13.717057 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-fsjwc" event={"ID":"18eb8b42-2595-4104-aaf4-005fda7ded69","Type":"ContainerStarted","Data":"8b42c49b9fb3b9e21be4a9571f1b9d74877f80235e03edb947aeb1a7fee33ed3"} Jan 08 23:35:13 crc kubenswrapper[4945]: I0108 23:35:13.717646 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-fsjwc" Jan 08 23:35:13 crc kubenswrapper[4945]: I0108 23:35:13.719909 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" event={"ID":"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d","Type":"ContainerStarted","Data":"ebf8fdd766c173b8a9bf8e91dd1f6eff8de89b2b416510a70472c7b26e7c857d"} Jan 08 23:35:13 crc kubenswrapper[4945]: I0108 23:35:13.719933 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:35:13 crc kubenswrapper[4945]: I0108 23:35:13.719943 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" event={"ID":"640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d","Type":"ContainerStarted","Data":"1598adb1d0e3f55860066dfb645d4b328a2f24d83e966c6478002d99a2ced295"} Jan 08 23:35:13 crc kubenswrapper[4945]: I0108 23:35:13.731571 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v7fnt" podStartSLOduration=3.364563206 podStartE2EDuration="33.731549792s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:42.064057099 +0000 UTC m=+1152.375216045" lastFinishedPulling="2026-01-08 23:35:12.431043675 +0000 UTC m=+1182.742202631" observedRunningTime="2026-01-08 23:35:13.730705702 +0000 UTC m=+1184.041864648" watchObservedRunningTime="2026-01-08 23:35:13.731549792 +0000 UTC m=+1184.042708938" Jan 08 23:35:13 crc kubenswrapper[4945]: I0108 23:35:13.756432 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-f6n8j" podStartSLOduration=3.285595287 podStartE2EDuration="33.756412426s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:41.986472844 +0000 UTC m=+1152.297631790" lastFinishedPulling="2026-01-08 23:35:12.457289983 +0000 UTC m=+1182.768448929" observedRunningTime="2026-01-08 23:35:13.749930929 +0000 UTC m=+1184.061089895" watchObservedRunningTime="2026-01-08 23:35:13.756412426 +0000 UTC m=+1184.067571372" Jan 08 23:35:13 crc kubenswrapper[4945]: I0108 23:35:13.786123 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" podStartSLOduration=33.786100987 podStartE2EDuration="33.786100987s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:35:13.777233152 +0000 UTC m=+1184.088392108" watchObservedRunningTime="2026-01-08 23:35:13.786100987 +0000 UTC m=+1184.097259943" Jan 08 23:35:13 crc kubenswrapper[4945]: I0108 23:35:13.798398 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-fsjwc" podStartSLOduration=2.1484727 podStartE2EDuration="33.798379696s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:34:41.854394215 +0000 UTC m=+1152.165553161" lastFinishedPulling="2026-01-08 23:35:13.504301211 +0000 UTC m=+1183.815460157" observedRunningTime="2026-01-08 23:35:13.797198917 +0000 UTC m=+1184.108357873" watchObservedRunningTime="2026-01-08 23:35:13.798379696 +0000 UTC m=+1184.109538642" Jan 08 23:35:16 crc kubenswrapper[4945]: I0108 23:35:16.543795 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-v5nq9" Jan 08 23:35:20 crc kubenswrapper[4945]: I0108 23:35:20.476682 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-xvn4t" Jan 08 23:35:20 crc kubenswrapper[4945]: I0108 23:35:20.491960 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-qwj2j" Jan 08 23:35:20 crc kubenswrapper[4945]: I0108 23:35:20.593000 4945 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hnk29" Jan 08 23:35:20 crc kubenswrapper[4945]: I0108 23:35:20.749734 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-568985c78-wbhqw" Jan 08 23:35:20 crc kubenswrapper[4945]: I0108 23:35:20.754482 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-fsjwc" Jan 08 23:35:20 crc kubenswrapper[4945]: I0108 23:35:20.916408 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-f6n8j" Jan 08 23:35:20 crc kubenswrapper[4945]: I0108 23:35:20.951029 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-q7lmc" Jan 08 23:35:21 crc kubenswrapper[4945]: I0108 23:35:21.063930 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-4qb9v" Jan 08 23:35:23 crc kubenswrapper[4945]: I0108 23:35:23.151721 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-57bd96d86c-vf2m9" Jan 08 23:35:24 crc kubenswrapper[4945]: E0108 23:35:24.973450 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:5d09c9ffa6ee479724f6da786cb35902b87578365dac2035c222f5e4f752d208" Jan 08 23:35:24 crc kubenswrapper[4945]: E0108 23:35:24.974135 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:5d09c9ffa6ee479724f6da786cb35902b87578365dac2035c222f5e4f752d208,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_I
MAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELAT
ED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-an
telope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMA
GE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ps8w9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g_openstack-operators(cd1e4942-f561-478c-b456-93d8886d0a31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:35:24 crc kubenswrapper[4945]: E0108 23:35:24.975422 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" podUID="cd1e4942-f561-478c-b456-93d8886d0a31" Jan 08 23:35:25 crc kubenswrapper[4945]: E0108 23:35:25.810227 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:5d09c9ffa6ee479724f6da786cb35902b87578365dac2035c222f5e4f752d208\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" podUID="cd1e4942-f561-478c-b456-93d8886d0a31" Jan 08 23:35:37 crc kubenswrapper[4945]: I0108 23:35:37.902686 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" event={"ID":"cd1e4942-f561-478c-b456-93d8886d0a31","Type":"ContainerStarted","Data":"6d688b97ff660f0feb8119da364dfb1689f1c8e4b5d1a53f8553b33409974934"} Jan 08 23:35:37 crc kubenswrapper[4945]: I0108 23:35:37.903905 4945 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" Jan 08 23:35:37 crc kubenswrapper[4945]: I0108 23:35:37.938949 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" podStartSLOduration=33.685553986 podStartE2EDuration="57.938926593s" podCreationTimestamp="2026-01-08 23:34:40 +0000 UTC" firstStartedPulling="2026-01-08 23:35:13.193341537 +0000 UTC m=+1183.504500483" lastFinishedPulling="2026-01-08 23:35:37.446714144 +0000 UTC m=+1207.757873090" observedRunningTime="2026-01-08 23:35:37.936572996 +0000 UTC m=+1208.247731952" watchObservedRunningTime="2026-01-08 23:35:37.938926593 +0000 UTC m=+1208.250085539" Jan 08 23:35:42 crc kubenswrapper[4945]: I0108 23:35:42.822234 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g" Jan 08 23:36:00 crc kubenswrapper[4945]: I0108 23:36:00.945172 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mp8fx"] Jan 08 23:36:00 crc kubenswrapper[4945]: I0108 23:36:00.962433 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mp8fx"] Jan 08 23:36:00 crc kubenswrapper[4945]: I0108 23:36:00.962537 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mp8fx" Jan 08 23:36:00 crc kubenswrapper[4945]: I0108 23:36:00.964463 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 08 23:36:00 crc kubenswrapper[4945]: I0108 23:36:00.964715 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 08 23:36:00 crc kubenswrapper[4945]: I0108 23:36:00.965051 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-rchbb" Jan 08 23:36:00 crc kubenswrapper[4945]: I0108 23:36:00.965202 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.051951 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tw4sh"] Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.063850 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-tw4sh" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.066322 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.071094 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tw4sh"] Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.164926 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8360f40b-d24b-42ab-8240-94ad3970531d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-tw4sh\" (UID: \"8360f40b-d24b-42ab-8240-94ad3970531d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tw4sh" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.164984 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57p2w\" (UniqueName: \"kubernetes.io/projected/5602a7fe-504e-4e6e-83ed-0dd458ef496d-kube-api-access-57p2w\") pod \"dnsmasq-dns-675f4bcbfc-mp8fx\" (UID: \"5602a7fe-504e-4e6e-83ed-0dd458ef496d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mp8fx" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.165033 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8360f40b-d24b-42ab-8240-94ad3970531d-config\") pod \"dnsmasq-dns-78dd6ddcc-tw4sh\" (UID: \"8360f40b-d24b-42ab-8240-94ad3970531d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tw4sh" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.165055 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xq6m\" (UniqueName: \"kubernetes.io/projected/8360f40b-d24b-42ab-8240-94ad3970531d-kube-api-access-6xq6m\") pod \"dnsmasq-dns-78dd6ddcc-tw4sh\" (UID: \"8360f40b-d24b-42ab-8240-94ad3970531d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tw4sh" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.165076 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5602a7fe-504e-4e6e-83ed-0dd458ef496d-config\") pod \"dnsmasq-dns-675f4bcbfc-mp8fx\" (UID: \"5602a7fe-504e-4e6e-83ed-0dd458ef496d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mp8fx" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.266226 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8360f40b-d24b-42ab-8240-94ad3970531d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-tw4sh\" (UID: \"8360f40b-d24b-42ab-8240-94ad3970531d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tw4sh" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.266285 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57p2w\" (UniqueName: \"kubernetes.io/projected/5602a7fe-504e-4e6e-83ed-0dd458ef496d-kube-api-access-57p2w\") pod \"dnsmasq-dns-675f4bcbfc-mp8fx\" (UID: \"5602a7fe-504e-4e6e-83ed-0dd458ef496d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mp8fx" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.266312 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8360f40b-d24b-42ab-8240-94ad3970531d-config\") pod \"dnsmasq-dns-78dd6ddcc-tw4sh\" (UID: \"8360f40b-d24b-42ab-8240-94ad3970531d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tw4sh" Jan 
08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.266334 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xq6m\" (UniqueName: \"kubernetes.io/projected/8360f40b-d24b-42ab-8240-94ad3970531d-kube-api-access-6xq6m\") pod \"dnsmasq-dns-78dd6ddcc-tw4sh\" (UID: \"8360f40b-d24b-42ab-8240-94ad3970531d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tw4sh" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.266357 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5602a7fe-504e-4e6e-83ed-0dd458ef496d-config\") pod \"dnsmasq-dns-675f4bcbfc-mp8fx\" (UID: \"5602a7fe-504e-4e6e-83ed-0dd458ef496d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mp8fx" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.267454 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5602a7fe-504e-4e6e-83ed-0dd458ef496d-config\") pod \"dnsmasq-dns-675f4bcbfc-mp8fx\" (UID: \"5602a7fe-504e-4e6e-83ed-0dd458ef496d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mp8fx" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.267807 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8360f40b-d24b-42ab-8240-94ad3970531d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-tw4sh\" (UID: \"8360f40b-d24b-42ab-8240-94ad3970531d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tw4sh" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.268351 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8360f40b-d24b-42ab-8240-94ad3970531d-config\") pod \"dnsmasq-dns-78dd6ddcc-tw4sh\" (UID: \"8360f40b-d24b-42ab-8240-94ad3970531d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tw4sh" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.286966 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57p2w\" (UniqueName: \"kubernetes.io/projected/5602a7fe-504e-4e6e-83ed-0dd458ef496d-kube-api-access-57p2w\") pod \"dnsmasq-dns-675f4bcbfc-mp8fx\" (UID: \"5602a7fe-504e-4e6e-83ed-0dd458ef496d\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mp8fx" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.296669 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xq6m\" (UniqueName: \"kubernetes.io/projected/8360f40b-d24b-42ab-8240-94ad3970531d-kube-api-access-6xq6m\") pod \"dnsmasq-dns-78dd6ddcc-tw4sh\" (UID: \"8360f40b-d24b-42ab-8240-94ad3970531d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-tw4sh" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.303315 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mp8fx" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.380778 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-tw4sh" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.557581 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mp8fx"] Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.587080 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-fcpqs"] Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.588276 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-fcpqs" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.596515 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-fcpqs"] Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.772763 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx6jg\" (UniqueName: \"kubernetes.io/projected/d82f18b5-6415-4553-9daa-371807dd012b-kube-api-access-qx6jg\") pod \"dnsmasq-dns-5ccc8479f9-fcpqs\" (UID: \"d82f18b5-6415-4553-9daa-371807dd012b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fcpqs" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.773120 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d82f18b5-6415-4553-9daa-371807dd012b-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-fcpqs\" (UID: \"d82f18b5-6415-4553-9daa-371807dd012b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fcpqs" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.773195 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d82f18b5-6415-4553-9daa-371807dd012b-config\") pod \"dnsmasq-dns-5ccc8479f9-fcpqs\" (UID: \"d82f18b5-6415-4553-9daa-371807dd012b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fcpqs" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.834744 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mp8fx"] Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.874086 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d82f18b5-6415-4553-9daa-371807dd012b-config\") pod \"dnsmasq-dns-5ccc8479f9-fcpqs\" (UID: \"d82f18b5-6415-4553-9daa-371807dd012b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fcpqs" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.874184 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx6jg\" (UniqueName: \"kubernetes.io/projected/d82f18b5-6415-4553-9daa-371807dd012b-kube-api-access-qx6jg\") pod \"dnsmasq-dns-5ccc8479f9-fcpqs\" (UID: \"d82f18b5-6415-4553-9daa-371807dd012b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fcpqs" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.874313 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d82f18b5-6415-4553-9daa-371807dd012b-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-fcpqs\" (UID: \"d82f18b5-6415-4553-9daa-371807dd012b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fcpqs" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.875501 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d82f18b5-6415-4553-9daa-371807dd012b-config\") pod \"dnsmasq-dns-5ccc8479f9-fcpqs\" (UID: \"d82f18b5-6415-4553-9daa-371807dd012b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fcpqs" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.877634 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d82f18b5-6415-4553-9daa-371807dd012b-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-fcpqs\" (UID: \"d82f18b5-6415-4553-9daa-371807dd012b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fcpqs" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.891643 
4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx6jg\" (UniqueName: \"kubernetes.io/projected/d82f18b5-6415-4553-9daa-371807dd012b-kube-api-access-qx6jg\") pod \"dnsmasq-dns-5ccc8479f9-fcpqs\" (UID: \"d82f18b5-6415-4553-9daa-371807dd012b\") " pod="openstack/dnsmasq-dns-5ccc8479f9-fcpqs" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.912458 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-fcpqs" Jan 08 23:36:01 crc kubenswrapper[4945]: I0108 23:36:01.974393 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tw4sh"] Jan 08 23:36:01 crc kubenswrapper[4945]: W0108 23:36:01.996119 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8360f40b_d24b_42ab_8240_94ad3970531d.slice/crio-257ff73a69d3a13bf645b3ff53bba02ebaaca1d5047587c600fbf907e99b7270 WatchSource:0}: Error finding container 257ff73a69d3a13bf645b3ff53bba02ebaaca1d5047587c600fbf907e99b7270: Status 404 returned error can't find the container with id 257ff73a69d3a13bf645b3ff53bba02ebaaca1d5047587c600fbf907e99b7270 Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.030905 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-fcpqs"] Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.066421 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fnsnn"] Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.067496 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fnsnn" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.081450 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-mp8fx" event={"ID":"5602a7fe-504e-4e6e-83ed-0dd458ef496d","Type":"ContainerStarted","Data":"37af61e5faab984052fac53febefcec2a8c1e3d74db78b20f1573325b4b2e306"} Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.094288 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-tw4sh" event={"ID":"8360f40b-d24b-42ab-8240-94ad3970531d","Type":"ContainerStarted","Data":"257ff73a69d3a13bf645b3ff53bba02ebaaca1d5047587c600fbf907e99b7270"} Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.100217 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fnsnn"] Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.178822 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krb47\" (UniqueName: \"kubernetes.io/projected/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-kube-api-access-krb47\") pod \"dnsmasq-dns-57d769cc4f-fnsnn\" (UID: \"a547696a-ed3c-4a1f-a1b5-1982e6fa62cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-fnsnn" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.179286 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fnsnn\" (UID: \"a547696a-ed3c-4a1f-a1b5-1982e6fa62cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-fnsnn" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.179318 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-config\") pod \"dnsmasq-dns-57d769cc4f-fnsnn\" (UID: \"a547696a-ed3c-4a1f-a1b5-1982e6fa62cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-fnsnn" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.280911 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krb47\" (UniqueName: \"kubernetes.io/projected/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-kube-api-access-krb47\") pod \"dnsmasq-dns-57d769cc4f-fnsnn\" (UID: \"a547696a-ed3c-4a1f-a1b5-1982e6fa62cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-fnsnn" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.280979 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fnsnn\" (UID: \"a547696a-ed3c-4a1f-a1b5-1982e6fa62cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-fnsnn" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.281016 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-config\") pod \"dnsmasq-dns-57d769cc4f-fnsnn\" (UID: \"a547696a-ed3c-4a1f-a1b5-1982e6fa62cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-fnsnn" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.282130 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fnsnn\" (UID: \"a547696a-ed3c-4a1f-a1b5-1982e6fa62cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-fnsnn" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.282188 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-config\") pod \"dnsmasq-dns-57d769cc4f-fnsnn\" (UID: \"a547696a-ed3c-4a1f-a1b5-1982e6fa62cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-fnsnn" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.301621 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krb47\" (UniqueName: \"kubernetes.io/projected/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-kube-api-access-krb47\") pod \"dnsmasq-dns-57d769cc4f-fnsnn\" (UID: \"a547696a-ed3c-4a1f-a1b5-1982e6fa62cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-fnsnn" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.389285 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fnsnn" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.410470 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-fcpqs"] Jan 08 23:36:02 crc kubenswrapper[4945]: W0108 23:36:02.418577 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd82f18b5_6415_4553_9daa_371807dd012b.slice/crio-49c1d82d5a9e35e899770f03c842b48b77f5589c2512f6417c2a084405b951d4 WatchSource:0}: Error finding container 49c1d82d5a9e35e899770f03c842b48b77f5589c2512f6417c2a084405b951d4: Status 404 returned error can't find the container with id 49c1d82d5a9e35e899770f03c842b48b77f5589c2512f6417c2a084405b951d4 Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.625477 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fnsnn"] Jan 08 23:36:02 crc kubenswrapper[4945]: W0108 23:36:02.628811 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda547696a_ed3c_4a1f_a1b5_1982e6fa62cb.slice/crio-132a552815ee08b75a3698ebd9f39aecbedfbceb7b565753b9f648822f42568a WatchSource:0}: Error finding container 132a552815ee08b75a3698ebd9f39aecbedfbceb7b565753b9f648822f42568a: Status 404 returned error can't find the container with id 132a552815ee08b75a3698ebd9f39aecbedfbceb7b565753b9f648822f42568a Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.717803 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.720660 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.723549 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.723717 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.723557 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.723871 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.724100 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.724233 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-sd7d9" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.726802 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.729674 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.890581 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.890860 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.890883 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.890916 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.890940 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.890955 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.891115 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.891136 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.891155 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.891174 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.891195 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrs6k\" (UniqueName: \"kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-kube-api-access-vrs6k\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.992976 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrs6k\" (UniqueName: \"kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-kube-api-access-vrs6k\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.993071 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.993107 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.996659 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.996720 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.996753 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.996775 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.996847 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc 
kubenswrapper[4945]: I0108 23:36:02.996874 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.996896 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.996916 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.997596 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.998818 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.999745 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.993788 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:02 crc kubenswrapper[4945]: I0108 23:36:02.996599 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.000143 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.000622 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.002864 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.004144 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.012748 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.023654 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrs6k\" (UniqueName: \"kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-kube-api-access-vrs6k\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.030812 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") " pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.049853 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.127152 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fnsnn" event={"ID":"a547696a-ed3c-4a1f-a1b5-1982e6fa62cb","Type":"ContainerStarted","Data":"132a552815ee08b75a3698ebd9f39aecbedfbceb7b565753b9f648822f42568a"} Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.131571 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-fcpqs" event={"ID":"d82f18b5-6415-4553-9daa-371807dd012b","Type":"ContainerStarted","Data":"49c1d82d5a9e35e899770f03c842b48b77f5589c2512f6417c2a084405b951d4"} Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.210777 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.212420 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.214267 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.215920 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.216124 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.216247 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.216387 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-l84gd" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.218226 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.226043 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.234324 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.301206 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2btbg\" (UniqueName: \"kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-kube-api-access-2btbg\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.301416 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.301442 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.301465 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71eb40d2-e481-445d-99ea-948b918b862d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.301486 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.301515 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.301542 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.301562 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.301583 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71eb40d2-e481-445d-99ea-948b918b862d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.301613 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-config-data\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.301639 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.403713 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.403775 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.403799 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.403820 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71eb40d2-e481-445d-99ea-948b918b862d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " 
pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.403864 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-config-data\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.403900 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.403948 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2btbg\" (UniqueName: \"kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-kube-api-access-2btbg\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.404009 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.404032 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.404057 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71eb40d2-e481-445d-99ea-948b918b862d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.404083 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.404552 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.404875 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.405074 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-config-data\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.405794 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.406389 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.406460 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.425067 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.430930 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71eb40d2-e481-445d-99ea-948b918b862d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.438591 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71eb40d2-e481-445d-99ea-948b918b862d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.441345 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2btbg\" (UniqueName: \"kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-kube-api-access-2btbg\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.445569 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.465119 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") " pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.564399 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 08 23:36:03 crc kubenswrapper[4945]: I0108 23:36:03.989699 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 08 23:36:04 crc kubenswrapper[4945]: W0108 23:36:04.006919 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode920b84a_bd1b_4649_9cc0_e3b239d6a5b9.slice/crio-cc2726a84e1996e305b3da7f7f3df5215be6dfd07a4a76b111658bbf2daaf6d2 WatchSource:0}: Error finding container cc2726a84e1996e305b3da7f7f3df5215be6dfd07a4a76b111658bbf2daaf6d2: Status 404 returned error can't find the container with id cc2726a84e1996e305b3da7f7f3df5215be6dfd07a4a76b111658bbf2daaf6d2 Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.152068 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9","Type":"ContainerStarted","Data":"cc2726a84e1996e305b3da7f7f3df5215be6dfd07a4a76b111658bbf2daaf6d2"} Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.285331 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.566579 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.568025 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.569986 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-4tbhl" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.570965 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.571124 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.577406 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.578851 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.582252 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.625248 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-kolla-config\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.625306 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/17371a82-14e3-4830-b99f-a2b46b4f4366-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.625326 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxqmx\" (UniqueName: 
\"kubernetes.io/projected/17371a82-14e3-4830-b99f-a2b46b4f4366-kube-api-access-jxqmx\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.625345 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-config-data-default\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.625381 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17371a82-14e3-4830-b99f-a2b46b4f4366-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.625397 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.625410 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-operator-scripts\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.625457 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/17371a82-14e3-4830-b99f-a2b46b4f4366-config-data-generated\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.727009 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/17371a82-14e3-4830-b99f-a2b46b4f4366-config-data-generated\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.727085 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-kolla-config\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.727117 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/17371a82-14e3-4830-b99f-a2b46b4f4366-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.727142 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxqmx\" (UniqueName: \"kubernetes.io/projected/17371a82-14e3-4830-b99f-a2b46b4f4366-kube-api-access-jxqmx\") pod 
\"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.727162 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-config-data-default\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.727551 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.727570 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17371a82-14e3-4830-b99f-a2b46b4f4366-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.727865 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.728032 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-kolla-config\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.728273 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-config-data-default\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.728958 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-operator-scripts\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.727664 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/17371a82-14e3-4830-b99f-a2b46b4f4366-config-data-generated\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.727586 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-operator-scripts\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.742067 4945 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17371a82-14e3-4830-b99f-a2b46b4f4366-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.744364 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/17371a82-14e3-4830-b99f-a2b46b4f4366-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.744850 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxqmx\" (UniqueName: \"kubernetes.io/projected/17371a82-14e3-4830-b99f-a2b46b4f4366-kube-api-access-jxqmx\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.747098 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") " pod="openstack/openstack-galera-0" Jan 08 23:36:04 crc kubenswrapper[4945]: I0108 23:36:04.903838 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 08 23:36:05 crc kubenswrapper[4945]: I0108 23:36:05.173509 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71eb40d2-e481-445d-99ea-948b918b862d","Type":"ContainerStarted","Data":"dda13f6fb165777de035f98f50fb1f342b1baf36040a5f518c70e5e066670cce"} Jan 08 23:36:05 crc kubenswrapper[4945]: I0108 23:36:05.549336 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.200310 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"17371a82-14e3-4830-b99f-a2b46b4f4366","Type":"ContainerStarted","Data":"cf395830dc3cb44dafa10eb210f28826cfa35a8e63fa241421e4b199f3ea2047"} Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.368377 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.370225 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.377116 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-plbbs" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.377341 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.377844 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.386858 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.448216 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.449547 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.451745 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.456895 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.462831 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.463752 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9574582-49aa-48ec-8b43-bc55ed78a3d1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " pod="openstack/memcached-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.463797 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9574582-49aa-48ec-8b43-bc55ed78a3d1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " pod="openstack/memcached-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.463828 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7smc\" (UniqueName: \"kubernetes.io/projected/d9574582-49aa-48ec-8b43-bc55ed78a3d1-kube-api-access-m7smc\") pod \"memcached-0\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " pod="openstack/memcached-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.463858 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9574582-49aa-48ec-8b43-bc55ed78a3d1-kolla-config\") pod \"memcached-0\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " pod="openstack/memcached-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.463874 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d9574582-49aa-48ec-8b43-bc55ed78a3d1-config-data\") pod \"memcached-0\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " pod="openstack/memcached-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.462919 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-2vgjz" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.463455 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.572209 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4fe48ad-f532-4c60-b88b-95a894d0b519-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.572335 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4fe48ad-f532-4c60-b88b-95a894d0b519-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.572368 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.572393 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9574582-49aa-48ec-8b43-bc55ed78a3d1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " pod="openstack/memcached-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.572437 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9574582-49aa-48ec-8b43-bc55ed78a3d1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " pod="openstack/memcached-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.572486 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.572509 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.572540 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7smc\" (UniqueName: \"kubernetes.io/projected/d9574582-49aa-48ec-8b43-bc55ed78a3d1-kube-api-access-m7smc\") pod \"memcached-0\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " pod="openstack/memcached-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.572562 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.572580 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9574582-49aa-48ec-8b43-bc55ed78a3d1-kolla-config\") pod \"memcached-0\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " pod="openstack/memcached-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.572610 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d9574582-49aa-48ec-8b43-bc55ed78a3d1-config-data\") pod \"memcached-0\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " pod="openstack/memcached-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.572645 4945 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4fe48ad-f532-4c60-b88b-95a894d0b519-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.572680 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86rx5\" (UniqueName: \"kubernetes.io/projected/d4fe48ad-f532-4c60-b88b-95a894d0b519-kube-api-access-86rx5\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.575398 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d9574582-49aa-48ec-8b43-bc55ed78a3d1-config-data\") pod \"memcached-0\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " pod="openstack/memcached-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.575826 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9574582-49aa-48ec-8b43-bc55ed78a3d1-kolla-config\") pod \"memcached-0\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " pod="openstack/memcached-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.587637 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9574582-49aa-48ec-8b43-bc55ed78a3d1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " pod="openstack/memcached-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.589628 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9574582-49aa-48ec-8b43-bc55ed78a3d1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " pod="openstack/memcached-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.598231 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7smc\" (UniqueName: \"kubernetes.io/projected/d9574582-49aa-48ec-8b43-bc55ed78a3d1-kube-api-access-m7smc\") pod \"memcached-0\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " pod="openstack/memcached-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.674844 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.674891 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.674913 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: 
\"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.675061 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4fe48ad-f532-4c60-b88b-95a894d0b519-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.675094 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86rx5\" (UniqueName: \"kubernetes.io/projected/d4fe48ad-f532-4c60-b88b-95a894d0b519-kube-api-access-86rx5\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.675126 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4fe48ad-f532-4c60-b88b-95a894d0b519-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.675183 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4fe48ad-f532-4c60-b88b-95a894d0b519-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.675214 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.676847 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.677587 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.677659 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.677840 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4fe48ad-f532-4c60-b88b-95a894d0b519-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " 
pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.678022 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.680225 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4fe48ad-f532-4c60-b88b-95a894d0b519-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.707420 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.711570 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.741459 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4fe48ad-f532-4c60-b88b-95a894d0b519-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.741618 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86rx5\" (UniqueName: \"kubernetes.io/projected/d4fe48ad-f532-4c60-b88b-95a894d0b519-kube-api-access-86rx5\") pod \"openstack-cell1-galera-0\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:06 crc kubenswrapper[4945]: I0108 23:36:06.776348 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:07 crc kubenswrapper[4945]: I0108 23:36:07.375358 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 08 23:36:07 crc kubenswrapper[4945]: W0108 23:36:07.421347 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4fe48ad_f532_4c60_b88b_95a894d0b519.slice/crio-52028ca7c043619fc7c81c5925741eabcc44472143fdc093e2303c4af094b997 WatchSource:0}: Error finding container 52028ca7c043619fc7c81c5925741eabcc44472143fdc093e2303c4af094b997: Status 404 returned error can't find the container with id 52028ca7c043619fc7c81c5925741eabcc44472143fdc093e2303c4af094b997 Jan 08 23:36:07 crc kubenswrapper[4945]: I0108 23:36:07.428326 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 08 23:36:08 crc kubenswrapper[4945]: I0108 23:36:08.247426 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d9574582-49aa-48ec-8b43-bc55ed78a3d1","Type":"ContainerStarted","Data":"5b60050490d930ca0f6510b2e6d366465aac37eb45b0fe3ef232a7b56a143932"} Jan 08 23:36:08 crc kubenswrapper[4945]: I0108 23:36:08.250347 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4fe48ad-f532-4c60-b88b-95a894d0b519","Type":"ContainerStarted","Data":"52028ca7c043619fc7c81c5925741eabcc44472143fdc093e2303c4af094b997"} Jan 08 23:36:08 crc kubenswrapper[4945]: I0108 23:36:08.452635 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 08 23:36:08 crc kubenswrapper[4945]: I0108 23:36:08.454067 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 08 23:36:08 crc kubenswrapper[4945]: I0108 23:36:08.458650 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-sc649" Jan 08 23:36:08 crc kubenswrapper[4945]: I0108 23:36:08.470238 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 08 23:36:08 crc kubenswrapper[4945]: I0108 23:36:08.521160 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj9lj\" (UniqueName: \"kubernetes.io/projected/84d4963b-6a9f-4053-8944-d1c7e61256b9-kube-api-access-nj9lj\") pod \"kube-state-metrics-0\" (UID: \"84d4963b-6a9f-4053-8944-d1c7e61256b9\") " pod="openstack/kube-state-metrics-0" Jan 08 23:36:08 crc kubenswrapper[4945]: I0108 23:36:08.622711 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj9lj\" (UniqueName: \"kubernetes.io/projected/84d4963b-6a9f-4053-8944-d1c7e61256b9-kube-api-access-nj9lj\") pod \"kube-state-metrics-0\" (UID: \"84d4963b-6a9f-4053-8944-d1c7e61256b9\") " pod="openstack/kube-state-metrics-0" Jan 08 23:36:08 crc kubenswrapper[4945]: I0108 23:36:08.672317 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj9lj\" (UniqueName: \"kubernetes.io/projected/84d4963b-6a9f-4053-8944-d1c7e61256b9-kube-api-access-nj9lj\") pod \"kube-state-metrics-0\" (UID: \"84d4963b-6a9f-4053-8944-d1c7e61256b9\") " pod="openstack/kube-state-metrics-0" Jan 08 23:36:08 crc kubenswrapper[4945]: I0108 23:36:08.785977 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 08 23:36:09 crc kubenswrapper[4945]: I0108 23:36:09.527122 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.041955 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-fr87r"] Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.043276 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.046476 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6qxkk" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.048092 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.048319 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.050656 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fr87r"] Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.093401 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c4f1760-d296-46a1-9ec8-cb64e543897c-scripts\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.093506 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4f1760-d296-46a1-9ec8-cb64e543897c-combined-ca-bundle\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.093549 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86w89\" (UniqueName: \"kubernetes.io/projected/6c4f1760-d296-46a1-9ec8-cb64e543897c-kube-api-access-86w89\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.093585 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-run-ovn\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.093683 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c4f1760-d296-46a1-9ec8-cb64e543897c-ovn-controller-tls-certs\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.094668 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-run\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " 
pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.094900 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-log-ovn\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.134255 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-hfhkg"] Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.135693 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.146370 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hfhkg"] Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.199829 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c4f1760-d296-46a1-9ec8-cb64e543897c-scripts\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.199890 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-log\") pod \"ovn-controller-ovs-hfhkg\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.199931 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4f1760-d296-46a1-9ec8-cb64e543897c-combined-ca-bundle\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.199957 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-lib\") pod \"ovn-controller-ovs-hfhkg\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.200079 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86w89\" (UniqueName: \"kubernetes.io/projected/6c4f1760-d296-46a1-9ec8-cb64e543897c-kube-api-access-86w89\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.200110 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt99d\" (UniqueName: \"kubernetes.io/projected/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-kube-api-access-zt99d\") pod \"ovn-controller-ovs-hfhkg\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.200136 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-etc-ovs\") pod \"ovn-controller-ovs-hfhkg\" (UID: 
\"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.200154 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-run-ovn\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.200209 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c4f1760-d296-46a1-9ec8-cb64e543897c-ovn-controller-tls-certs\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.200235 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-run\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.200266 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-log-ovn\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.200289 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-scripts\") pod \"ovn-controller-ovs-hfhkg\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.200334 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-run\") pod \"ovn-controller-ovs-hfhkg\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.200892 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-run-ovn\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.200968 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-run\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.201025 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-log-ovn\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.202445 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/6c4f1760-d296-46a1-9ec8-cb64e543897c-scripts\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.208198 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c4f1760-d296-46a1-9ec8-cb64e543897c-ovn-controller-tls-certs\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.208197 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4f1760-d296-46a1-9ec8-cb64e543897c-combined-ca-bundle\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.219767 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86w89\" (UniqueName: \"kubernetes.io/projected/6c4f1760-d296-46a1-9ec8-cb64e543897c-kube-api-access-86w89\") pod \"ovn-controller-fr87r\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") " pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.301621 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt99d\" (UniqueName: \"kubernetes.io/projected/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-kube-api-access-zt99d\") pod \"ovn-controller-ovs-hfhkg\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.301888 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-etc-ovs\") pod \"ovn-controller-ovs-hfhkg\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.301963 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-scripts\") pod \"ovn-controller-ovs-hfhkg\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.302018 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-run\") pod \"ovn-controller-ovs-hfhkg\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.302043 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-log\") pod \"ovn-controller-ovs-hfhkg\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.302068 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-lib\") pod \"ovn-controller-ovs-hfhkg\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " 
pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.302112 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-run\") pod \"ovn-controller-ovs-hfhkg\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.302191 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-etc-ovs\") pod \"ovn-controller-ovs-hfhkg\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.302243 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-lib\") pod \"ovn-controller-ovs-hfhkg\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.302295 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-log\") pod \"ovn-controller-ovs-hfhkg\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.304866 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-scripts\") pod \"ovn-controller-ovs-hfhkg\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.319140 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt99d\" (UniqueName: \"kubernetes.io/projected/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-kube-api-access-zt99d\") pod \"ovn-controller-ovs-hfhkg\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") " pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.387238 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fr87r" Jan 08 23:36:12 crc kubenswrapper[4945]: I0108 23:36:12.457482 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.288198 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.289540 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.291907 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-qt8jm" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.292098 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.292148 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.292285 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.292406 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.296703 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.414803 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq6mb\" (UniqueName: \"kubernetes.io/projected/36817bdb-e28c-495c-9e26-005e53f3cc2a-kube-api-access-gq6mb\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.414854 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36817bdb-e28c-495c-9e26-005e53f3cc2a-config\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.414887 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/36817bdb-e28c-495c-9e26-005e53f3cc2a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.414916 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.414946 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.414966 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.415012 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/36817bdb-e28c-495c-9e26-005e53f3cc2a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.415045 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.517349 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/36817bdb-e28c-495c-9e26-005e53f3cc2a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.518279 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.518438 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.518551 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.518682 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36817bdb-e28c-495c-9e26-005e53f3cc2a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.518831 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.521486 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq6mb\" (UniqueName: \"kubernetes.io/projected/36817bdb-e28c-495c-9e26-005e53f3cc2a-kube-api-access-gq6mb\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.521839 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc 
kubenswrapper[4945]: I0108 23:36:13.521925 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/36817bdb-e28c-495c-9e26-005e53f3cc2a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.521851 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36817bdb-e28c-495c-9e26-005e53f3cc2a-config\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.523658 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36817bdb-e28c-495c-9e26-005e53f3cc2a-config\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.524974 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36817bdb-e28c-495c-9e26-005e53f3cc2a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.532834 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.539926 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq6mb\" (UniqueName: \"kubernetes.io/projected/36817bdb-e28c-495c-9e26-005e53f3cc2a-kube-api-access-gq6mb\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.540552 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.542690 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.543454 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:13 crc kubenswrapper[4945]: I0108 23:36:13.618255 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.759913 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.761914 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.764858 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-sn4zk" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.765174 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.765453 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.765678 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.771939 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.869028 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.869097 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90046452-437b-4666-83a8-e8ee09bfc932-config\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.869280 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90046452-437b-4666-83a8-e8ee09bfc932-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.869358 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.869417 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/90046452-437b-4666-83a8-e8ee09bfc932-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.869445 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57mq2\" (UniqueName: \"kubernetes.io/projected/90046452-437b-4666-83a8-e8ee09bfc932-kube-api-access-57mq2\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 
23:36:15.869577 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.869602 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.971113 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.971164 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.971246 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.971302 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90046452-437b-4666-83a8-e8ee09bfc932-config\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.971344 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90046452-437b-4666-83a8-e8ee09bfc932-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.971372 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.971401 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/90046452-437b-4666-83a8-e8ee09bfc932-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.971422 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57mq2\" (UniqueName: 
\"kubernetes.io/projected/90046452-437b-4666-83a8-e8ee09bfc932-kube-api-access-57mq2\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.972168 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.972972 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90046452-437b-4666-83a8-e8ee09bfc932-config\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.973130 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90046452-437b-4666-83a8-e8ee09bfc932-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.973357 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/90046452-437b-4666-83a8-e8ee09bfc932-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.977223 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.983850 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.983953 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:15 crc kubenswrapper[4945]: I0108 23:36:15.992174 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57mq2\" (UniqueName: \"kubernetes.io/projected/90046452-437b-4666-83a8-e8ee09bfc932-kube-api-access-57mq2\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:16 crc kubenswrapper[4945]: I0108 23:36:16.004760 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:16 crc kubenswrapper[4945]: I0108 23:36:16.087646 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:16 crc kubenswrapper[4945]: W0108 23:36:16.533220 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84d4963b_6a9f_4053_8944_d1c7e61256b9.slice/crio-3f5c065693a6416884e3a2e2438b81e97773cd1dc46ebf24c1dca8b02a731a27 WatchSource:0}: Error finding container 3f5c065693a6416884e3a2e2438b81e97773cd1dc46ebf24c1dca8b02a731a27: Status 404 returned error can't find the container with id 3f5c065693a6416884e3a2e2438b81e97773cd1dc46ebf24c1dca8b02a731a27 Jan 08 23:36:17 crc kubenswrapper[4945]: I0108 23:36:17.366884 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"84d4963b-6a9f-4053-8944-d1c7e61256b9","Type":"ContainerStarted","Data":"3f5c065693a6416884e3a2e2438b81e97773cd1dc46ebf24c1dca8b02a731a27"} Jan 08 23:36:25 crc kubenswrapper[4945]: E0108 23:36:25.747591 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 08 23:36:25 crc kubenswrapper[4945]: E0108 23:36:25.748284 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2btbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(71eb40d2-e481-445d-99ea-948b918b862d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:36:25 crc kubenswrapper[4945]: E0108 23:36:25.749535 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="71eb40d2-e481-445d-99ea-948b918b862d" Jan 08 23:36:25 crc kubenswrapper[4945]: E0108 23:36:25.759144 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 08 23:36:25 crc kubenswrapper[4945]: E0108 23:36:25.759412 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrs6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(e920b84a-bd1b-4649-9cc0-e3b239d6a5b9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:36:25 crc kubenswrapper[4945]: E0108 23:36:25.760631 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" Jan 08 23:36:26 crc kubenswrapper[4945]: E0108 23:36:26.430758 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" Jan 08 23:36:26 crc kubenswrapper[4945]: E0108 23:36:26.432500 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="71eb40d2-e481-445d-99ea-948b918b862d" Jan 08 23:36:27 crc kubenswrapper[4945]: E0108 23:36:27.406576 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 08 23:36:27 crc kubenswrapper[4945]: E0108 23:36:27.407064 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jxqmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(17371a82-14e3-4830-b99f-a2b46b4f4366): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:36:27 crc kubenswrapper[4945]: E0108 23:36:27.408240 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="17371a82-14e3-4830-b99f-a2b46b4f4366" Jan 08 23:36:27 crc kubenswrapper[4945]: E0108 23:36:27.436595 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="17371a82-14e3-4830-b99f-a2b46b4f4366" Jan 08 23:36:27 crc kubenswrapper[4945]: E0108 23:36:27.996817 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 08 23:36:27 crc kubenswrapper[4945]: E0108 23:36:27.997048 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n55ch55h549hcbh55ch584hf5h568h649hf5h68h67bh568h56fh65bh684h589h547h8ch64bh5f9h97h658h685h586hdh64bh579h59hfbhcbh696q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7smc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(d9574582-49aa-48ec-8b43-bc55ed78a3d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:36:27 crc kubenswrapper[4945]: E0108 23:36:27.998245 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="d9574582-49aa-48ec-8b43-bc55ed78a3d1" Jan 08 23:36:28 crc kubenswrapper[4945]: E0108 23:36:28.443101 4945 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="d9574582-49aa-48ec-8b43-bc55ed78a3d1" Jan 08 23:36:32 crc kubenswrapper[4945]: E0108 23:36:32.201272 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 08 23:36:32 crc kubenswrapper[4945]: E0108 23:36:32.201630 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86rx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(d4fe48ad-f532-4c60-b88b-95a894d0b519): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:36:32 crc kubenswrapper[4945]: E0108 23:36:32.202826 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="d4fe48ad-f532-4c60-b88b-95a894d0b519" Jan 08 23:36:32 crc kubenswrapper[4945]: E0108 23:36:32.484765 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="d4fe48ad-f532-4c60-b88b-95a894d0b519" Jan 08 23:36:33 crc kubenswrapper[4945]: E0108 23:36:33.003538 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 08 23:36:33 crc kubenswrapper[4945]: E0108 23:36:33.003699 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qx6jg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-fcpqs_openstack(d82f18b5-6415-4553-9daa-371807dd012b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:36:33 crc kubenswrapper[4945]: E0108 23:36:33.004868 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-fcpqs" podUID="d82f18b5-6415-4553-9daa-371807dd012b" Jan 08 23:36:33 crc kubenswrapper[4945]: E0108 23:36:33.056037 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 08 23:36:33 crc kubenswrapper[4945]: E0108 23:36:33.056233 4945 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xq6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-tw4sh_openstack(8360f40b-d24b-42ab-8240-94ad3970531d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:36:33 crc kubenswrapper[4945]: E0108 23:36:33.056638 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 08 23:36:33 crc kubenswrapper[4945]: E0108 23:36:33.056784 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57p2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-mp8fx_openstack(5602a7fe-504e-4e6e-83ed-0dd458ef496d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:36:33 crc kubenswrapper[4945]: E0108 23:36:33.057863 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-tw4sh" podUID="8360f40b-d24b-42ab-8240-94ad3970531d" Jan 08 23:36:33 crc kubenswrapper[4945]: E0108 23:36:33.057937 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-mp8fx" podUID="5602a7fe-504e-4e6e-83ed-0dd458ef496d" Jan 08 23:36:33 crc kubenswrapper[4945]: E0108 23:36:33.127844 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 08 23:36:33 crc kubenswrapper[4945]: E0108 23:36:33.128002 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-krb47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-fnsnn_openstack(a547696a-ed3c-4a1f-a1b5-1982e6fa62cb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:36:33 crc kubenswrapper[4945]: E0108 23:36:33.129123 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-fnsnn" podUID="a547696a-ed3c-4a1f-a1b5-1982e6fa62cb" Jan 08 23:36:33 crc kubenswrapper[4945]: E0108 23:36:33.494934 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-78dd6ddcc-tw4sh" podUID="8360f40b-d24b-42ab-8240-94ad3970531d" Jan 08 23:36:33 crc kubenswrapper[4945]: E0108 23:36:33.500476 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-fnsnn" podUID="a547696a-ed3c-4a1f-a1b5-1982e6fa62cb" Jan 08 23:36:33 crc kubenswrapper[4945]: I0108 23:36:33.591571 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 08 23:36:33 crc kubenswrapper[4945]: I0108 23:36:33.609256 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fr87r"] Jan 08 23:36:33 crc kubenswrapper[4945]: I0108 23:36:33.677664 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-hfhkg"] Jan 08 23:36:33 crc kubenswrapper[4945]: I0108 
23:36:33.762613 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 08 23:36:34 crc kubenswrapper[4945]: W0108 23:36:34.014573 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36817bdb_e28c_495c_9e26_005e53f3cc2a.slice/crio-1d2de1945f394afc7f8c64a13b440db07aa5c4c42b42236eadf3d5356331d1ad WatchSource:0}: Error finding container 1d2de1945f394afc7f8c64a13b440db07aa5c4c42b42236eadf3d5356331d1ad: Status 404 returned error can't find the container with id 1d2de1945f394afc7f8c64a13b440db07aa5c4c42b42236eadf3d5356331d1ad Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.098102 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-fcpqs" Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.104689 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mp8fx" Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.185596 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57p2w\" (UniqueName: \"kubernetes.io/projected/5602a7fe-504e-4e6e-83ed-0dd458ef496d-kube-api-access-57p2w\") pod \"5602a7fe-504e-4e6e-83ed-0dd458ef496d\" (UID: \"5602a7fe-504e-4e6e-83ed-0dd458ef496d\") " Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.185654 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d82f18b5-6415-4553-9daa-371807dd012b-dns-svc\") pod \"d82f18b5-6415-4553-9daa-371807dd012b\" (UID: \"d82f18b5-6415-4553-9daa-371807dd012b\") " Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.185731 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d82f18b5-6415-4553-9daa-371807dd012b-config\") pod \"d82f18b5-6415-4553-9daa-371807dd012b\" (UID: \"d82f18b5-6415-4553-9daa-371807dd012b\") " Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.185818 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx6jg\" (UniqueName: \"kubernetes.io/projected/d82f18b5-6415-4553-9daa-371807dd012b-kube-api-access-qx6jg\") pod \"d82f18b5-6415-4553-9daa-371807dd012b\" (UID: \"d82f18b5-6415-4553-9daa-371807dd012b\") " Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.185847 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5602a7fe-504e-4e6e-83ed-0dd458ef496d-config\") pod \"5602a7fe-504e-4e6e-83ed-0dd458ef496d\" (UID: \"5602a7fe-504e-4e6e-83ed-0dd458ef496d\") " Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.186419 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d82f18b5-6415-4553-9daa-371807dd012b-config" (OuterVolumeSpecName: "config") pod "d82f18b5-6415-4553-9daa-371807dd012b" (UID: "d82f18b5-6415-4553-9daa-371807dd012b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.186465 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5602a7fe-504e-4e6e-83ed-0dd458ef496d-config" (OuterVolumeSpecName: "config") pod "5602a7fe-504e-4e6e-83ed-0dd458ef496d" (UID: "5602a7fe-504e-4e6e-83ed-0dd458ef496d"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.186837 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d82f18b5-6415-4553-9daa-371807dd012b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d82f18b5-6415-4553-9daa-371807dd012b" (UID: "d82f18b5-6415-4553-9daa-371807dd012b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.191433 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d82f18b5-6415-4553-9daa-371807dd012b-kube-api-access-qx6jg" (OuterVolumeSpecName: "kube-api-access-qx6jg") pod "d82f18b5-6415-4553-9daa-371807dd012b" (UID: "d82f18b5-6415-4553-9daa-371807dd012b"). InnerVolumeSpecName "kube-api-access-qx6jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.191472 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5602a7fe-504e-4e6e-83ed-0dd458ef496d-kube-api-access-57p2w" (OuterVolumeSpecName: "kube-api-access-57p2w") pod "5602a7fe-504e-4e6e-83ed-0dd458ef496d" (UID: "5602a7fe-504e-4e6e-83ed-0dd458ef496d"). InnerVolumeSpecName "kube-api-access-57p2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.287647 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx6jg\" (UniqueName: \"kubernetes.io/projected/d82f18b5-6415-4553-9daa-371807dd012b-kube-api-access-qx6jg\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.287685 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5602a7fe-504e-4e6e-83ed-0dd458ef496d-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.287697 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57p2w\" (UniqueName: \"kubernetes.io/projected/5602a7fe-504e-4e6e-83ed-0dd458ef496d-kube-api-access-57p2w\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.287707 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d82f18b5-6415-4553-9daa-371807dd012b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.287716 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d82f18b5-6415-4553-9daa-371807dd012b-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.500772 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"36817bdb-e28c-495c-9e26-005e53f3cc2a","Type":"ContainerStarted","Data":"1d2de1945f394afc7f8c64a13b440db07aa5c4c42b42236eadf3d5356331d1ad"} Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.501848 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"90046452-437b-4666-83a8-e8ee09bfc932","Type":"ContainerStarted","Data":"7d43672c12a5f4688955b254c86537cba58ced58f27461f80fabfeed77024de5"} Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.502673 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hfhkg" 
event={"ID":"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f","Type":"ContainerStarted","Data":"bd80fc7604694796ec29ee56ba7f96a76e99b6653ef30dce24ec673318952c9c"} Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.503431 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-fcpqs" event={"ID":"d82f18b5-6415-4553-9daa-371807dd012b","Type":"ContainerDied","Data":"49c1d82d5a9e35e899770f03c842b48b77f5589c2512f6417c2a084405b951d4"} Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.503485 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-fcpqs" Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.504186 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fr87r" event={"ID":"6c4f1760-d296-46a1-9ec8-cb64e543897c","Type":"ContainerStarted","Data":"3a31c0530c0b7c42dc0de4c1f2432d9fbeb23b6869b93b080bd633d81423ac30"} Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.505368 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-mp8fx" event={"ID":"5602a7fe-504e-4e6e-83ed-0dd458ef496d","Type":"ContainerDied","Data":"37af61e5faab984052fac53febefcec2a8c1e3d74db78b20f1573325b4b2e306"} Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.505454 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mp8fx" Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.576168 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mp8fx"] Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.594192 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mp8fx"] Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.611116 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-fcpqs"] Jan 08 23:36:34 crc kubenswrapper[4945]: I0108 23:36:34.618095 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-fcpqs"] Jan 08 23:36:35 crc kubenswrapper[4945]: I0108 23:36:35.514603 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"84d4963b-6a9f-4053-8944-d1c7e61256b9","Type":"ContainerStarted","Data":"5aa019d5911886256f9870cf2956bb8d687d86a5b25cb9022b681160e0cca5f6"} Jan 08 23:36:35 crc kubenswrapper[4945]: I0108 23:36:35.515571 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 08 23:36:35 crc kubenswrapper[4945]: I0108 23:36:35.541094 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.103298458 podStartE2EDuration="27.541061559s" podCreationTimestamp="2026-01-08 23:36:08 +0000 UTC" firstStartedPulling="2026-01-08 23:36:16.536015879 +0000 UTC m=+1246.847174825" lastFinishedPulling="2026-01-08 23:36:34.97377898 +0000 UTC m=+1265.284937926" observedRunningTime="2026-01-08 23:36:35.533067877 +0000 UTC m=+1265.844226833" watchObservedRunningTime="2026-01-08 23:36:35.541061559 +0000 UTC m=+1265.852220505" Jan 08 23:36:36 crc kubenswrapper[4945]: I0108 23:36:36.010846 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5602a7fe-504e-4e6e-83ed-0dd458ef496d" path="/var/lib/kubelet/pods/5602a7fe-504e-4e6e-83ed-0dd458ef496d/volumes" Jan 08 23:36:36 crc kubenswrapper[4945]: I0108 23:36:36.011300 4945 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="d82f18b5-6415-4553-9daa-371807dd012b" path="/var/lib/kubelet/pods/d82f18b5-6415-4553-9daa-371807dd012b/volumes" Jan 08 23:36:38 crc kubenswrapper[4945]: I0108 23:36:38.552653 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"36817bdb-e28c-495c-9e26-005e53f3cc2a","Type":"ContainerStarted","Data":"324e6e48dab6c7d527635ca30c759a02e9ba3563bc130abb4ddbc3f4fe3ceb54"} Jan 08 23:36:38 crc kubenswrapper[4945]: I0108 23:36:38.555393 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"90046452-437b-4666-83a8-e8ee09bfc932","Type":"ContainerStarted","Data":"230e3a7208f60ae6d4c8e1d0b533c31b74f6940ce43a05272ebfb0601f8979a9"} Jan 08 23:36:38 crc kubenswrapper[4945]: I0108 23:36:38.557394 4945 generic.go:334] "Generic (PLEG): container finished" podID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerID="e899cbf27092607cae12a7dc93241e5ed129cce95626267ff8c2d829e7a8cddb" exitCode=0 Jan 08 23:36:38 crc kubenswrapper[4945]: I0108 23:36:38.557444 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hfhkg" event={"ID":"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f","Type":"ContainerDied","Data":"e899cbf27092607cae12a7dc93241e5ed129cce95626267ff8c2d829e7a8cddb"} Jan 08 23:36:38 crc kubenswrapper[4945]: I0108 23:36:38.559472 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fr87r" event={"ID":"6c4f1760-d296-46a1-9ec8-cb64e543897c","Type":"ContainerStarted","Data":"1b0f37807bb46059b745ff0659643ccbaa1cf4a8f20a7c025f2d1511d9fb08db"} Jan 08 23:36:38 crc kubenswrapper[4945]: I0108 23:36:38.559731 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-fr87r" Jan 08 23:36:38 crc kubenswrapper[4945]: I0108 23:36:38.605443 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-fr87r" podStartSLOduration=23.020580603 podStartE2EDuration="26.605423491s" podCreationTimestamp="2026-01-08 23:36:12 +0000 UTC" firstStartedPulling="2026-01-08 23:36:33.70657323 +0000 UTC m=+1264.017732176" lastFinishedPulling="2026-01-08 23:36:37.291416108 +0000 UTC m=+1267.602575064" observedRunningTime="2026-01-08 23:36:38.603362242 +0000 UTC m=+1268.914521188" watchObservedRunningTime="2026-01-08 23:36:38.605423491 +0000 UTC m=+1268.916582437" Jan 08 23:36:39 crc kubenswrapper[4945]: I0108 23:36:39.577923 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71eb40d2-e481-445d-99ea-948b918b862d","Type":"ContainerStarted","Data":"b84caeef5cc5ad10edc0c8450a4bb95aea44a01ae1bdb0470e03aacfe261b00d"} Jan 08 23:36:39 crc kubenswrapper[4945]: I0108 23:36:39.581678 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hfhkg" event={"ID":"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f","Type":"ContainerStarted","Data":"452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d"} Jan 08 23:36:39 crc kubenswrapper[4945]: I0108 23:36:39.581744 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hfhkg" event={"ID":"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f","Type":"ContainerStarted","Data":"9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8"} Jan 08 23:36:39 crc kubenswrapper[4945]: I0108 23:36:39.581846 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:39 crc 
kubenswrapper[4945]: I0108 23:36:39.637641 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-hfhkg" podStartSLOduration=24.291813499 podStartE2EDuration="27.637609272s" podCreationTimestamp="2026-01-08 23:36:12 +0000 UTC" firstStartedPulling="2026-01-08 23:36:33.926142738 +0000 UTC m=+1264.237301684" lastFinishedPulling="2026-01-08 23:36:37.271938511 +0000 UTC m=+1267.583097457" observedRunningTime="2026-01-08 23:36:39.627916739 +0000 UTC m=+1269.939075685" watchObservedRunningTime="2026-01-08 23:36:39.637609272 +0000 UTC m=+1269.948768218" Jan 08 23:36:40 crc kubenswrapper[4945]: I0108 23:36:40.590865 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9","Type":"ContainerStarted","Data":"c81f4cba79646c4284e071dbc05ea1b22c10137dd94b3016f75e77dd3cfb0060"} Jan 08 23:36:40 crc kubenswrapper[4945]: I0108 23:36:40.591063 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:36:41 crc kubenswrapper[4945]: I0108 23:36:41.599127 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"36817bdb-e28c-495c-9e26-005e53f3cc2a","Type":"ContainerStarted","Data":"29134cac38be275897153b9e527a487bd6dc85a0149bae9cb21fa2cca5dc21f1"} Jan 08 23:36:41 crc kubenswrapper[4945]: I0108 23:36:41.603523 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"90046452-437b-4666-83a8-e8ee09bfc932","Type":"ContainerStarted","Data":"d9317ebca4ad485773a2bb68895a59989eaf7ea993a34e6617668bb4a89d52c9"} Jan 08 23:36:41 crc kubenswrapper[4945]: I0108 23:36:41.625550 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.896606138 podStartE2EDuration="29.625526691s" podCreationTimestamp="2026-01-08 23:36:12 +0000 UTC" firstStartedPulling="2026-01-08 23:36:34.019293202 +0000 UTC m=+1264.330452148" lastFinishedPulling="2026-01-08 23:36:40.748213755 +0000 UTC m=+1271.059372701" observedRunningTime="2026-01-08 23:36:41.616579916 +0000 UTC m=+1271.927738872" watchObservedRunningTime="2026-01-08 23:36:41.625526691 +0000 UTC m=+1271.936685637" Jan 08 23:36:41 crc kubenswrapper[4945]: I0108 23:36:41.645064 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=20.594799759 podStartE2EDuration="27.645047519s" podCreationTimestamp="2026-01-08 23:36:14 +0000 UTC" firstStartedPulling="2026-01-08 23:36:33.706162241 +0000 UTC m=+1264.017321187" lastFinishedPulling="2026-01-08 23:36:40.756410001 +0000 UTC m=+1271.067568947" observedRunningTime="2026-01-08 23:36:41.638069382 +0000 UTC m=+1271.949228348" watchObservedRunningTime="2026-01-08 23:36:41.645047519 +0000 UTC m=+1271.956206465" Jan 08 23:36:43 crc kubenswrapper[4945]: I0108 23:36:43.088300 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:43 crc kubenswrapper[4945]: I0108 23:36:43.133776 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:43 crc kubenswrapper[4945]: I0108 23:36:43.619466 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:43 crc kubenswrapper[4945]: I0108 23:36:43.619972 4945 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:43 crc kubenswrapper[4945]: I0108 23:36:43.623103 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"17371a82-14e3-4830-b99f-a2b46b4f4366","Type":"ContainerStarted","Data":"654d08b7eab848b3559e43492dbaf993129d4c0a54046399872d08ddd5022848"} Jan 08 23:36:43 crc kubenswrapper[4945]: I0108 23:36:43.623430 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:43 crc kubenswrapper[4945]: I0108 23:36:43.670853 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:43 crc kubenswrapper[4945]: I0108 23:36:43.670936 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 08 23:36:43 crc kubenswrapper[4945]: I0108 23:36:43.878031 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fnsnn"] Jan 08 23:36:43 crc kubenswrapper[4945]: I0108 23:36:43.908558 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-dw2hj"] Jan 08 23:36:43 crc kubenswrapper[4945]: I0108 23:36:43.909827 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:43 crc kubenswrapper[4945]: I0108 23:36:43.921613 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 08 23:36:43 crc kubenswrapper[4945]: I0108 23:36:43.964584 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-dw2hj"] Jan 08 23:36:43 crc kubenswrapper[4945]: I0108 23:36:43.992883 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-dw2hj\" (UID: \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\") " pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:43 crc kubenswrapper[4945]: I0108 23:36:43.992973 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-dw2hj\" (UID: \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\") " pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:43 crc kubenswrapper[4945]: I0108 23:36:43.993013 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6mpq\" (UniqueName: \"kubernetes.io/projected/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-kube-api-access-q6mpq\") pod \"dnsmasq-dns-6bc7876d45-dw2hj\" (UID: \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\") " pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:43 crc kubenswrapper[4945]: I0108 23:36:43.993045 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-config\") pod \"dnsmasq-dns-6bc7876d45-dw2hj\" (UID: \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\") " pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.037801 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-nh2p7"] Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.038842 4945 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.048156 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.056722 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-nh2p7"] Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.095320 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-dw2hj\" (UID: \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\") " pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.095575 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-dw2hj\" (UID: \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\") " pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.095605 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6mpq\" (UniqueName: \"kubernetes.io/projected/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-kube-api-access-q6mpq\") pod \"dnsmasq-dns-6bc7876d45-dw2hj\" (UID: \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\") " pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.095667 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-config\") pod \"dnsmasq-dns-6bc7876d45-dw2hj\" (UID: \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\") " pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.096828 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-dw2hj\" (UID: \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\") " pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.097658 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-dw2hj\" (UID: \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\") " pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.098430 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-config\") pod \"dnsmasq-dns-6bc7876d45-dw2hj\" (UID: \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\") " pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.121035 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6mpq\" (UniqueName: \"kubernetes.io/projected/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-kube-api-access-q6mpq\") pod \"dnsmasq-dns-6bc7876d45-dw2hj\" (UID: \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\") " pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 
23:36:44.201129 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242587ad-03ea-45d9-be99-4deb624ce107-combined-ca-bundle\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.201226 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/242587ad-03ea-45d9-be99-4deb624ce107-ovs-rundir\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.201268 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/242587ad-03ea-45d9-be99-4deb624ce107-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.201293 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvlph\" (UniqueName: \"kubernetes.io/projected/242587ad-03ea-45d9-be99-4deb624ce107-kube-api-access-kvlph\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.201328 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/242587ad-03ea-45d9-be99-4deb624ce107-config\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.201374 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/242587ad-03ea-45d9-be99-4deb624ce107-ovn-rundir\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.233376 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.304826 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242587ad-03ea-45d9-be99-4deb624ce107-combined-ca-bundle\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.304897 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/242587ad-03ea-45d9-be99-4deb624ce107-ovs-rundir\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.304931 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/242587ad-03ea-45d9-be99-4deb624ce107-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.304948 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvlph\" (UniqueName: \"kubernetes.io/projected/242587ad-03ea-45d9-be99-4deb624ce107-kube-api-access-kvlph\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.304976 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/242587ad-03ea-45d9-be99-4deb624ce107-config\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.305023 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/242587ad-03ea-45d9-be99-4deb624ce107-ovn-rundir\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.305678 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/242587ad-03ea-45d9-be99-4deb624ce107-ovn-rundir\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.307731 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/242587ad-03ea-45d9-be99-4deb624ce107-ovs-rundir\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.308557 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/242587ad-03ea-45d9-be99-4deb624ce107-config\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc 
kubenswrapper[4945]: I0108 23:36:44.310781 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242587ad-03ea-45d9-be99-4deb624ce107-combined-ca-bundle\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.313428 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/242587ad-03ea-45d9-be99-4deb624ce107-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.338448 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvlph\" (UniqueName: \"kubernetes.io/projected/242587ad-03ea-45d9-be99-4deb624ce107-kube-api-access-kvlph\") pod \"ovn-controller-metrics-nh2p7\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.367607 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.373883 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tw4sh"] Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.391643 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fnsnn" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.412586 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-bmzvc"] Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.414507 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.424387 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.424582 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bmzvc"] Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.508470 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-config\") pod \"a547696a-ed3c-4a1f-a1b5-1982e6fa62cb\" (UID: \"a547696a-ed3c-4a1f-a1b5-1982e6fa62cb\") " Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.508955 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-dns-svc\") pod \"a547696a-ed3c-4a1f-a1b5-1982e6fa62cb\" (UID: \"a547696a-ed3c-4a1f-a1b5-1982e6fa62cb\") " Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.508973 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-config" (OuterVolumeSpecName: "config") pod "a547696a-ed3c-4a1f-a1b5-1982e6fa62cb" (UID: "a547696a-ed3c-4a1f-a1b5-1982e6fa62cb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.509164 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krb47\" (UniqueName: \"kubernetes.io/projected/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-kube-api-access-krb47\") pod \"a547696a-ed3c-4a1f-a1b5-1982e6fa62cb\" (UID: \"a547696a-ed3c-4a1f-a1b5-1982e6fa62cb\") " Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.509386 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a547696a-ed3c-4a1f-a1b5-1982e6fa62cb" (UID: "a547696a-ed3c-4a1f-a1b5-1982e6fa62cb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.509452 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-bmzvc\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.509535 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-bmzvc\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.509567 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjl7s\" (UniqueName: \"kubernetes.io/projected/b534cf9b-cc04-4939-a856-919b53f7e602-kube-api-access-jjl7s\") pod \"dnsmasq-dns-8554648995-bmzvc\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.509591 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-dns-svc\") pod \"dnsmasq-dns-8554648995-bmzvc\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.509624 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-config\") pod \"dnsmasq-dns-8554648995-bmzvc\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.509662 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.509674 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.532836 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-kube-api-access-krb47" (OuterVolumeSpecName: "kube-api-access-krb47") pod "a547696a-ed3c-4a1f-a1b5-1982e6fa62cb" (UID: "a547696a-ed3c-4a1f-a1b5-1982e6fa62cb"). InnerVolumeSpecName "kube-api-access-krb47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.610561 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-bmzvc\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.610612 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjl7s\" (UniqueName: \"kubernetes.io/projected/b534cf9b-cc04-4939-a856-919b53f7e602-kube-api-access-jjl7s\") pod \"dnsmasq-dns-8554648995-bmzvc\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.610642 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-dns-svc\") pod \"dnsmasq-dns-8554648995-bmzvc\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.610676 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-config\") pod \"dnsmasq-dns-8554648995-bmzvc\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.610710 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-bmzvc\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.610803 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krb47\" (UniqueName: \"kubernetes.io/projected/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb-kube-api-access-krb47\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.611839 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-config\") pod \"dnsmasq-dns-8554648995-bmzvc\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.611880 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-dns-svc\") pod \"dnsmasq-dns-8554648995-bmzvc\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.612268 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-bmzvc\" 
(UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.612465 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-bmzvc\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.629164 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjl7s\" (UniqueName: \"kubernetes.io/projected/b534cf9b-cc04-4939-a856-919b53f7e602-kube-api-access-jjl7s\") pod \"dnsmasq-dns-8554648995-bmzvc\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.649649 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d9574582-49aa-48ec-8b43-bc55ed78a3d1","Type":"ContainerStarted","Data":"01e6f6d29fbc37e10b337041462136b759e49d0ec4a3d75774556109c7c74797"} Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.650245 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.652493 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fnsnn" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.655949 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fnsnn" event={"ID":"a547696a-ed3c-4a1f-a1b5-1982e6fa62cb","Type":"ContainerDied","Data":"132a552815ee08b75a3698ebd9f39aecbedfbceb7b565753b9f648822f42568a"} Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.677570 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.631690808 podStartE2EDuration="38.677553397s" podCreationTimestamp="2026-01-08 23:36:06 +0000 UTC" firstStartedPulling="2026-01-08 23:36:07.438613427 +0000 UTC m=+1237.749772373" lastFinishedPulling="2026-01-08 23:36:43.484476016 +0000 UTC m=+1273.795634962" observedRunningTime="2026-01-08 23:36:44.667977737 +0000 UTC m=+1274.979136693" watchObservedRunningTime="2026-01-08 23:36:44.677553397 +0000 UTC m=+1274.988712343" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.702418 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.722925 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fnsnn"] Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.731293 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fnsnn"] Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.754916 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.852785 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-nh2p7"] Jan 08 23:36:44 crc kubenswrapper[4945]: W0108 23:36:44.869462 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod242587ad_03ea_45d9_be99_4deb624ce107.slice/crio-e065557af6edc7dc9e97c82bd1cb799d5f82d15ee58699fc15aa8b6baa8d5ceb WatchSource:0}: Error finding container e065557af6edc7dc9e97c82bd1cb799d5f82d15ee58699fc15aa8b6baa8d5ceb: Status 404 returned error can't find the container with id e065557af6edc7dc9e97c82bd1cb799d5f82d15ee58699fc15aa8b6baa8d5ceb Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.879276 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-tw4sh" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.946139 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-dw2hj"] Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.972401 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.974774 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.980291 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-d4zfh" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.980492 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.980588 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.980707 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 08 23:36:44 crc kubenswrapper[4945]: I0108 23:36:44.997268 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.018661 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xq6m\" (UniqueName: \"kubernetes.io/projected/8360f40b-d24b-42ab-8240-94ad3970531d-kube-api-access-6xq6m\") pod \"8360f40b-d24b-42ab-8240-94ad3970531d\" (UID: \"8360f40b-d24b-42ab-8240-94ad3970531d\") " Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.018752 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8360f40b-d24b-42ab-8240-94ad3970531d-config\") pod \"8360f40b-d24b-42ab-8240-94ad3970531d\" (UID: \"8360f40b-d24b-42ab-8240-94ad3970531d\") " Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.018832 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8360f40b-d24b-42ab-8240-94ad3970531d-dns-svc\") pod \"8360f40b-d24b-42ab-8240-94ad3970531d\" (UID: \"8360f40b-d24b-42ab-8240-94ad3970531d\") " Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.019540 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8360f40b-d24b-42ab-8240-94ad3970531d-dns-svc" (OuterVolumeSpecName: "dns-svc") 
pod "8360f40b-d24b-42ab-8240-94ad3970531d" (UID: "8360f40b-d24b-42ab-8240-94ad3970531d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.022971 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8360f40b-d24b-42ab-8240-94ad3970531d-config" (OuterVolumeSpecName: "config") pod "8360f40b-d24b-42ab-8240-94ad3970531d" (UID: "8360f40b-d24b-42ab-8240-94ad3970531d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.032556 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8360f40b-d24b-42ab-8240-94ad3970531d-kube-api-access-6xq6m" (OuterVolumeSpecName: "kube-api-access-6xq6m") pod "8360f40b-d24b-42ab-8240-94ad3970531d" (UID: "8360f40b-d24b-42ab-8240-94ad3970531d"). InnerVolumeSpecName "kube-api-access-6xq6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.120551 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.120616 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-scripts\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.120808 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.120934 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.120981 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-config\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.121098 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxlld\" (UniqueName: \"kubernetes.io/projected/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-kube-api-access-cxlld\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.121302 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.121453 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8360f40b-d24b-42ab-8240-94ad3970531d-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.121465 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8360f40b-d24b-42ab-8240-94ad3970531d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.121480 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xq6m\" (UniqueName: \"kubernetes.io/projected/8360f40b-d24b-42ab-8240-94ad3970531d-kube-api-access-6xq6m\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.222866 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-config\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.222925 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxlld\" (UniqueName: \"kubernetes.io/projected/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-kube-api-access-cxlld\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.223006 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.223070 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.223102 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-scripts\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.223173 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.223205 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.224182 4945 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-config\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.224508 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-scripts\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.224906 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.227508 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.227688 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.231833 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.252084 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxlld\" (UniqueName: \"kubernetes.io/projected/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-kube-api-access-cxlld\") pod \"ovn-northd-0\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") " pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.293469 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.299018 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bmzvc"] Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.659909 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" event={"ID":"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32","Type":"ContainerStarted","Data":"0f5e895cc0c8d68bb445ec93a2f663cc90b2dc5d405a6401847a356cc757ff9e"} Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.660973 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-tw4sh" event={"ID":"8360f40b-d24b-42ab-8240-94ad3970531d","Type":"ContainerDied","Data":"257ff73a69d3a13bf645b3ff53bba02ebaaca1d5047587c600fbf907e99b7270"} Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.661039 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-tw4sh" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.661950 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bmzvc" event={"ID":"b534cf9b-cc04-4939-a856-919b53f7e602","Type":"ContainerStarted","Data":"cc155dbda1956a4a0a47d89e8e8666dae0da29bae860a4d8fcab1522d073acfa"} Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.664729 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-nh2p7" event={"ID":"242587ad-03ea-45d9-be99-4deb624ce107","Type":"ContainerStarted","Data":"65d1afbf60fdd5284b8bb7770cb76316e3e898a189e17fc2b6f0138e0d6d3482"} Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.664752 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-nh2p7" event={"ID":"242587ad-03ea-45d9-be99-4deb624ce107","Type":"ContainerStarted","Data":"e065557af6edc7dc9e97c82bd1cb799d5f82d15ee58699fc15aa8b6baa8d5ceb"} Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.695609 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-nh2p7" podStartSLOduration=2.695585529 podStartE2EDuration="2.695585529s" podCreationTimestamp="2026-01-08 23:36:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:36:45.689897243 +0000 UTC m=+1276.001056189" watchObservedRunningTime="2026-01-08 23:36:45.695585529 +0000 UTC m=+1276.006744475" Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.743652 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tw4sh"] Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.765368 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-tw4sh"] Jan 08 23:36:45 crc kubenswrapper[4945]: I0108 23:36:45.780587 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 08 23:36:46 crc kubenswrapper[4945]: I0108 23:36:46.011468 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8360f40b-d24b-42ab-8240-94ad3970531d" path="/var/lib/kubelet/pods/8360f40b-d24b-42ab-8240-94ad3970531d/volumes" Jan 08 23:36:46 crc kubenswrapper[4945]: I0108 23:36:46.011980 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a547696a-ed3c-4a1f-a1b5-1982e6fa62cb" path="/var/lib/kubelet/pods/a547696a-ed3c-4a1f-a1b5-1982e6fa62cb/volumes" Jan 08 23:36:46 crc kubenswrapper[4945]: I0108 23:36:46.673709 4945 generic.go:334] "Generic (PLEG): container finished" podID="75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32" containerID="4878595b1f5ee3ca60af8a11e68d0131fa13c095bd50a3298aad77ce2bb3d8be" exitCode=0 Jan 08 23:36:46 crc kubenswrapper[4945]: I0108 23:36:46.673808 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" event={"ID":"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32","Type":"ContainerDied","Data":"4878595b1f5ee3ca60af8a11e68d0131fa13c095bd50a3298aad77ce2bb3d8be"} Jan 08 23:36:46 crc kubenswrapper[4945]: I0108 23:36:46.677091 4945 generic.go:334] "Generic (PLEG): container finished" podID="b534cf9b-cc04-4939-a856-919b53f7e602" containerID="0a67ee05f259656ae3163822b8c206b855f334f2077d851c074fadf51ed96494" exitCode=0 Jan 08 23:36:46 crc kubenswrapper[4945]: I0108 23:36:46.677164 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bmzvc" 
event={"ID":"b534cf9b-cc04-4939-a856-919b53f7e602","Type":"ContainerDied","Data":"0a67ee05f259656ae3163822b8c206b855f334f2077d851c074fadf51ed96494"} Jan 08 23:36:46 crc kubenswrapper[4945]: I0108 23:36:46.679371 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"eefc7456-a6c7-4442-aa3a-370a1f9b01fa","Type":"ContainerStarted","Data":"41e43fd5ffa8c59c1e5b0b7bca180a4d44bbce1523a4b568e90c29e7d4fef9f5"} Jan 08 23:36:46 crc kubenswrapper[4945]: I0108 23:36:46.681403 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4fe48ad-f532-4c60-b88b-95a894d0b519","Type":"ContainerStarted","Data":"616b374a5eae55d59d9108e2215178983988ca2e990a507b0a9a4f74f2df67bc"} Jan 08 23:36:47 crc kubenswrapper[4945]: I0108 23:36:47.694536 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bmzvc" event={"ID":"b534cf9b-cc04-4939-a856-919b53f7e602","Type":"ContainerStarted","Data":"e15ebb2b171a33dd0d947c766635f2e061a76a7a756a13bfb0763953180a7688"} Jan 08 23:36:47 crc kubenswrapper[4945]: I0108 23:36:47.695859 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:47 crc kubenswrapper[4945]: I0108 23:36:47.699214 4945 generic.go:334] "Generic (PLEG): container finished" podID="17371a82-14e3-4830-b99f-a2b46b4f4366" containerID="654d08b7eab848b3559e43492dbaf993129d4c0a54046399872d08ddd5022848" exitCode=0 Jan 08 23:36:47 crc kubenswrapper[4945]: I0108 23:36:47.699450 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"17371a82-14e3-4830-b99f-a2b46b4f4366","Type":"ContainerDied","Data":"654d08b7eab848b3559e43492dbaf993129d4c0a54046399872d08ddd5022848"} Jan 08 23:36:47 crc kubenswrapper[4945]: I0108 23:36:47.702851 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"eefc7456-a6c7-4442-aa3a-370a1f9b01fa","Type":"ContainerStarted","Data":"a785ea69394c633a9012de634007aaa1ea39fa590a19423eeb91abe2b39b2bdf"} Jan 08 23:36:47 crc kubenswrapper[4945]: I0108 23:36:47.702920 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"eefc7456-a6c7-4442-aa3a-370a1f9b01fa","Type":"ContainerStarted","Data":"fb3097d8a9d3e193dbb4bde56076f63b970772a4be7adc268f6f465dcf3c9975"} Jan 08 23:36:47 crc kubenswrapper[4945]: I0108 23:36:47.704875 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 08 23:36:47 crc kubenswrapper[4945]: I0108 23:36:47.708779 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" event={"ID":"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32","Type":"ContainerStarted","Data":"28921d0a2287ab8280c2e1cae60bccb98a8f09d6c2429c23e0af70e3de16db20"} Jan 08 23:36:47 crc kubenswrapper[4945]: I0108 23:36:47.709012 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:47 crc kubenswrapper[4945]: I0108 23:36:47.733037 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-bmzvc" podStartSLOduration=3.297317264 podStartE2EDuration="3.732979685s" podCreationTimestamp="2026-01-08 23:36:44 +0000 UTC" firstStartedPulling="2026-01-08 23:36:45.331251349 +0000 UTC m=+1275.642410295" lastFinishedPulling="2026-01-08 23:36:45.76691377 +0000 UTC m=+1276.078072716" observedRunningTime="2026-01-08 
23:36:47.730306901 +0000 UTC m=+1278.041465867" watchObservedRunningTime="2026-01-08 23:36:47.732979685 +0000 UTC m=+1278.044138621" Jan 08 23:36:47 crc kubenswrapper[4945]: I0108 23:36:47.767263 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" podStartSLOduration=4.2803671770000005 podStartE2EDuration="4.767239567s" podCreationTimestamp="2026-01-08 23:36:43 +0000 UTC" firstStartedPulling="2026-01-08 23:36:44.952864782 +0000 UTC m=+1275.264023728" lastFinishedPulling="2026-01-08 23:36:45.439737172 +0000 UTC m=+1275.750896118" observedRunningTime="2026-01-08 23:36:47.761347066 +0000 UTC m=+1278.072506022" watchObservedRunningTime="2026-01-08 23:36:47.767239567 +0000 UTC m=+1278.078398523" Jan 08 23:36:47 crc kubenswrapper[4945]: I0108 23:36:47.791428 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.684687308 podStartE2EDuration="3.791405797s" podCreationTimestamp="2026-01-08 23:36:44 +0000 UTC" firstStartedPulling="2026-01-08 23:36:45.792540815 +0000 UTC m=+1276.103699761" lastFinishedPulling="2026-01-08 23:36:46.899259304 +0000 UTC m=+1277.210418250" observedRunningTime="2026-01-08 23:36:47.785106666 +0000 UTC m=+1278.096265652" watchObservedRunningTime="2026-01-08 23:36:47.791405797 +0000 UTC m=+1278.102564743" Jan 08 23:36:48 crc kubenswrapper[4945]: I0108 23:36:48.719173 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"17371a82-14e3-4830-b99f-a2b46b4f4366","Type":"ContainerStarted","Data":"a6d343f7688e540d51f500254defa94f4632dc20f9dbe324ab4669bd7f25c69e"} Jan 08 23:36:48 crc kubenswrapper[4945]: I0108 23:36:48.744817 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.877580576 podStartE2EDuration="45.744797278s" podCreationTimestamp="2026-01-08 23:36:03 +0000 UTC" firstStartedPulling="2026-01-08 23:36:05.585088403 +0000 UTC m=+1235.896247359" lastFinishedPulling="2026-01-08 23:36:42.452305105 +0000 UTC m=+1272.763464061" observedRunningTime="2026-01-08 23:36:48.741404817 +0000 UTC m=+1279.052563783" watchObservedRunningTime="2026-01-08 23:36:48.744797278 +0000 UTC m=+1279.055956224" Jan 08 23:36:48 crc kubenswrapper[4945]: I0108 23:36:48.791284 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 08 23:36:51 crc kubenswrapper[4945]: I0108 23:36:51.709504 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 08 23:36:51 crc kubenswrapper[4945]: I0108 23:36:51.750163 4945 generic.go:334] "Generic (PLEG): container finished" podID="d4fe48ad-f532-4c60-b88b-95a894d0b519" containerID="616b374a5eae55d59d9108e2215178983988ca2e990a507b0a9a4f74f2df67bc" exitCode=0 Jan 08 23:36:51 crc kubenswrapper[4945]: I0108 23:36:51.750226 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4fe48ad-f532-4c60-b88b-95a894d0b519","Type":"ContainerDied","Data":"616b374a5eae55d59d9108e2215178983988ca2e990a507b0a9a4f74f2df67bc"} Jan 08 23:36:52 crc kubenswrapper[4945]: I0108 23:36:52.760311 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4fe48ad-f532-4c60-b88b-95a894d0b519","Type":"ContainerStarted","Data":"6864af6cbe8f647a9f0c28948b92d59f1746e14059ad5a2b80f2933cb34799cc"} Jan 08 23:36:52 crc kubenswrapper[4945]: 
I0108 23:36:52.785955 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371989.068863 podStartE2EDuration="47.785913243s" podCreationTimestamp="2026-01-08 23:36:05 +0000 UTC" firstStartedPulling="2026-01-08 23:36:07.425047232 +0000 UTC m=+1237.736206178" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:36:52.779653853 +0000 UTC m=+1283.090812829" watchObservedRunningTime="2026-01-08 23:36:52.785913243 +0000 UTC m=+1283.097072239" Jan 08 23:36:54 crc kubenswrapper[4945]: I0108 23:36:54.235728 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:54 crc kubenswrapper[4945]: I0108 23:36:54.756216 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:36:54 crc kubenswrapper[4945]: I0108 23:36:54.836356 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-dw2hj"] Jan 08 23:36:54 crc kubenswrapper[4945]: I0108 23:36:54.836730 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" podUID="75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32" containerName="dnsmasq-dns" containerID="cri-o://28921d0a2287ab8280c2e1cae60bccb98a8f09d6c2429c23e0af70e3de16db20" gracePeriod=10 Jan 08 23:36:54 crc kubenswrapper[4945]: I0108 23:36:54.905334 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 08 23:36:54 crc kubenswrapper[4945]: I0108 23:36:54.905852 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.034226 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.297412 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.409042 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-ovsdbserver-sb\") pod \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\" (UID: \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\") " Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.409258 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6mpq\" (UniqueName: \"kubernetes.io/projected/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-kube-api-access-q6mpq\") pod \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\" (UID: \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\") " Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.409298 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-config\") pod \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\" (UID: \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\") " Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.409328 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-dns-svc\") pod \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\" (UID: \"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32\") " Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.434145 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-kube-api-access-q6mpq" (OuterVolumeSpecName: "kube-api-access-q6mpq") pod "75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32" (UID: "75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32"). InnerVolumeSpecName "kube-api-access-q6mpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.459105 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-config" (OuterVolumeSpecName: "config") pod "75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32" (UID: "75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.467777 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32" (UID: "75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.474515 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32" (UID: "75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.512486 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.512522 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6mpq\" (UniqueName: \"kubernetes.io/projected/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-kube-api-access-q6mpq\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.512537 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.512547 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.789849 4945 generic.go:334] "Generic (PLEG): container finished" podID="75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32" containerID="28921d0a2287ab8280c2e1cae60bccb98a8f09d6c2429c23e0af70e3de16db20" exitCode=0 Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.789982 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" event={"ID":"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32","Type":"ContainerDied","Data":"28921d0a2287ab8280c2e1cae60bccb98a8f09d6c2429c23e0af70e3de16db20"} Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.790226 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" event={"ID":"75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32","Type":"ContainerDied","Data":"0f5e895cc0c8d68bb445ec93a2f663cc90b2dc5d405a6401847a356cc757ff9e"} Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.790260 4945 scope.go:117] "RemoveContainer" containerID="28921d0a2287ab8280c2e1cae60bccb98a8f09d6c2429c23e0af70e3de16db20" Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.789984 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-dw2hj" Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.853296 4945 scope.go:117] "RemoveContainer" containerID="4878595b1f5ee3ca60af8a11e68d0131fa13c095bd50a3298aad77ce2bb3d8be" Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.853685 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-dw2hj"] Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.870027 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-dw2hj"] Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.947195 4945 scope.go:117] "RemoveContainer" containerID="28921d0a2287ab8280c2e1cae60bccb98a8f09d6c2429c23e0af70e3de16db20" Jan 08 23:36:55 crc kubenswrapper[4945]: E0108 23:36:55.951106 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28921d0a2287ab8280c2e1cae60bccb98a8f09d6c2429c23e0af70e3de16db20\": container with ID starting with 28921d0a2287ab8280c2e1cae60bccb98a8f09d6c2429c23e0af70e3de16db20 not found: ID does not exist" containerID="28921d0a2287ab8280c2e1cae60bccb98a8f09d6c2429c23e0af70e3de16db20" Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.951151 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28921d0a2287ab8280c2e1cae60bccb98a8f09d6c2429c23e0af70e3de16db20"} err="failed to get container status \"28921d0a2287ab8280c2e1cae60bccb98a8f09d6c2429c23e0af70e3de16db20\": rpc error: code = NotFound desc = could not find container \"28921d0a2287ab8280c2e1cae60bccb98a8f09d6c2429c23e0af70e3de16db20\": container with ID starting with 28921d0a2287ab8280c2e1cae60bccb98a8f09d6c2429c23e0af70e3de16db20 not found: ID does not exist" Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.951180 4945 scope.go:117] "RemoveContainer" containerID="4878595b1f5ee3ca60af8a11e68d0131fa13c095bd50a3298aad77ce2bb3d8be" Jan 08 23:36:55 crc kubenswrapper[4945]: E0108 23:36:55.956448 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4878595b1f5ee3ca60af8a11e68d0131fa13c095bd50a3298aad77ce2bb3d8be\": container with ID starting with 4878595b1f5ee3ca60af8a11e68d0131fa13c095bd50a3298aad77ce2bb3d8be not found: ID does not exist" containerID="4878595b1f5ee3ca60af8a11e68d0131fa13c095bd50a3298aad77ce2bb3d8be" Jan 08 23:36:55 crc kubenswrapper[4945]: I0108 23:36:55.956484 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4878595b1f5ee3ca60af8a11e68d0131fa13c095bd50a3298aad77ce2bb3d8be"} err="failed to get container status \"4878595b1f5ee3ca60af8a11e68d0131fa13c095bd50a3298aad77ce2bb3d8be\": rpc error: code = NotFound desc = could not find container \"4878595b1f5ee3ca60af8a11e68d0131fa13c095bd50a3298aad77ce2bb3d8be\": container with ID starting with 4878595b1f5ee3ca60af8a11e68d0131fa13c095bd50a3298aad77ce2bb3d8be not found: ID does not exist" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.011801 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32" path="/var/lib/kubelet/pods/75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32/volumes" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.022404 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.365655 4945 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4ee0-account-create-update-pmzqj"] Jan 08 23:36:56 crc kubenswrapper[4945]: E0108 23:36:56.366297 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32" containerName="dnsmasq-dns" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.366316 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32" containerName="dnsmasq-dns" Jan 08 23:36:56 crc kubenswrapper[4945]: E0108 23:36:56.366339 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32" containerName="init" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.366346 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32" containerName="init" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.366573 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f7aff5-5d8e-4e23-b8a3-4ff2420f5b32" containerName="dnsmasq-dns" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.367349 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4ee0-account-create-update-pmzqj" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.370730 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.375153 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4ee0-account-create-update-pmzqj"] Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.437813 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-77g9h"] Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.438909 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-77g9h" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.449040 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-77g9h"] Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.535601 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcttv\" (UniqueName: \"kubernetes.io/projected/2cf00aca-6357-47bb-8d88-a931518afa75-kube-api-access-wcttv\") pod \"keystone-db-create-77g9h\" (UID: \"2cf00aca-6357-47bb-8d88-a931518afa75\") " pod="openstack/keystone-db-create-77g9h" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.535777 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf98b083-95c0-4e25-b0c2-e5064ebde5fd-operator-scripts\") pod \"keystone-4ee0-account-create-update-pmzqj\" (UID: \"bf98b083-95c0-4e25-b0c2-e5064ebde5fd\") " pod="openstack/keystone-4ee0-account-create-update-pmzqj" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.536124 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfnnc\" (UniqueName: \"kubernetes.io/projected/bf98b083-95c0-4e25-b0c2-e5064ebde5fd-kube-api-access-jfnnc\") pod \"keystone-4ee0-account-create-update-pmzqj\" (UID: \"bf98b083-95c0-4e25-b0c2-e5064ebde5fd\") " pod="openstack/keystone-4ee0-account-create-update-pmzqj" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.536271 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf00aca-6357-47bb-8d88-a931518afa75-operator-scripts\") pod \"keystone-db-create-77g9h\" (UID: \"2cf00aca-6357-47bb-8d88-a931518afa75\") " pod="openstack/keystone-db-create-77g9h" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.594518 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-rckwr"] Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.596524 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-rckwr" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.604692 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rckwr"] Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.639752 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf98b083-95c0-4e25-b0c2-e5064ebde5fd-operator-scripts\") pod \"keystone-4ee0-account-create-update-pmzqj\" (UID: \"bf98b083-95c0-4e25-b0c2-e5064ebde5fd\") " pod="openstack/keystone-4ee0-account-create-update-pmzqj" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.639860 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfnnc\" (UniqueName: \"kubernetes.io/projected/bf98b083-95c0-4e25-b0c2-e5064ebde5fd-kube-api-access-jfnnc\") pod \"keystone-4ee0-account-create-update-pmzqj\" (UID: \"bf98b083-95c0-4e25-b0c2-e5064ebde5fd\") " pod="openstack/keystone-4ee0-account-create-update-pmzqj" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.639904 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf00aca-6357-47bb-8d88-a931518afa75-operator-scripts\") pod \"keystone-db-create-77g9h\" (UID: \"2cf00aca-6357-47bb-8d88-a931518afa75\") " pod="openstack/keystone-db-create-77g9h" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.639967 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcttv\" (UniqueName: \"kubernetes.io/projected/2cf00aca-6357-47bb-8d88-a931518afa75-kube-api-access-wcttv\") pod \"keystone-db-create-77g9h\" (UID: \"2cf00aca-6357-47bb-8d88-a931518afa75\") " pod="openstack/keystone-db-create-77g9h" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.640810 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf98b083-95c0-4e25-b0c2-e5064ebde5fd-operator-scripts\") pod \"keystone-4ee0-account-create-update-pmzqj\" (UID: \"bf98b083-95c0-4e25-b0c2-e5064ebde5fd\") " pod="openstack/keystone-4ee0-account-create-update-pmzqj" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.641592 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf00aca-6357-47bb-8d88-a931518afa75-operator-scripts\") pod \"keystone-db-create-77g9h\" (UID: \"2cf00aca-6357-47bb-8d88-a931518afa75\") " pod="openstack/keystone-db-create-77g9h" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.677095 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcttv\" (UniqueName: \"kubernetes.io/projected/2cf00aca-6357-47bb-8d88-a931518afa75-kube-api-access-wcttv\") pod \"keystone-db-create-77g9h\" (UID: \"2cf00aca-6357-47bb-8d88-a931518afa75\") " pod="openstack/keystone-db-create-77g9h" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.684728 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfnnc\" (UniqueName: \"kubernetes.io/projected/bf98b083-95c0-4e25-b0c2-e5064ebde5fd-kube-api-access-jfnnc\") pod \"keystone-4ee0-account-create-update-pmzqj\" (UID: \"bf98b083-95c0-4e25-b0c2-e5064ebde5fd\") " pod="openstack/keystone-4ee0-account-create-update-pmzqj" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.697184 4945 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/keystone-4ee0-account-create-update-pmzqj" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.718805 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3bb0-account-create-update-89p7q"] Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.724188 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3bb0-account-create-update-89p7q" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.732258 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.746655 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3bb0-account-create-update-89p7q"] Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.755657 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzshm\" (UniqueName: \"kubernetes.io/projected/0101c256-7c32-4906-897c-112a6c686f66-kube-api-access-mzshm\") pod \"placement-db-create-rckwr\" (UID: \"0101c256-7c32-4906-897c-112a6c686f66\") " pod="openstack/placement-db-create-rckwr" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.755854 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0101c256-7c32-4906-897c-112a6c686f66-operator-scripts\") pod \"placement-db-create-rckwr\" (UID: \"0101c256-7c32-4906-897c-112a6c686f66\") " pod="openstack/placement-db-create-rckwr" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.777568 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.778634 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.785648 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-77g9h" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.857939 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzshm\" (UniqueName: \"kubernetes.io/projected/0101c256-7c32-4906-897c-112a6c686f66-kube-api-access-mzshm\") pod \"placement-db-create-rckwr\" (UID: \"0101c256-7c32-4906-897c-112a6c686f66\") " pod="openstack/placement-db-create-rckwr" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.858053 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0101c256-7c32-4906-897c-112a6c686f66-operator-scripts\") pod \"placement-db-create-rckwr\" (UID: \"0101c256-7c32-4906-897c-112a6c686f66\") " pod="openstack/placement-db-create-rckwr" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.858107 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4de1550e-e5d5-4cba-bdc5-e56194b3446d-operator-scripts\") pod \"placement-3bb0-account-create-update-89p7q\" (UID: \"4de1550e-e5d5-4cba-bdc5-e56194b3446d\") " pod="openstack/placement-3bb0-account-create-update-89p7q" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.858262 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swwq5\" (UniqueName: \"kubernetes.io/projected/4de1550e-e5d5-4cba-bdc5-e56194b3446d-kube-api-access-swwq5\") pod \"placement-3bb0-account-create-update-89p7q\" (UID: \"4de1550e-e5d5-4cba-bdc5-e56194b3446d\") " pod="openstack/placement-3bb0-account-create-update-89p7q" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.861652 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0101c256-7c32-4906-897c-112a6c686f66-operator-scripts\") pod \"placement-db-create-rckwr\" (UID: \"0101c256-7c32-4906-897c-112a6c686f66\") " pod="openstack/placement-db-create-rckwr" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.892402 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzshm\" (UniqueName: \"kubernetes.io/projected/0101c256-7c32-4906-897c-112a6c686f66-kube-api-access-mzshm\") pod \"placement-db-create-rckwr\" (UID: \"0101c256-7c32-4906-897c-112a6c686f66\") " pod="openstack/placement-db-create-rckwr" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.892671 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.924142 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-rckwr" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.959748 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swwq5\" (UniqueName: \"kubernetes.io/projected/4de1550e-e5d5-4cba-bdc5-e56194b3446d-kube-api-access-swwq5\") pod \"placement-3bb0-account-create-update-89p7q\" (UID: \"4de1550e-e5d5-4cba-bdc5-e56194b3446d\") " pod="openstack/placement-3bb0-account-create-update-89p7q" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.959915 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4de1550e-e5d5-4cba-bdc5-e56194b3446d-operator-scripts\") pod \"placement-3bb0-account-create-update-89p7q\" (UID: \"4de1550e-e5d5-4cba-bdc5-e56194b3446d\") " pod="openstack/placement-3bb0-account-create-update-89p7q" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.961546 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4de1550e-e5d5-4cba-bdc5-e56194b3446d-operator-scripts\") pod \"placement-3bb0-account-create-update-89p7q\" (UID: \"4de1550e-e5d5-4cba-bdc5-e56194b3446d\") " pod="openstack/placement-3bb0-account-create-update-89p7q" Jan 08 23:36:56 crc kubenswrapper[4945]: I0108 23:36:56.982494 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swwq5\" (UniqueName: \"kubernetes.io/projected/4de1550e-e5d5-4cba-bdc5-e56194b3446d-kube-api-access-swwq5\") pod \"placement-3bb0-account-create-update-89p7q\" (UID: \"4de1550e-e5d5-4cba-bdc5-e56194b3446d\") " pod="openstack/placement-3bb0-account-create-update-89p7q" Jan 08 23:36:57 crc kubenswrapper[4945]: I0108 23:36:57.180658 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3bb0-account-create-update-89p7q" Jan 08 23:36:57 crc kubenswrapper[4945]: I0108 23:36:57.256469 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4ee0-account-create-update-pmzqj"] Jan 08 23:36:57 crc kubenswrapper[4945]: I0108 23:36:57.353807 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-77g9h"] Jan 08 23:36:57 crc kubenswrapper[4945]: I0108 23:36:57.431026 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rckwr"] Jan 08 23:36:57 crc kubenswrapper[4945]: W0108 23:36:57.445422 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0101c256_7c32_4906_897c_112a6c686f66.slice/crio-06cf0cf4c65a98882692c27ed453d6b47b20134e673e161c9710f70bf0f0183a WatchSource:0}: Error finding container 06cf0cf4c65a98882692c27ed453d6b47b20134e673e161c9710f70bf0f0183a: Status 404 returned error can't find the container with id 06cf0cf4c65a98882692c27ed453d6b47b20134e673e161c9710f70bf0f0183a Jan 08 23:36:57 crc kubenswrapper[4945]: I0108 23:36:57.676008 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3bb0-account-create-update-89p7q"] Jan 08 23:36:57 crc kubenswrapper[4945]: W0108 23:36:57.677788 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4de1550e_e5d5_4cba_bdc5_e56194b3446d.slice/crio-d8ec61113e4b3e41a09f39ef958d15b560c133111da95ffda7dfe6d36d7c1eda WatchSource:0}: Error finding container d8ec61113e4b3e41a09f39ef958d15b560c133111da95ffda7dfe6d36d7c1eda: Status 404 returned error can't find the container with id d8ec61113e4b3e41a09f39ef958d15b560c133111da95ffda7dfe6d36d7c1eda Jan 08 23:36:57 crc kubenswrapper[4945]: I0108 23:36:57.819540 4945 generic.go:334] "Generic (PLEG): container finished" podID="0101c256-7c32-4906-897c-112a6c686f66" containerID="efd2bc7b040f659f7206782fec4c8107ca83b21da56fad17b1b8b0d43bd222ba" exitCode=0 Jan 08 23:36:57 crc kubenswrapper[4945]: I0108 23:36:57.820235 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rckwr" event={"ID":"0101c256-7c32-4906-897c-112a6c686f66","Type":"ContainerDied","Data":"efd2bc7b040f659f7206782fec4c8107ca83b21da56fad17b1b8b0d43bd222ba"} Jan 08 23:36:57 crc kubenswrapper[4945]: I0108 23:36:57.820285 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rckwr" event={"ID":"0101c256-7c32-4906-897c-112a6c686f66","Type":"ContainerStarted","Data":"06cf0cf4c65a98882692c27ed453d6b47b20134e673e161c9710f70bf0f0183a"} Jan 08 23:36:57 crc kubenswrapper[4945]: I0108 23:36:57.821914 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3bb0-account-create-update-89p7q" event={"ID":"4de1550e-e5d5-4cba-bdc5-e56194b3446d","Type":"ContainerStarted","Data":"d8ec61113e4b3e41a09f39ef958d15b560c133111da95ffda7dfe6d36d7c1eda"} Jan 08 23:36:57 crc kubenswrapper[4945]: I0108 23:36:57.824060 4945 generic.go:334] "Generic (PLEG): container finished" podID="bf98b083-95c0-4e25-b0c2-e5064ebde5fd" containerID="85a51471e64c9c8202d37be77ba6c842abfb6fe410974730af0489e3a2774c94" exitCode=0 Jan 08 23:36:57 crc kubenswrapper[4945]: I0108 23:36:57.824123 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4ee0-account-create-update-pmzqj" 
event={"ID":"bf98b083-95c0-4e25-b0c2-e5064ebde5fd","Type":"ContainerDied","Data":"85a51471e64c9c8202d37be77ba6c842abfb6fe410974730af0489e3a2774c94"} Jan 08 23:36:57 crc kubenswrapper[4945]: I0108 23:36:57.824144 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4ee0-account-create-update-pmzqj" event={"ID":"bf98b083-95c0-4e25-b0c2-e5064ebde5fd","Type":"ContainerStarted","Data":"6eabae109cbfcc58707aa83f09dc181f62ec86d66a33ea482734a68719d66913"} Jan 08 23:36:57 crc kubenswrapper[4945]: I0108 23:36:57.826248 4945 generic.go:334] "Generic (PLEG): container finished" podID="2cf00aca-6357-47bb-8d88-a931518afa75" containerID="dfbb378f03def2a6587de835fb231d3a876888d83ae0b3bf039ebc2fea867371" exitCode=0 Jan 08 23:36:57 crc kubenswrapper[4945]: I0108 23:36:57.826428 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-77g9h" event={"ID":"2cf00aca-6357-47bb-8d88-a931518afa75","Type":"ContainerDied","Data":"dfbb378f03def2a6587de835fb231d3a876888d83ae0b3bf039ebc2fea867371"} Jan 08 23:36:57 crc kubenswrapper[4945]: I0108 23:36:57.826479 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-77g9h" event={"ID":"2cf00aca-6357-47bb-8d88-a931518afa75","Type":"ContainerStarted","Data":"20f947a0a3964b01780d67aa39223f96a363a524ac3ace68c7be4f5c9c80d339"} Jan 08 23:36:57 crc kubenswrapper[4945]: I0108 23:36:57.928743 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 08 23:36:58 crc kubenswrapper[4945]: I0108 23:36:58.859401 4945 generic.go:334] "Generic (PLEG): container finished" podID="4de1550e-e5d5-4cba-bdc5-e56194b3446d" containerID="67b80221022e04916251de3638954da70b4cf4aa9fe4a41c3f21b1506de9ebdc" exitCode=0 Jan 08 23:36:58 crc kubenswrapper[4945]: I0108 23:36:58.863550 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3bb0-account-create-update-89p7q" event={"ID":"4de1550e-e5d5-4cba-bdc5-e56194b3446d","Type":"ContainerDied","Data":"67b80221022e04916251de3638954da70b4cf4aa9fe4a41c3f21b1506de9ebdc"} Jan 08 23:36:58 crc kubenswrapper[4945]: I0108 23:36:58.902595 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-btlgr"] Jan 08 23:36:58 crc kubenswrapper[4945]: I0108 23:36:58.908913 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:36:58 crc kubenswrapper[4945]: I0108 23:36:58.923111 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-btlgr"] Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.104161 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-btlgr\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.104253 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-btlgr\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.104297 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfctn\" (UniqueName: \"kubernetes.io/projected/3602d69c-6735-47da-b4ca-ef53f5e70a29-kube-api-access-dfctn\") pod \"dnsmasq-dns-b8fbc5445-btlgr\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.104315 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-btlgr\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.104415 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-config\") pod \"dnsmasq-dns-b8fbc5445-btlgr\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.206817 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfctn\" (UniqueName: \"kubernetes.io/projected/3602d69c-6735-47da-b4ca-ef53f5e70a29-kube-api-access-dfctn\") pod \"dnsmasq-dns-b8fbc5445-btlgr\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.206879 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-btlgr\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.206949 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-config\") pod \"dnsmasq-dns-b8fbc5445-btlgr\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.207077 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-btlgr\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.207101 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-btlgr\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.208485 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-btlgr\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.209470 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-btlgr\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.209777 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-config\") pod \"dnsmasq-dns-b8fbc5445-btlgr\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.210244 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-btlgr\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.241503 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfctn\" (UniqueName: \"kubernetes.io/projected/3602d69c-6735-47da-b4ca-ef53f5e70a29-kube-api-access-dfctn\") pod \"dnsmasq-dns-b8fbc5445-btlgr\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.257556 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.372124 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-77g9h" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.411686 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcttv\" (UniqueName: \"kubernetes.io/projected/2cf00aca-6357-47bb-8d88-a931518afa75-kube-api-access-wcttv\") pod \"2cf00aca-6357-47bb-8d88-a931518afa75\" (UID: \"2cf00aca-6357-47bb-8d88-a931518afa75\") " Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.411793 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf00aca-6357-47bb-8d88-a931518afa75-operator-scripts\") pod \"2cf00aca-6357-47bb-8d88-a931518afa75\" (UID: \"2cf00aca-6357-47bb-8d88-a931518afa75\") " Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.413062 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cf00aca-6357-47bb-8d88-a931518afa75-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2cf00aca-6357-47bb-8d88-a931518afa75" (UID: "2cf00aca-6357-47bb-8d88-a931518afa75"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.418217 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf00aca-6357-47bb-8d88-a931518afa75-kube-api-access-wcttv" (OuterVolumeSpecName: "kube-api-access-wcttv") pod "2cf00aca-6357-47bb-8d88-a931518afa75" (UID: "2cf00aca-6357-47bb-8d88-a931518afa75"). InnerVolumeSpecName "kube-api-access-wcttv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.513527 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcttv\" (UniqueName: \"kubernetes.io/projected/2cf00aca-6357-47bb-8d88-a931518afa75-kube-api-access-wcttv\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.513564 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2cf00aca-6357-47bb-8d88-a931518afa75-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.534580 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rckwr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.543727 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4ee0-account-create-update-pmzqj" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.615537 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0101c256-7c32-4906-897c-112a6c686f66-operator-scripts\") pod \"0101c256-7c32-4906-897c-112a6c686f66\" (UID: \"0101c256-7c32-4906-897c-112a6c686f66\") " Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.615587 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzshm\" (UniqueName: \"kubernetes.io/projected/0101c256-7c32-4906-897c-112a6c686f66-kube-api-access-mzshm\") pod \"0101c256-7c32-4906-897c-112a6c686f66\" (UID: \"0101c256-7c32-4906-897c-112a6c686f66\") " Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.615786 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf98b083-95c0-4e25-b0c2-e5064ebde5fd-operator-scripts\") pod \"bf98b083-95c0-4e25-b0c2-e5064ebde5fd\" (UID: \"bf98b083-95c0-4e25-b0c2-e5064ebde5fd\") " Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.615834 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfnnc\" (UniqueName: \"kubernetes.io/projected/bf98b083-95c0-4e25-b0c2-e5064ebde5fd-kube-api-access-jfnnc\") pod \"bf98b083-95c0-4e25-b0c2-e5064ebde5fd\" (UID: \"bf98b083-95c0-4e25-b0c2-e5064ebde5fd\") " Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.617257 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf98b083-95c0-4e25-b0c2-e5064ebde5fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf98b083-95c0-4e25-b0c2-e5064ebde5fd" (UID: "bf98b083-95c0-4e25-b0c2-e5064ebde5fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.617652 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0101c256-7c32-4906-897c-112a6c686f66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0101c256-7c32-4906-897c-112a6c686f66" (UID: "0101c256-7c32-4906-897c-112a6c686f66"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.620740 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf98b083-95c0-4e25-b0c2-e5064ebde5fd-kube-api-access-jfnnc" (OuterVolumeSpecName: "kube-api-access-jfnnc") pod "bf98b083-95c0-4e25-b0c2-e5064ebde5fd" (UID: "bf98b083-95c0-4e25-b0c2-e5064ebde5fd"). InnerVolumeSpecName "kube-api-access-jfnnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.627331 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0101c256-7c32-4906-897c-112a6c686f66-kube-api-access-mzshm" (OuterVolumeSpecName: "kube-api-access-mzshm") pod "0101c256-7c32-4906-897c-112a6c686f66" (UID: "0101c256-7c32-4906-897c-112a6c686f66"). InnerVolumeSpecName "kube-api-access-mzshm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.717006 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0101c256-7c32-4906-897c-112a6c686f66-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.717040 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzshm\" (UniqueName: \"kubernetes.io/projected/0101c256-7c32-4906-897c-112a6c686f66-kube-api-access-mzshm\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.717057 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf98b083-95c0-4e25-b0c2-e5064ebde5fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.717069 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfnnc\" (UniqueName: \"kubernetes.io/projected/bf98b083-95c0-4e25-b0c2-e5064ebde5fd-kube-api-access-jfnnc\") on node \"crc\" DevicePath \"\"" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.871160 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4ee0-account-create-update-pmzqj" event={"ID":"bf98b083-95c0-4e25-b0c2-e5064ebde5fd","Type":"ContainerDied","Data":"6eabae109cbfcc58707aa83f09dc181f62ec86d66a33ea482734a68719d66913"} Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.871531 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eabae109cbfcc58707aa83f09dc181f62ec86d66a33ea482734a68719d66913" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.871193 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4ee0-account-create-update-pmzqj" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.873779 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-77g9h" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.873798 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-77g9h" event={"ID":"2cf00aca-6357-47bb-8d88-a931518afa75","Type":"ContainerDied","Data":"20f947a0a3964b01780d67aa39223f96a363a524ac3ace68c7be4f5c9c80d339"} Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.873860 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20f947a0a3964b01780d67aa39223f96a363a524ac3ace68c7be4f5c9c80d339" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.876462 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rckwr" event={"ID":"0101c256-7c32-4906-897c-112a6c686f66","Type":"ContainerDied","Data":"06cf0cf4c65a98882692c27ed453d6b47b20134e673e161c9710f70bf0f0183a"} Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.876503 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06cf0cf4c65a98882692c27ed453d6b47b20134e673e161c9710f70bf0f0183a" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.876482 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-rckwr" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.887194 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-btlgr"] Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.896425 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 08 23:36:59 crc kubenswrapper[4945]: E0108 23:36:59.896816 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf98b083-95c0-4e25-b0c2-e5064ebde5fd" containerName="mariadb-account-create-update" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.896833 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf98b083-95c0-4e25-b0c2-e5064ebde5fd" containerName="mariadb-account-create-update" Jan 08 23:36:59 crc kubenswrapper[4945]: E0108 23:36:59.896879 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf00aca-6357-47bb-8d88-a931518afa75" containerName="mariadb-database-create" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.896888 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf00aca-6357-47bb-8d88-a931518afa75" containerName="mariadb-database-create" Jan 08 23:36:59 crc kubenswrapper[4945]: E0108 23:36:59.896904 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0101c256-7c32-4906-897c-112a6c686f66" containerName="mariadb-database-create" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.896912 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="0101c256-7c32-4906-897c-112a6c686f66" containerName="mariadb-database-create" Jan 08 23:36:59 crc kubenswrapper[4945]: W0108 23:36:59.901019 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3602d69c_6735_47da_b4ca_ef53f5e70a29.slice/crio-92fdcc564b517d5cad25b7a4c008a327351b91816ee5118f89b9f18fd0fbabea WatchSource:0}: Error finding container 92fdcc564b517d5cad25b7a4c008a327351b91816ee5118f89b9f18fd0fbabea: Status 404 returned error can't find the container with id 92fdcc564b517d5cad25b7a4c008a327351b91816ee5118f89b9f18fd0fbabea Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.901410 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf98b083-95c0-4e25-b0c2-e5064ebde5fd" containerName="mariadb-account-create-update" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.905352 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf00aca-6357-47bb-8d88-a931518afa75" containerName="mariadb-database-create" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.905403 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="0101c256-7c32-4906-897c-112a6c686f66" containerName="mariadb-database-create" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.928441 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.928646 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.933637 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-22rch" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.933965 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.934160 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 08 23:36:59 crc kubenswrapper[4945]: I0108 23:36:59.934309 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.123048 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.123178 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj6rh\" (UniqueName: \"kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-kube-api-access-kj6rh\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.123238 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/12eb7cf8-4c67-4574-a65b-dc82c7285c68-lock\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.123259 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.123320 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/12eb7cf8-4c67-4574-a65b-dc82c7285c68-cache\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.213358 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3bb0-account-create-update-89p7q" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.226912 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/12eb7cf8-4c67-4574-a65b-dc82c7285c68-cache\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.227059 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.227222 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj6rh\" (UniqueName: \"kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-kube-api-access-kj6rh\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.227257 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/12eb7cf8-4c67-4574-a65b-dc82c7285c68-lock\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.227279 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:00 crc kubenswrapper[4945]: E0108 23:37:00.227513 4945 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 08 23:37:00 crc kubenswrapper[4945]: E0108 23:37:00.227533 4945 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 08 23:37:00 crc kubenswrapper[4945]: E0108 23:37:00.227600 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift podName:12eb7cf8-4c67-4574-a65b-dc82c7285c68 nodeName:}" failed. No retries permitted until 2026-01-08 23:37:00.727577113 +0000 UTC m=+1291.038736059 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift") pod "swift-storage-0" (UID: "12eb7cf8-4c67-4574-a65b-dc82c7285c68") : configmap "swift-ring-files" not found Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.227592 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.227716 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/12eb7cf8-4c67-4574-a65b-dc82c7285c68-cache\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.227826 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/12eb7cf8-4c67-4574-a65b-dc82c7285c68-lock\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.256064 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj6rh\" (UniqueName: \"kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-kube-api-access-kj6rh\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.259718 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.329250 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4de1550e-e5d5-4cba-bdc5-e56194b3446d-operator-scripts\") pod \"4de1550e-e5d5-4cba-bdc5-e56194b3446d\" (UID: \"4de1550e-e5d5-4cba-bdc5-e56194b3446d\") " Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.329408 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swwq5\" (UniqueName: \"kubernetes.io/projected/4de1550e-e5d5-4cba-bdc5-e56194b3446d-kube-api-access-swwq5\") pod \"4de1550e-e5d5-4cba-bdc5-e56194b3446d\" (UID: \"4de1550e-e5d5-4cba-bdc5-e56194b3446d\") " Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.336066 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de1550e-e5d5-4cba-bdc5-e56194b3446d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4de1550e-e5d5-4cba-bdc5-e56194b3446d" (UID: "4de1550e-e5d5-4cba-bdc5-e56194b3446d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.338611 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de1550e-e5d5-4cba-bdc5-e56194b3446d-kube-api-access-swwq5" (OuterVolumeSpecName: "kube-api-access-swwq5") pod "4de1550e-e5d5-4cba-bdc5-e56194b3446d" (UID: "4de1550e-e5d5-4cba-bdc5-e56194b3446d"). 
InnerVolumeSpecName "kube-api-access-swwq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.370386 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.380270 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-nqm45"] Jan 08 23:37:00 crc kubenswrapper[4945]: E0108 23:37:00.380654 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de1550e-e5d5-4cba-bdc5-e56194b3446d" containerName="mariadb-account-create-update" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.380671 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de1550e-e5d5-4cba-bdc5-e56194b3446d" containerName="mariadb-account-create-update" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.380834 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de1550e-e5d5-4cba-bdc5-e56194b3446d" containerName="mariadb-account-create-update" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.381436 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.384846 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.384866 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.386922 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.397074 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nqm45"] Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.431587 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swwq5\" (UniqueName: \"kubernetes.io/projected/4de1550e-e5d5-4cba-bdc5-e56194b3446d-kube-api-access-swwq5\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.431623 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4de1550e-e5d5-4cba-bdc5-e56194b3446d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.533548 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-combined-ca-bundle\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.533649 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1155ea44-2cab-445e-a621-fbd85a2b31a9-etc-swift\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.533747 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-dispersionconf\") pod 
\"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.533793 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1155ea44-2cab-445e-a621-fbd85a2b31a9-scripts\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.533899 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k6pv\" (UniqueName: \"kubernetes.io/projected/1155ea44-2cab-445e-a621-fbd85a2b31a9-kube-api-access-2k6pv\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.533930 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-swiftconf\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.534010 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1155ea44-2cab-445e-a621-fbd85a2b31a9-ring-data-devices\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.635941 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1155ea44-2cab-445e-a621-fbd85a2b31a9-ring-data-devices\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.636176 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-combined-ca-bundle\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.636208 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1155ea44-2cab-445e-a621-fbd85a2b31a9-etc-swift\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.636335 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-dispersionconf\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.636738 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1155ea44-2cab-445e-a621-fbd85a2b31a9-etc-swift\") pod \"swift-ring-rebalance-nqm45\" (UID: 
\"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.636847 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1155ea44-2cab-445e-a621-fbd85a2b31a9-scripts\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.636943 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k6pv\" (UniqueName: \"kubernetes.io/projected/1155ea44-2cab-445e-a621-fbd85a2b31a9-kube-api-access-2k6pv\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.636978 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-swiftconf\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.636962 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1155ea44-2cab-445e-a621-fbd85a2b31a9-ring-data-devices\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.637465 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1155ea44-2cab-445e-a621-fbd85a2b31a9-scripts\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.644499 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-combined-ca-bundle\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.644786 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-swiftconf\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.654564 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-dispersionconf\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.659376 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k6pv\" (UniqueName: \"kubernetes.io/projected/1155ea44-2cab-445e-a621-fbd85a2b31a9-kube-api-access-2k6pv\") pod \"swift-ring-rebalance-nqm45\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.709059 4945 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.738885 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:00 crc kubenswrapper[4945]: E0108 23:37:00.739099 4945 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 08 23:37:00 crc kubenswrapper[4945]: E0108 23:37:00.739122 4945 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 08 23:37:00 crc kubenswrapper[4945]: E0108 23:37:00.739174 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift podName:12eb7cf8-4c67-4574-a65b-dc82c7285c68 nodeName:}" failed. No retries permitted until 2026-01-08 23:37:01.739159055 +0000 UTC m=+1292.050318001 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift") pod "swift-storage-0" (UID: "12eb7cf8-4c67-4574-a65b-dc82c7285c68") : configmap "swift-ring-files" not found Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.888787 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" event={"ID":"3602d69c-6735-47da-b4ca-ef53f5e70a29","Type":"ContainerStarted","Data":"92fdcc564b517d5cad25b7a4c008a327351b91816ee5118f89b9f18fd0fbabea"} Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.891807 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3bb0-account-create-update-89p7q" event={"ID":"4de1550e-e5d5-4cba-bdc5-e56194b3446d","Type":"ContainerDied","Data":"d8ec61113e4b3e41a09f39ef958d15b560c133111da95ffda7dfe6d36d7c1eda"} Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.891872 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8ec61113e4b3e41a09f39ef958d15b560c133111da95ffda7dfe6d36d7c1eda" Jan 08 23:37:00 crc kubenswrapper[4945]: I0108 23:37:00.891973 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3bb0-account-create-update-89p7q" Jan 08 23:37:01 crc kubenswrapper[4945]: I0108 23:37:01.165644 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nqm45"] Jan 08 23:37:01 crc kubenswrapper[4945]: W0108 23:37:01.173212 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1155ea44_2cab_445e_a621_fbd85a2b31a9.slice/crio-0351019ea0765d2c0adb3da28e8ecad2e1443464e013c9e769a45ec4a95d28ef WatchSource:0}: Error finding container 0351019ea0765d2c0adb3da28e8ecad2e1443464e013c9e769a45ec4a95d28ef: Status 404 returned error can't find the container with id 0351019ea0765d2c0adb3da28e8ecad2e1443464e013c9e769a45ec4a95d28ef Jan 08 23:37:01 crc kubenswrapper[4945]: I0108 23:37:01.760170 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:01 crc kubenswrapper[4945]: E0108 23:37:01.760472 4945 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 08 23:37:01 crc kubenswrapper[4945]: E0108 23:37:01.760714 4945 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 08 23:37:01 crc kubenswrapper[4945]: E0108 23:37:01.760819 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift podName:12eb7cf8-4c67-4574-a65b-dc82c7285c68 nodeName:}" failed. No retries permitted until 2026-01-08 23:37:03.760783764 +0000 UTC m=+1294.071942740 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift") pod "swift-storage-0" (UID: "12eb7cf8-4c67-4574-a65b-dc82c7285c68") : configmap "swift-ring-files" not found Jan 08 23:37:01 crc kubenswrapper[4945]: I0108 23:37:01.903677 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nqm45" event={"ID":"1155ea44-2cab-445e-a621-fbd85a2b31a9","Type":"ContainerStarted","Data":"0351019ea0765d2c0adb3da28e8ecad2e1443464e013c9e769a45ec4a95d28ef"} Jan 08 23:37:01 crc kubenswrapper[4945]: I0108 23:37:01.980769 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-fw668"] Jan 08 23:37:01 crc kubenswrapper[4945]: I0108 23:37:01.982196 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fw668" Jan 08 23:37:01 crc kubenswrapper[4945]: I0108 23:37:01.994095 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fw668"] Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.011524 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d4b0-account-create-update-b5sj2"] Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.012544 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d4b0-account-create-update-b5sj2"] Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.012699 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d4b0-account-create-update-b5sj2" Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.024098 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.068613 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fscjn\" (UniqueName: \"kubernetes.io/projected/df214629-0f2c-4a4c-af2a-0f69e06a0899-kube-api-access-fscjn\") pod \"glance-db-create-fw668\" (UID: \"df214629-0f2c-4a4c-af2a-0f69e06a0899\") " pod="openstack/glance-db-create-fw668" Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.068689 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df214629-0f2c-4a4c-af2a-0f69e06a0899-operator-scripts\") pod \"glance-db-create-fw668\" (UID: \"df214629-0f2c-4a4c-af2a-0f69e06a0899\") " pod="openstack/glance-db-create-fw668" Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.171658 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4442a04f-05ad-4da6-8312-74cbce0ed2f1-operator-scripts\") pod \"glance-d4b0-account-create-update-b5sj2\" (UID: \"4442a04f-05ad-4da6-8312-74cbce0ed2f1\") " pod="openstack/glance-d4b0-account-create-update-b5sj2" Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.171735 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fscjn\" (UniqueName: \"kubernetes.io/projected/df214629-0f2c-4a4c-af2a-0f69e06a0899-kube-api-access-fscjn\") pod \"glance-db-create-fw668\" (UID: \"df214629-0f2c-4a4c-af2a-0f69e06a0899\") " pod="openstack/glance-db-create-fw668" Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.171764 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df214629-0f2c-4a4c-af2a-0f69e06a0899-operator-scripts\") pod \"glance-db-create-fw668\" (UID: \"df214629-0f2c-4a4c-af2a-0f69e06a0899\") " pod="openstack/glance-db-create-fw668" Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.171909 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g9gk\" (UniqueName: \"kubernetes.io/projected/4442a04f-05ad-4da6-8312-74cbce0ed2f1-kube-api-access-9g9gk\") pod \"glance-d4b0-account-create-update-b5sj2\" (UID: \"4442a04f-05ad-4da6-8312-74cbce0ed2f1\") " pod="openstack/glance-d4b0-account-create-update-b5sj2" Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.173425 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df214629-0f2c-4a4c-af2a-0f69e06a0899-operator-scripts\") pod \"glance-db-create-fw668\" (UID: \"df214629-0f2c-4a4c-af2a-0f69e06a0899\") " pod="openstack/glance-db-create-fw668" Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.196155 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fscjn\" (UniqueName: \"kubernetes.io/projected/df214629-0f2c-4a4c-af2a-0f69e06a0899-kube-api-access-fscjn\") pod \"glance-db-create-fw668\" (UID: \"df214629-0f2c-4a4c-af2a-0f69e06a0899\") " pod="openstack/glance-db-create-fw668" Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.273849 4945 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9g9gk\" (UniqueName: \"kubernetes.io/projected/4442a04f-05ad-4da6-8312-74cbce0ed2f1-kube-api-access-9g9gk\") pod \"glance-d4b0-account-create-update-b5sj2\" (UID: \"4442a04f-05ad-4da6-8312-74cbce0ed2f1\") " pod="openstack/glance-d4b0-account-create-update-b5sj2" Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.273979 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4442a04f-05ad-4da6-8312-74cbce0ed2f1-operator-scripts\") pod \"glance-d4b0-account-create-update-b5sj2\" (UID: \"4442a04f-05ad-4da6-8312-74cbce0ed2f1\") " pod="openstack/glance-d4b0-account-create-update-b5sj2" Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.274796 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4442a04f-05ad-4da6-8312-74cbce0ed2f1-operator-scripts\") pod \"glance-d4b0-account-create-update-b5sj2\" (UID: \"4442a04f-05ad-4da6-8312-74cbce0ed2f1\") " pod="openstack/glance-d4b0-account-create-update-b5sj2" Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.293419 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g9gk\" (UniqueName: \"kubernetes.io/projected/4442a04f-05ad-4da6-8312-74cbce0ed2f1-kube-api-access-9g9gk\") pod \"glance-d4b0-account-create-update-b5sj2\" (UID: \"4442a04f-05ad-4da6-8312-74cbce0ed2f1\") " pod="openstack/glance-d4b0-account-create-update-b5sj2" Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.301109 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fw668" Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.326801 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d4b0-account-create-update-b5sj2" Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.915106 4945 generic.go:334] "Generic (PLEG): container finished" podID="3602d69c-6735-47da-b4ca-ef53f5e70a29" containerID="6ed647e63ed5ced94c25ecc087c8142d70f440529129b6cab6669745d50b1e92" exitCode=0 Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.915208 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" event={"ID":"3602d69c-6735-47da-b4ca-ef53f5e70a29","Type":"ContainerDied","Data":"6ed647e63ed5ced94c25ecc087c8142d70f440529129b6cab6669745d50b1e92"} Jan 08 23:37:02 crc kubenswrapper[4945]: I0108 23:37:02.990674 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d4b0-account-create-update-b5sj2"] Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.127840 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fw668"] Jan 08 23:37:03 crc kubenswrapper[4945]: W0108 23:37:03.140476 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf214629_0f2c_4a4c_af2a_0f69e06a0899.slice/crio-16cda960a3c90c430e8d44184f3e2d2d9b34269c4d379163a1503785cae26f3f WatchSource:0}: Error finding container 16cda960a3c90c430e8d44184f3e2d2d9b34269c4d379163a1503785cae26f3f: Status 404 returned error can't find the container with id 16cda960a3c90c430e8d44184f3e2d2d9b34269c4d379163a1503785cae26f3f Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.543412 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-97fkk"] Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.548943 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-97fkk" Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.554717 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.557565 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-97fkk"] Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.740884 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jksk\" (UniqueName: \"kubernetes.io/projected/fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea-kube-api-access-5jksk\") pod \"root-account-create-update-97fkk\" (UID: \"fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea\") " pod="openstack/root-account-create-update-97fkk" Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.740973 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea-operator-scripts\") pod \"root-account-create-update-97fkk\" (UID: \"fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea\") " pod="openstack/root-account-create-update-97fkk" Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.843117 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jksk\" (UniqueName: \"kubernetes.io/projected/fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea-kube-api-access-5jksk\") pod \"root-account-create-update-97fkk\" (UID: \"fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea\") " pod="openstack/root-account-create-update-97fkk" Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.843176 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea-operator-scripts\") pod \"root-account-create-update-97fkk\" (UID: \"fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea\") " pod="openstack/root-account-create-update-97fkk" Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.843245 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:03 crc kubenswrapper[4945]: E0108 23:37:03.843427 4945 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 08 23:37:03 crc kubenswrapper[4945]: E0108 23:37:03.843444 4945 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 08 23:37:03 crc kubenswrapper[4945]: E0108 23:37:03.843492 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift podName:12eb7cf8-4c67-4574-a65b-dc82c7285c68 nodeName:}" failed. No retries permitted until 2026-01-08 23:37:07.843474297 +0000 UTC m=+1298.154633243 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift") pod "swift-storage-0" (UID: "12eb7cf8-4c67-4574-a65b-dc82c7285c68") : configmap "swift-ring-files" not found Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.844625 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea-operator-scripts\") pod \"root-account-create-update-97fkk\" (UID: \"fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea\") " pod="openstack/root-account-create-update-97fkk" Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.871732 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jksk\" (UniqueName: \"kubernetes.io/projected/fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea-kube-api-access-5jksk\") pod \"root-account-create-update-97fkk\" (UID: \"fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea\") " pod="openstack/root-account-create-update-97fkk" Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.930008 4945 generic.go:334] "Generic (PLEG): container finished" podID="df214629-0f2c-4a4c-af2a-0f69e06a0899" containerID="8a22cd90e0b9c42a59cf3f72b9b8ccd520ce04e8c67e4b3827083fb7ebad819d" exitCode=0 Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.930113 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fw668" event={"ID":"df214629-0f2c-4a4c-af2a-0f69e06a0899","Type":"ContainerDied","Data":"8a22cd90e0b9c42a59cf3f72b9b8ccd520ce04e8c67e4b3827083fb7ebad819d"} Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.930188 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fw668" event={"ID":"df214629-0f2c-4a4c-af2a-0f69e06a0899","Type":"ContainerStarted","Data":"16cda960a3c90c430e8d44184f3e2d2d9b34269c4d379163a1503785cae26f3f"} Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.933664 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" event={"ID":"3602d69c-6735-47da-b4ca-ef53f5e70a29","Type":"ContainerStarted","Data":"31f5128f3ee2a062618b95bb3d21ba814f7cc510009648d249d8e84cbe8e1094"} Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.934032 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.935763 4945 generic.go:334] "Generic (PLEG): container finished" podID="4442a04f-05ad-4da6-8312-74cbce0ed2f1" containerID="2614a5ec47ba81e99516320fdc35932c6d008ee50a312bfe0b63f7e25b3f97d5" exitCode=0 Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.935836 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d4b0-account-create-update-b5sj2" event={"ID":"4442a04f-05ad-4da6-8312-74cbce0ed2f1","Type":"ContainerDied","Data":"2614a5ec47ba81e99516320fdc35932c6d008ee50a312bfe0b63f7e25b3f97d5"} Jan 08 23:37:03 crc kubenswrapper[4945]: I0108 23:37:03.935874 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d4b0-account-create-update-b5sj2" event={"ID":"4442a04f-05ad-4da6-8312-74cbce0ed2f1","Type":"ContainerStarted","Data":"849cd45f3a00ba3c40d48ce17f11e65aea7b442d389d4529e03f9c2c5a3a729b"} Jan 08 23:37:04 crc kubenswrapper[4945]: I0108 23:37:04.000321 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" podStartSLOduration=6.000297529 
podStartE2EDuration="6.000297529s" podCreationTimestamp="2026-01-08 23:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:37:03.998958087 +0000 UTC m=+1294.310117053" watchObservedRunningTime="2026-01-08 23:37:04.000297529 +0000 UTC m=+1294.311456485" Jan 08 23:37:04 crc kubenswrapper[4945]: I0108 23:37:04.169460 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-97fkk" Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.293168 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d4b0-account-create-update-b5sj2" Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.308053 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fw668" Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.416091 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df214629-0f2c-4a4c-af2a-0f69e06a0899-operator-scripts\") pod \"df214629-0f2c-4a4c-af2a-0f69e06a0899\" (UID: \"df214629-0f2c-4a4c-af2a-0f69e06a0899\") " Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.416152 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4442a04f-05ad-4da6-8312-74cbce0ed2f1-operator-scripts\") pod \"4442a04f-05ad-4da6-8312-74cbce0ed2f1\" (UID: \"4442a04f-05ad-4da6-8312-74cbce0ed2f1\") " Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.416283 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fscjn\" (UniqueName: \"kubernetes.io/projected/df214629-0f2c-4a4c-af2a-0f69e06a0899-kube-api-access-fscjn\") pod \"df214629-0f2c-4a4c-af2a-0f69e06a0899\" (UID: \"df214629-0f2c-4a4c-af2a-0f69e06a0899\") " Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.416524 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g9gk\" (UniqueName: \"kubernetes.io/projected/4442a04f-05ad-4da6-8312-74cbce0ed2f1-kube-api-access-9g9gk\") pod \"4442a04f-05ad-4da6-8312-74cbce0ed2f1\" (UID: \"4442a04f-05ad-4da6-8312-74cbce0ed2f1\") " Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.417076 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df214629-0f2c-4a4c-af2a-0f69e06a0899-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df214629-0f2c-4a4c-af2a-0f69e06a0899" (UID: "df214629-0f2c-4a4c-af2a-0f69e06a0899"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.417081 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4442a04f-05ad-4da6-8312-74cbce0ed2f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4442a04f-05ad-4da6-8312-74cbce0ed2f1" (UID: "4442a04f-05ad-4da6-8312-74cbce0ed2f1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.418090 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df214629-0f2c-4a4c-af2a-0f69e06a0899-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.418136 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4442a04f-05ad-4da6-8312-74cbce0ed2f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.421914 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df214629-0f2c-4a4c-af2a-0f69e06a0899-kube-api-access-fscjn" (OuterVolumeSpecName: "kube-api-access-fscjn") pod "df214629-0f2c-4a4c-af2a-0f69e06a0899" (UID: "df214629-0f2c-4a4c-af2a-0f69e06a0899"). InnerVolumeSpecName "kube-api-access-fscjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.422892 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4442a04f-05ad-4da6-8312-74cbce0ed2f1-kube-api-access-9g9gk" (OuterVolumeSpecName: "kube-api-access-9g9gk") pod "4442a04f-05ad-4da6-8312-74cbce0ed2f1" (UID: "4442a04f-05ad-4da6-8312-74cbce0ed2f1"). InnerVolumeSpecName "kube-api-access-9g9gk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.426359 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fr87r" podUID="6c4f1760-d296-46a1-9ec8-cb64e543897c" containerName="ovn-controller" probeResult="failure" output=< Jan 08 23:37:07 crc kubenswrapper[4945]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 08 23:37:07 crc kubenswrapper[4945]: > Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.521650 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fscjn\" (UniqueName: \"kubernetes.io/projected/df214629-0f2c-4a4c-af2a-0f69e06a0899-kube-api-access-fscjn\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.522122 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g9gk\" (UniqueName: \"kubernetes.io/projected/4442a04f-05ad-4da6-8312-74cbce0ed2f1-kube-api-access-9g9gk\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:07 crc kubenswrapper[4945]: W0108 23:37:07.676837 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc2c1f34_1d86_4d6f_b8ab_6dfca65810ea.slice/crio-57f7b6f5bc783fc03843535ec3c15b0a6796dbeee4859d7d36707ac13320cd0a WatchSource:0}: Error finding container 57f7b6f5bc783fc03843535ec3c15b0a6796dbeee4859d7d36707ac13320cd0a: Status 404 returned error can't find the container with id 57f7b6f5bc783fc03843535ec3c15b0a6796dbeee4859d7d36707ac13320cd0a Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.683140 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-97fkk"] Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.929946 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift\") pod \"swift-storage-0\" (UID: 
\"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:07 crc kubenswrapper[4945]: E0108 23:37:07.930627 4945 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 08 23:37:07 crc kubenswrapper[4945]: E0108 23:37:07.930666 4945 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 08 23:37:07 crc kubenswrapper[4945]: E0108 23:37:07.930755 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift podName:12eb7cf8-4c67-4574-a65b-dc82c7285c68 nodeName:}" failed. No retries permitted until 2026-01-08 23:37:15.930726096 +0000 UTC m=+1306.241885072 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift") pod "swift-storage-0" (UID: "12eb7cf8-4c67-4574-a65b-dc82c7285c68") : configmap "swift-ring-files" not found Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.987946 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nqm45" event={"ID":"1155ea44-2cab-445e-a621-fbd85a2b31a9","Type":"ContainerStarted","Data":"032c7dd5d1e5a08219574c8bc61072aa124998bdab8472e816f29e879abaab35"} Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.992381 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-97fkk" event={"ID":"fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea","Type":"ContainerStarted","Data":"57f7b6f5bc783fc03843535ec3c15b0a6796dbeee4859d7d36707ac13320cd0a"} Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.995533 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d4b0-account-create-update-b5sj2" event={"ID":"4442a04f-05ad-4da6-8312-74cbce0ed2f1","Type":"ContainerDied","Data":"849cd45f3a00ba3c40d48ce17f11e65aea7b442d389d4529e03f9c2c5a3a729b"} Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.995603 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="849cd45f3a00ba3c40d48ce17f11e65aea7b442d389d4529e03f9c2c5a3a729b" Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.995613 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d4b0-account-create-update-b5sj2" Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.998025 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fw668" event={"ID":"df214629-0f2c-4a4c-af2a-0f69e06a0899","Type":"ContainerDied","Data":"16cda960a3c90c430e8d44184f3e2d2d9b34269c4d379163a1503785cae26f3f"} Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.998049 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16cda960a3c90c430e8d44184f3e2d2d9b34269c4d379163a1503785cae26f3f" Jan 08 23:37:07 crc kubenswrapper[4945]: I0108 23:37:07.998124 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fw668" Jan 08 23:37:08 crc kubenswrapper[4945]: I0108 23:37:08.017758 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-nqm45" podStartSLOduration=1.905547246 podStartE2EDuration="8.017737304s" podCreationTimestamp="2026-01-08 23:37:00 +0000 UTC" firstStartedPulling="2026-01-08 23:37:01.175722088 +0000 UTC m=+1291.486881034" lastFinishedPulling="2026-01-08 23:37:07.287912126 +0000 UTC m=+1297.599071092" observedRunningTime="2026-01-08 23:37:08.012453107 +0000 UTC m=+1298.323612063" watchObservedRunningTime="2026-01-08 23:37:08.017737304 +0000 UTC m=+1298.328896260" Jan 08 23:37:09 crc kubenswrapper[4945]: I0108 23:37:09.023163 4945 generic.go:334] "Generic (PLEG): container finished" podID="fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea" containerID="172d50744242a84f78b56e1d2950591ec8b0ca4bdf178eafbed1aa93b04df962" exitCode=0 Jan 08 23:37:09 crc kubenswrapper[4945]: I0108 23:37:09.023293 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-97fkk" event={"ID":"fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea","Type":"ContainerDied","Data":"172d50744242a84f78b56e1d2950591ec8b0ca4bdf178eafbed1aa93b04df962"} Jan 08 23:37:09 crc kubenswrapper[4945]: I0108 23:37:09.260231 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:37:09 crc kubenswrapper[4945]: I0108 23:37:09.332123 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bmzvc"] Jan 08 23:37:09 crc kubenswrapper[4945]: I0108 23:37:09.332400 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-bmzvc" podUID="b534cf9b-cc04-4939-a856-919b53f7e602" containerName="dnsmasq-dns" containerID="cri-o://e15ebb2b171a33dd0d947c766635f2e061a76a7a756a13bfb0763953180a7688" gracePeriod=10 Jan 08 23:37:09 crc kubenswrapper[4945]: I0108 23:37:09.802589 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:37:09 crc kubenswrapper[4945]: I0108 23:37:09.967133 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-ovsdbserver-nb\") pod \"b534cf9b-cc04-4939-a856-919b53f7e602\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " Jan 08 23:37:09 crc kubenswrapper[4945]: I0108 23:37:09.968527 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-ovsdbserver-sb\") pod \"b534cf9b-cc04-4939-a856-919b53f7e602\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " Jan 08 23:37:09 crc kubenswrapper[4945]: I0108 23:37:09.968694 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjl7s\" (UniqueName: \"kubernetes.io/projected/b534cf9b-cc04-4939-a856-919b53f7e602-kube-api-access-jjl7s\") pod \"b534cf9b-cc04-4939-a856-919b53f7e602\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " Jan 08 23:37:09 crc kubenswrapper[4945]: I0108 23:37:09.968736 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-config\") pod \"b534cf9b-cc04-4939-a856-919b53f7e602\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " Jan 08 23:37:09 crc kubenswrapper[4945]: I0108 23:37:09.968762 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-dns-svc\") pod \"b534cf9b-cc04-4939-a856-919b53f7e602\" (UID: \"b534cf9b-cc04-4939-a856-919b53f7e602\") " Jan 08 23:37:09 crc kubenswrapper[4945]: I0108 23:37:09.980213 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b534cf9b-cc04-4939-a856-919b53f7e602-kube-api-access-jjl7s" (OuterVolumeSpecName: "kube-api-access-jjl7s") pod "b534cf9b-cc04-4939-a856-919b53f7e602" (UID: "b534cf9b-cc04-4939-a856-919b53f7e602"). InnerVolumeSpecName "kube-api-access-jjl7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.025016 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b534cf9b-cc04-4939-a856-919b53f7e602" (UID: "b534cf9b-cc04-4939-a856-919b53f7e602"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.034672 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-config" (OuterVolumeSpecName: "config") pod "b534cf9b-cc04-4939-a856-919b53f7e602" (UID: "b534cf9b-cc04-4939-a856-919b53f7e602"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.035112 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b534cf9b-cc04-4939-a856-919b53f7e602" (UID: "b534cf9b-cc04-4939-a856-919b53f7e602"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.036436 4945 generic.go:334] "Generic (PLEG): container finished" podID="b534cf9b-cc04-4939-a856-919b53f7e602" containerID="e15ebb2b171a33dd0d947c766635f2e061a76a7a756a13bfb0763953180a7688" exitCode=0 Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.036576 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-bmzvc" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.038586 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b534cf9b-cc04-4939-a856-919b53f7e602" (UID: "b534cf9b-cc04-4939-a856-919b53f7e602"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.058436 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bmzvc" event={"ID":"b534cf9b-cc04-4939-a856-919b53f7e602","Type":"ContainerDied","Data":"e15ebb2b171a33dd0d947c766635f2e061a76a7a756a13bfb0763953180a7688"} Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.058483 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-bmzvc" event={"ID":"b534cf9b-cc04-4939-a856-919b53f7e602","Type":"ContainerDied","Data":"cc155dbda1956a4a0a47d89e8e8666dae0da29bae860a4d8fcab1522d073acfa"} Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.058503 4945 scope.go:117] "RemoveContainer" containerID="e15ebb2b171a33dd0d947c766635f2e061a76a7a756a13bfb0763953180a7688" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.072775 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjl7s\" (UniqueName: \"kubernetes.io/projected/b534cf9b-cc04-4939-a856-919b53f7e602-kube-api-access-jjl7s\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.072813 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.072828 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.072840 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.072850 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b534cf9b-cc04-4939-a856-919b53f7e602-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.084610 4945 scope.go:117] "RemoveContainer" containerID="0a67ee05f259656ae3163822b8c206b855f334f2077d851c074fadf51ed96494" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.111679 4945 scope.go:117] "RemoveContainer" containerID="e15ebb2b171a33dd0d947c766635f2e061a76a7a756a13bfb0763953180a7688" Jan 08 23:37:10 crc kubenswrapper[4945]: E0108 23:37:10.112722 4945 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"e15ebb2b171a33dd0d947c766635f2e061a76a7a756a13bfb0763953180a7688\": container with ID starting with e15ebb2b171a33dd0d947c766635f2e061a76a7a756a13bfb0763953180a7688 not found: ID does not exist" containerID="e15ebb2b171a33dd0d947c766635f2e061a76a7a756a13bfb0763953180a7688" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.112755 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e15ebb2b171a33dd0d947c766635f2e061a76a7a756a13bfb0763953180a7688"} err="failed to get container status \"e15ebb2b171a33dd0d947c766635f2e061a76a7a756a13bfb0763953180a7688\": rpc error: code = NotFound desc = could not find container \"e15ebb2b171a33dd0d947c766635f2e061a76a7a756a13bfb0763953180a7688\": container with ID starting with e15ebb2b171a33dd0d947c766635f2e061a76a7a756a13bfb0763953180a7688 not found: ID does not exist" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.112780 4945 scope.go:117] "RemoveContainer" containerID="0a67ee05f259656ae3163822b8c206b855f334f2077d851c074fadf51ed96494" Jan 08 23:37:10 crc kubenswrapper[4945]: E0108 23:37:10.113071 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a67ee05f259656ae3163822b8c206b855f334f2077d851c074fadf51ed96494\": container with ID starting with 0a67ee05f259656ae3163822b8c206b855f334f2077d851c074fadf51ed96494 not found: ID does not exist" containerID="0a67ee05f259656ae3163822b8c206b855f334f2077d851c074fadf51ed96494" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.113097 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a67ee05f259656ae3163822b8c206b855f334f2077d851c074fadf51ed96494"} err="failed to get container status \"0a67ee05f259656ae3163822b8c206b855f334f2077d851c074fadf51ed96494\": rpc error: code = NotFound desc = could not find container \"0a67ee05f259656ae3163822b8c206b855f334f2077d851c074fadf51ed96494\": container with ID starting with 0a67ee05f259656ae3163822b8c206b855f334f2077d851c074fadf51ed96494 not found: ID does not exist" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.365633 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-97fkk" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.380380 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bmzvc"] Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.388211 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-bmzvc"] Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.484564 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jksk\" (UniqueName: \"kubernetes.io/projected/fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea-kube-api-access-5jksk\") pod \"fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea\" (UID: \"fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea\") " Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.484689 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea-operator-scripts\") pod \"fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea\" (UID: \"fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea\") " Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.485605 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea" (UID: "fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.490335 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea-kube-api-access-5jksk" (OuterVolumeSpecName: "kube-api-access-5jksk") pod "fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea" (UID: "fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea"). InnerVolumeSpecName "kube-api-access-5jksk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.587623 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:10 crc kubenswrapper[4945]: I0108 23:37:10.587667 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jksk\" (UniqueName: \"kubernetes.io/projected/fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea-kube-api-access-5jksk\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:11 crc kubenswrapper[4945]: I0108 23:37:11.050204 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-97fkk" event={"ID":"fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea","Type":"ContainerDied","Data":"57f7b6f5bc783fc03843535ec3c15b0a6796dbeee4859d7d36707ac13320cd0a"} Jan 08 23:37:11 crc kubenswrapper[4945]: I0108 23:37:11.050293 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57f7b6f5bc783fc03843535ec3c15b0a6796dbeee4859d7d36707ac13320cd0a" Jan 08 23:37:11 crc kubenswrapper[4945]: I0108 23:37:11.050241 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-97fkk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.017434 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b534cf9b-cc04-4939-a856-919b53f7e602" path="/var/lib/kubelet/pods/b534cf9b-cc04-4939-a856-919b53f7e602/volumes" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.215047 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-94g76"] Jan 08 23:37:12 crc kubenswrapper[4945]: E0108 23:37:12.215532 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea" containerName="mariadb-account-create-update" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.215548 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea" containerName="mariadb-account-create-update" Jan 08 23:37:12 crc kubenswrapper[4945]: E0108 23:37:12.215580 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b534cf9b-cc04-4939-a856-919b53f7e602" containerName="init" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.215589 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b534cf9b-cc04-4939-a856-919b53f7e602" containerName="init" Jan 08 23:37:12 crc kubenswrapper[4945]: E0108 23:37:12.215598 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df214629-0f2c-4a4c-af2a-0f69e06a0899" containerName="mariadb-database-create" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.215606 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="df214629-0f2c-4a4c-af2a-0f69e06a0899" containerName="mariadb-database-create" Jan 08 23:37:12 crc kubenswrapper[4945]: E0108 23:37:12.215614 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4442a04f-05ad-4da6-8312-74cbce0ed2f1" containerName="mariadb-account-create-update" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.215621 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="4442a04f-05ad-4da6-8312-74cbce0ed2f1" containerName="mariadb-account-create-update" Jan 08 23:37:12 crc kubenswrapper[4945]: E0108 23:37:12.215642 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b534cf9b-cc04-4939-a856-919b53f7e602" containerName="dnsmasq-dns" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.215649 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b534cf9b-cc04-4939-a856-919b53f7e602" containerName="dnsmasq-dns" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.215819 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="4442a04f-05ad-4da6-8312-74cbce0ed2f1" containerName="mariadb-account-create-update" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.215836 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="df214629-0f2c-4a4c-af2a-0f69e06a0899" containerName="mariadb-database-create" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.215855 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="b534cf9b-cc04-4939-a856-919b53f7e602" containerName="dnsmasq-dns" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.215863 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea" containerName="mariadb-account-create-update" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.216754 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-94g76" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.218975 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zpdb9" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.220469 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.221485 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-combined-ca-bundle\") pod \"glance-db-sync-94g76\" (UID: \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\") " pod="openstack/glance-db-sync-94g76" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.221596 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fms5\" (UniqueName: \"kubernetes.io/projected/91d2a120-b7c1-44e5-a3c0-6720acab34a7-kube-api-access-9fms5\") pod \"glance-db-sync-94g76\" (UID: \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\") " pod="openstack/glance-db-sync-94g76" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.221795 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-config-data\") pod \"glance-db-sync-94g76\" (UID: \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\") " pod="openstack/glance-db-sync-94g76" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.221867 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-db-sync-config-data\") pod \"glance-db-sync-94g76\" (UID: \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\") " pod="openstack/glance-db-sync-94g76" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.231207 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-94g76"] Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.323978 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-combined-ca-bundle\") pod \"glance-db-sync-94g76\" (UID: \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\") " pod="openstack/glance-db-sync-94g76" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.324062 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fms5\" (UniqueName: \"kubernetes.io/projected/91d2a120-b7c1-44e5-a3c0-6720acab34a7-kube-api-access-9fms5\") pod \"glance-db-sync-94g76\" (UID: \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\") " pod="openstack/glance-db-sync-94g76" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.324144 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-config-data\") pod \"glance-db-sync-94g76\" (UID: \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\") " pod="openstack/glance-db-sync-94g76" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.324180 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-db-sync-config-data\") pod 
\"glance-db-sync-94g76\" (UID: \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\") " pod="openstack/glance-db-sync-94g76" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.331109 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-db-sync-config-data\") pod \"glance-db-sync-94g76\" (UID: \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\") " pod="openstack/glance-db-sync-94g76" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.331212 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-config-data\") pod \"glance-db-sync-94g76\" (UID: \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\") " pod="openstack/glance-db-sync-94g76" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.331239 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-combined-ca-bundle\") pod \"glance-db-sync-94g76\" (UID: \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\") " pod="openstack/glance-db-sync-94g76" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.348202 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fms5\" (UniqueName: \"kubernetes.io/projected/91d2a120-b7c1-44e5-a3c0-6720acab34a7-kube-api-access-9fms5\") pod \"glance-db-sync-94g76\" (UID: \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\") " pod="openstack/glance-db-sync-94g76" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.421837 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fr87r" podUID="6c4f1760-d296-46a1-9ec8-cb64e543897c" containerName="ovn-controller" probeResult="failure" output=< Jan 08 23:37:12 crc kubenswrapper[4945]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 08 23:37:12 crc kubenswrapper[4945]: > Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.501847 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.507734 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-hfhkg" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.549781 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-94g76" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.735338 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-fr87r-config-llqnk"] Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.737401 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.742458 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.758573 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fr87r-config-llqnk"] Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.835680 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfd2f397-5d5e-455d-b03d-643174bd4460-scripts\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.835739 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-log-ovn\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.835768 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-run\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.835801 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2pld\" (UniqueName: \"kubernetes.io/projected/bfd2f397-5d5e-455d-b03d-643174bd4460-kube-api-access-v2pld\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.835966 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-run-ovn\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.836024 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bfd2f397-5d5e-455d-b03d-643174bd4460-additional-scripts\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.938897 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-run\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.939147 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2pld\" (UniqueName: 
\"kubernetes.io/projected/bfd2f397-5d5e-455d-b03d-643174bd4460-kube-api-access-v2pld\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.939356 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-run-ovn\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.939427 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bfd2f397-5d5e-455d-b03d-643174bd4460-additional-scripts\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.939561 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-run-ovn\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.939668 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfd2f397-5d5e-455d-b03d-643174bd4460-scripts\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.939677 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-run\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.939712 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-log-ovn\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.939851 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-log-ovn\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.942978 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfd2f397-5d5e-455d-b03d-643174bd4460-scripts\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.943369 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bfd2f397-5d5e-455d-b03d-643174bd4460-additional-scripts\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.971576 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-94g76"] Jan 08 23:37:12 crc kubenswrapper[4945]: I0108 23:37:12.972398 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2pld\" (UniqueName: \"kubernetes.io/projected/bfd2f397-5d5e-455d-b03d-643174bd4460-kube-api-access-v2pld\") pod \"ovn-controller-fr87r-config-llqnk\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:12 crc kubenswrapper[4945]: W0108 23:37:12.986344 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91d2a120_b7c1_44e5_a3c0_6720acab34a7.slice/crio-862f0bf334d578b8cf40849d232b2c83ec1f827356ff664ac4e64939d0c8f673 WatchSource:0}: Error finding container 862f0bf334d578b8cf40849d232b2c83ec1f827356ff664ac4e64939d0c8f673: Status 404 returned error can't find the container with id 862f0bf334d578b8cf40849d232b2c83ec1f827356ff664ac4e64939d0c8f673 Jan 08 23:37:13 crc kubenswrapper[4945]: I0108 23:37:13.074017 4945 generic.go:334] "Generic (PLEG): container finished" podID="e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" containerID="c81f4cba79646c4284e071dbc05ea1b22c10137dd94b3016f75e77dd3cfb0060" exitCode=0 Jan 08 23:37:13 crc kubenswrapper[4945]: I0108 23:37:13.074138 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9","Type":"ContainerDied","Data":"c81f4cba79646c4284e071dbc05ea1b22c10137dd94b3016f75e77dd3cfb0060"} Jan 08 23:37:13 crc kubenswrapper[4945]: I0108 23:37:13.076053 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-94g76" event={"ID":"91d2a120-b7c1-44e5-a3c0-6720acab34a7","Type":"ContainerStarted","Data":"862f0bf334d578b8cf40849d232b2c83ec1f827356ff664ac4e64939d0c8f673"} Jan 08 23:37:13 crc kubenswrapper[4945]: I0108 23:37:13.079282 4945 generic.go:334] "Generic (PLEG): container finished" podID="71eb40d2-e481-445d-99ea-948b918b862d" containerID="b84caeef5cc5ad10edc0c8450a4bb95aea44a01ae1bdb0470e03aacfe261b00d" exitCode=0 Jan 08 23:37:13 crc kubenswrapper[4945]: I0108 23:37:13.079370 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71eb40d2-e481-445d-99ea-948b918b862d","Type":"ContainerDied","Data":"b84caeef5cc5ad10edc0c8450a4bb95aea44a01ae1bdb0470e03aacfe261b00d"} Jan 08 23:37:13 crc kubenswrapper[4945]: I0108 23:37:13.120260 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:13 crc kubenswrapper[4945]: I0108 23:37:13.578849 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:37:13 crc kubenswrapper[4945]: I0108 23:37:13.579705 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:37:13 crc kubenswrapper[4945]: I0108 23:37:13.609358 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fr87r-config-llqnk"] Jan 08 23:37:13 crc kubenswrapper[4945]: W0108 23:37:13.621681 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfd2f397_5d5e_455d_b03d_643174bd4460.slice/crio-d1053b3d154fe0afb01465872d4cfcb9761183df39548dd080b029295d4aaf84 WatchSource:0}: Error finding container d1053b3d154fe0afb01465872d4cfcb9761183df39548dd080b029295d4aaf84: Status 404 returned error can't find the container with id d1053b3d154fe0afb01465872d4cfcb9761183df39548dd080b029295d4aaf84 Jan 08 23:37:14 crc kubenswrapper[4945]: I0108 23:37:14.090734 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9","Type":"ContainerStarted","Data":"0e4014df7512e89b5d332f842e50840d513c77310ddfa321933cdc5b307230c9"} Jan 08 23:37:14 crc kubenswrapper[4945]: I0108 23:37:14.091525 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:37:14 crc kubenswrapper[4945]: I0108 23:37:14.093348 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fr87r-config-llqnk" event={"ID":"bfd2f397-5d5e-455d-b03d-643174bd4460","Type":"ContainerStarted","Data":"6ddbf977bb97c0edecd692a6bbf6552971918da00054953461c2a54d0288f64a"} Jan 08 23:37:14 crc kubenswrapper[4945]: I0108 23:37:14.093396 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fr87r-config-llqnk" event={"ID":"bfd2f397-5d5e-455d-b03d-643174bd4460","Type":"ContainerStarted","Data":"d1053b3d154fe0afb01465872d4cfcb9761183df39548dd080b029295d4aaf84"} Jan 08 23:37:14 crc kubenswrapper[4945]: I0108 23:37:14.096013 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71eb40d2-e481-445d-99ea-948b918b862d","Type":"ContainerStarted","Data":"3c8e62ad7bb3a5c1b692e76747e535c86452a618975faa4a7349a1cd8e6445b4"} Jan 08 23:37:14 crc kubenswrapper[4945]: I0108 23:37:14.096284 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 08 23:37:14 crc kubenswrapper[4945]: I0108 23:37:14.148335 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371963.706469 podStartE2EDuration="1m13.148307943s" podCreationTimestamp="2026-01-08 23:36:01 +0000 UTC" firstStartedPulling="2026-01-08 23:36:04.025653532 +0000 UTC m=+1234.336812478" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-08 23:37:14.11690355 +0000 UTC m=+1304.428062506" watchObservedRunningTime="2026-01-08 23:37:14.148307943 +0000 UTC m=+1304.459466879" Jan 08 23:37:14 crc kubenswrapper[4945]: I0108 23:37:14.153260 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.970382624 podStartE2EDuration="1m12.153253252s" podCreationTimestamp="2026-01-08 23:36:02 +0000 UTC" firstStartedPulling="2026-01-08 23:36:04.302894983 +0000 UTC m=+1234.614053929" lastFinishedPulling="2026-01-08 23:36:37.485765611 +0000 UTC m=+1267.796924557" observedRunningTime="2026-01-08 23:37:14.144333708 +0000 UTC m=+1304.455492664" watchObservedRunningTime="2026-01-08 23:37:14.153253252 +0000 UTC m=+1304.464412198" Jan 08 23:37:14 crc kubenswrapper[4945]: I0108 23:37:14.212895 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-fr87r-config-llqnk" podStartSLOduration=2.212867562 podStartE2EDuration="2.212867562s" podCreationTimestamp="2026-01-08 23:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:37:14.164931172 +0000 UTC m=+1304.476090118" watchObservedRunningTime="2026-01-08 23:37:14.212867562 +0000 UTC m=+1304.524026508" Jan 08 23:37:14 crc kubenswrapper[4945]: I0108 23:37:14.756015 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-bmzvc" podUID="b534cf9b-cc04-4939-a856-919b53f7e602" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Jan 08 23:37:15 crc kubenswrapper[4945]: I0108 23:37:15.105835 4945 generic.go:334] "Generic (PLEG): container finished" podID="bfd2f397-5d5e-455d-b03d-643174bd4460" containerID="6ddbf977bb97c0edecd692a6bbf6552971918da00054953461c2a54d0288f64a" exitCode=0 Jan 08 23:37:15 crc kubenswrapper[4945]: I0108 23:37:15.105899 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fr87r-config-llqnk" event={"ID":"bfd2f397-5d5e-455d-b03d-643174bd4460","Type":"ContainerDied","Data":"6ddbf977bb97c0edecd692a6bbf6552971918da00054953461c2a54d0288f64a"} Jan 08 23:37:15 crc kubenswrapper[4945]: I0108 23:37:15.432761 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-97fkk"] Jan 08 23:37:15 crc kubenswrapper[4945]: I0108 23:37:15.444246 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-97fkk"] Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.010199 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea" path="/var/lib/kubelet/pods/fc2c1f34-1d86-4d6f-b8ab-6dfca65810ea/volumes" Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.019846 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:16 crc kubenswrapper[4945]: E0108 23:37:16.020522 4945 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 08 23:37:16 crc kubenswrapper[4945]: E0108 23:37:16.020632 4945 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap 
"swift-ring-files" not found Jan 08 23:37:16 crc kubenswrapper[4945]: E0108 23:37:16.020763 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift podName:12eb7cf8-4c67-4574-a65b-dc82c7285c68 nodeName:}" failed. No retries permitted until 2026-01-08 23:37:32.020742811 +0000 UTC m=+1322.331901767 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift") pod "swift-storage-0" (UID: "12eb7cf8-4c67-4574-a65b-dc82c7285c68") : configmap "swift-ring-files" not found Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.481114 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.630378 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-log-ovn\") pod \"bfd2f397-5d5e-455d-b03d-643174bd4460\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.630518 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfd2f397-5d5e-455d-b03d-643174bd4460-scripts\") pod \"bfd2f397-5d5e-455d-b03d-643174bd4460\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.630643 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bfd2f397-5d5e-455d-b03d-643174bd4460-additional-scripts\") pod \"bfd2f397-5d5e-455d-b03d-643174bd4460\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.630518 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "bfd2f397-5d5e-455d-b03d-643174bd4460" (UID: "bfd2f397-5d5e-455d-b03d-643174bd4460"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.630795 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-run\") pod \"bfd2f397-5d5e-455d-b03d-643174bd4460\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.630886 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-run" (OuterVolumeSpecName: "var-run") pod "bfd2f397-5d5e-455d-b03d-643174bd4460" (UID: "bfd2f397-5d5e-455d-b03d-643174bd4460"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.630922 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2pld\" (UniqueName: \"kubernetes.io/projected/bfd2f397-5d5e-455d-b03d-643174bd4460-kube-api-access-v2pld\") pod \"bfd2f397-5d5e-455d-b03d-643174bd4460\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.630967 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-run-ovn\") pod \"bfd2f397-5d5e-455d-b03d-643174bd4460\" (UID: \"bfd2f397-5d5e-455d-b03d-643174bd4460\") " Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.631012 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "bfd2f397-5d5e-455d-b03d-643174bd4460" (UID: "bfd2f397-5d5e-455d-b03d-643174bd4460"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.631333 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfd2f397-5d5e-455d-b03d-643174bd4460-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "bfd2f397-5d5e-455d-b03d-643174bd4460" (UID: "bfd2f397-5d5e-455d-b03d-643174bd4460"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.631601 4945 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-run\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.631626 4945 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.631647 4945 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfd2f397-5d5e-455d-b03d-643174bd4460-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.631659 4945 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bfd2f397-5d5e-455d-b03d-643174bd4460-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.631622 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfd2f397-5d5e-455d-b03d-643174bd4460-scripts" (OuterVolumeSpecName: "scripts") pod "bfd2f397-5d5e-455d-b03d-643174bd4460" (UID: "bfd2f397-5d5e-455d-b03d-643174bd4460"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.637269 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfd2f397-5d5e-455d-b03d-643174bd4460-kube-api-access-v2pld" (OuterVolumeSpecName: "kube-api-access-v2pld") pod "bfd2f397-5d5e-455d-b03d-643174bd4460" (UID: "bfd2f397-5d5e-455d-b03d-643174bd4460"). InnerVolumeSpecName "kube-api-access-v2pld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.733297 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2pld\" (UniqueName: \"kubernetes.io/projected/bfd2f397-5d5e-455d-b03d-643174bd4460-kube-api-access-v2pld\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:16 crc kubenswrapper[4945]: I0108 23:37:16.733326 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfd2f397-5d5e-455d-b03d-643174bd4460-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.122451 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fr87r-config-llqnk" event={"ID":"bfd2f397-5d5e-455d-b03d-643174bd4460","Type":"ContainerDied","Data":"d1053b3d154fe0afb01465872d4cfcb9761183df39548dd080b029295d4aaf84"} Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.122493 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1053b3d154fe0afb01465872d4cfcb9761183df39548dd080b029295d4aaf84" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.122551 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fr87r-config-llqnk" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.129260 4945 generic.go:334] "Generic (PLEG): container finished" podID="1155ea44-2cab-445e-a621-fbd85a2b31a9" containerID="032c7dd5d1e5a08219574c8bc61072aa124998bdab8472e816f29e879abaab35" exitCode=0 Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.129349 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nqm45" event={"ID":"1155ea44-2cab-445e-a621-fbd85a2b31a9","Type":"ContainerDied","Data":"032c7dd5d1e5a08219574c8bc61072aa124998bdab8472e816f29e879abaab35"} Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.437792 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-fr87r" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.597677 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-fr87r-config-llqnk"] Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.609529 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-fr87r-config-llqnk"] Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.683390 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-fr87r-config-rjz97"] Jan 08 23:37:17 crc kubenswrapper[4945]: E0108 23:37:17.686930 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfd2f397-5d5e-455d-b03d-643174bd4460" containerName="ovn-config" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.686964 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfd2f397-5d5e-455d-b03d-643174bd4460" containerName="ovn-config" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.687321 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfd2f397-5d5e-455d-b03d-643174bd4460" containerName="ovn-config" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.688485 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.694274 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.714808 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fr87r-config-rjz97"] Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.753461 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-run-ovn\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.753628 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a22b7e6a-80f3-478b-9655-450397f76bd7-additional-scripts\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.753664 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-run\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.753696 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a22b7e6a-80f3-478b-9655-450397f76bd7-scripts\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.753728 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-log-ovn\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.753824 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cnnm\" (UniqueName: \"kubernetes.io/projected/a22b7e6a-80f3-478b-9655-450397f76bd7-kube-api-access-9cnnm\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.855216 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-run-ovn\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.855466 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a22b7e6a-80f3-478b-9655-450397f76bd7-additional-scripts\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.855595 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-run\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.855623 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a22b7e6a-80f3-478b-9655-450397f76bd7-scripts\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.855685 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-log-ovn\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.855796 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnnm\" (UniqueName: \"kubernetes.io/projected/a22b7e6a-80f3-478b-9655-450397f76bd7-kube-api-access-9cnnm\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.857011 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-run-ovn\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.857074 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-log-ovn\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.857300 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-run\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.857747 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a22b7e6a-80f3-478b-9655-450397f76bd7-additional-scripts\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.858890 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a22b7e6a-80f3-478b-9655-450397f76bd7-scripts\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:17 crc kubenswrapper[4945]: I0108 23:37:17.881643 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cnnm\" (UniqueName: \"kubernetes.io/projected/a22b7e6a-80f3-478b-9655-450397f76bd7-kube-api-access-9cnnm\") pod \"ovn-controller-fr87r-config-rjz97\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:18 crc kubenswrapper[4945]: I0108 23:37:18.013306 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfd2f397-5d5e-455d-b03d-643174bd4460" path="/var/lib/kubelet/pods/bfd2f397-5d5e-455d-b03d-643174bd4460/volumes" Jan 08 23:37:18 crc kubenswrapper[4945]: I0108 23:37:18.024327 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:19 crc kubenswrapper[4945]: I0108 23:37:19.253556 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fr87r-config-rjz97"] Jan 08 23:37:20 crc kubenswrapper[4945]: I0108 23:37:20.433763 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2hqz2"] Jan 08 23:37:20 crc kubenswrapper[4945]: I0108 23:37:20.434808 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2hqz2" Jan 08 23:37:20 crc kubenswrapper[4945]: I0108 23:37:20.438720 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 08 23:37:20 crc kubenswrapper[4945]: I0108 23:37:20.452538 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2hqz2"] Jan 08 23:37:20 crc kubenswrapper[4945]: I0108 23:37:20.517551 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzbjb\" (UniqueName: \"kubernetes.io/projected/69fe9da9-1222-42c9-aefc-b051e72f81f7-kube-api-access-rzbjb\") pod \"root-account-create-update-2hqz2\" (UID: \"69fe9da9-1222-42c9-aefc-b051e72f81f7\") " pod="openstack/root-account-create-update-2hqz2" Jan 08 23:37:20 crc kubenswrapper[4945]: I0108 23:37:20.517916 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69fe9da9-1222-42c9-aefc-b051e72f81f7-operator-scripts\") pod \"root-account-create-update-2hqz2\" (UID: \"69fe9da9-1222-42c9-aefc-b051e72f81f7\") " pod="openstack/root-account-create-update-2hqz2" Jan 08 23:37:20 crc kubenswrapper[4945]: I0108 23:37:20.620522 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69fe9da9-1222-42c9-aefc-b051e72f81f7-operator-scripts\") pod \"root-account-create-update-2hqz2\" (UID: \"69fe9da9-1222-42c9-aefc-b051e72f81f7\") " pod="openstack/root-account-create-update-2hqz2" Jan 08 23:37:20 crc kubenswrapper[4945]: I0108 23:37:20.620695 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzbjb\" (UniqueName: \"kubernetes.io/projected/69fe9da9-1222-42c9-aefc-b051e72f81f7-kube-api-access-rzbjb\") pod \"root-account-create-update-2hqz2\" (UID: \"69fe9da9-1222-42c9-aefc-b051e72f81f7\") " 
pod="openstack/root-account-create-update-2hqz2" Jan 08 23:37:20 crc kubenswrapper[4945]: I0108 23:37:20.624249 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69fe9da9-1222-42c9-aefc-b051e72f81f7-operator-scripts\") pod \"root-account-create-update-2hqz2\" (UID: \"69fe9da9-1222-42c9-aefc-b051e72f81f7\") " pod="openstack/root-account-create-update-2hqz2" Jan 08 23:37:20 crc kubenswrapper[4945]: I0108 23:37:20.644029 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzbjb\" (UniqueName: \"kubernetes.io/projected/69fe9da9-1222-42c9-aefc-b051e72f81f7-kube-api-access-rzbjb\") pod \"root-account-create-update-2hqz2\" (UID: \"69fe9da9-1222-42c9-aefc-b051e72f81f7\") " pod="openstack/root-account-create-update-2hqz2" Jan 08 23:37:20 crc kubenswrapper[4945]: I0108 23:37:20.770025 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2hqz2" Jan 08 23:37:23 crc kubenswrapper[4945]: I0108 23:37:23.053427 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 08 23:37:23 crc kubenswrapper[4945]: I0108 23:37:23.574032 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.370554 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-f2zr9"] Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.372617 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f2zr9" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.410297 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-f2zr9"] Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.442505 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/008a988d-6834-4bbe-b9ec-333cfe1c534c-operator-scripts\") pod \"cinder-db-create-f2zr9\" (UID: \"008a988d-6834-4bbe-b9ec-333cfe1c534c\") " pod="openstack/cinder-db-create-f2zr9" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.442842 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb2nw\" (UniqueName: \"kubernetes.io/projected/008a988d-6834-4bbe-b9ec-333cfe1c534c-kube-api-access-qb2nw\") pod \"cinder-db-create-f2zr9\" (UID: \"008a988d-6834-4bbe-b9ec-333cfe1c534c\") " pod="openstack/cinder-db-create-f2zr9" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.507862 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ee25-account-create-update-fhpch"] Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.509095 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee25-account-create-update-fhpch" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.511938 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.523764 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-5nhlb"] Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.533345 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-5nhlb" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.533457 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee25-account-create-update-fhpch"] Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.541169 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5nhlb"] Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.547357 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb2nw\" (UniqueName: \"kubernetes.io/projected/008a988d-6834-4bbe-b9ec-333cfe1c534c-kube-api-access-qb2nw\") pod \"cinder-db-create-f2zr9\" (UID: \"008a988d-6834-4bbe-b9ec-333cfe1c534c\") " pod="openstack/cinder-db-create-f2zr9" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.547433 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/008a988d-6834-4bbe-b9ec-333cfe1c534c-operator-scripts\") pod \"cinder-db-create-f2zr9\" (UID: \"008a988d-6834-4bbe-b9ec-333cfe1c534c\") " pod="openstack/cinder-db-create-f2zr9" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.548414 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/008a988d-6834-4bbe-b9ec-333cfe1c534c-operator-scripts\") pod \"cinder-db-create-f2zr9\" (UID: \"008a988d-6834-4bbe-b9ec-333cfe1c534c\") " pod="openstack/cinder-db-create-f2zr9" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.583849 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb2nw\" (UniqueName: \"kubernetes.io/projected/008a988d-6834-4bbe-b9ec-333cfe1c534c-kube-api-access-qb2nw\") pod \"cinder-db-create-f2zr9\" (UID: \"008a988d-6834-4bbe-b9ec-333cfe1c534c\") " pod="openstack/cinder-db-create-f2zr9" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.649101 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e56604b-5a70-4403-9e9e-4842d685fadd-operator-scripts\") pod \"cinder-ee25-account-create-update-fhpch\" (UID: \"1e56604b-5a70-4403-9e9e-4842d685fadd\") " pod="openstack/cinder-ee25-account-create-update-fhpch" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.649158 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss8pd\" (UniqueName: \"kubernetes.io/projected/1e56604b-5a70-4403-9e9e-4842d685fadd-kube-api-access-ss8pd\") pod \"cinder-ee25-account-create-update-fhpch\" (UID: \"1e56604b-5a70-4403-9e9e-4842d685fadd\") " pod="openstack/cinder-ee25-account-create-update-fhpch" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.649220 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/688e13f9-5653-41da-ba2a-541ffaa8cec9-operator-scripts\") pod \"barbican-db-create-5nhlb\" (UID: \"688e13f9-5653-41da-ba2a-541ffaa8cec9\") " pod="openstack/barbican-db-create-5nhlb" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.649263 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj4mp\" (UniqueName: \"kubernetes.io/projected/688e13f9-5653-41da-ba2a-541ffaa8cec9-kube-api-access-bj4mp\") pod \"barbican-db-create-5nhlb\" (UID: 
\"688e13f9-5653-41da-ba2a-541ffaa8cec9\") " pod="openstack/barbican-db-create-5nhlb" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.698929 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-ckkb7"] Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.700575 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ckkb7" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.707861 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f2zr9" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.720788 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-d684-account-create-update-ng64c"] Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.722216 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d684-account-create-update-ng64c" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.723929 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.731892 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ckkb7"] Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.751052 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj4mp\" (UniqueName: \"kubernetes.io/projected/688e13f9-5653-41da-ba2a-541ffaa8cec9-kube-api-access-bj4mp\") pod \"barbican-db-create-5nhlb\" (UID: \"688e13f9-5653-41da-ba2a-541ffaa8cec9\") " pod="openstack/barbican-db-create-5nhlb" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.751151 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e56604b-5a70-4403-9e9e-4842d685fadd-operator-scripts\") pod \"cinder-ee25-account-create-update-fhpch\" (UID: \"1e56604b-5a70-4403-9e9e-4842d685fadd\") " pod="openstack/cinder-ee25-account-create-update-fhpch" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.751198 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss8pd\" (UniqueName: \"kubernetes.io/projected/1e56604b-5a70-4403-9e9e-4842d685fadd-kube-api-access-ss8pd\") pod \"cinder-ee25-account-create-update-fhpch\" (UID: \"1e56604b-5a70-4403-9e9e-4842d685fadd\") " pod="openstack/cinder-ee25-account-create-update-fhpch" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.751251 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3459aa58-67f7-4d0c-a3ae-3a53bf5404b3-operator-scripts\") pod \"neutron-db-create-ckkb7\" (UID: \"3459aa58-67f7-4d0c-a3ae-3a53bf5404b3\") " pod="openstack/neutron-db-create-ckkb7" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.751295 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2srgf\" (UniqueName: \"kubernetes.io/projected/3459aa58-67f7-4d0c-a3ae-3a53bf5404b3-kube-api-access-2srgf\") pod \"neutron-db-create-ckkb7\" (UID: \"3459aa58-67f7-4d0c-a3ae-3a53bf5404b3\") " pod="openstack/neutron-db-create-ckkb7" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.751318 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/688e13f9-5653-41da-ba2a-541ffaa8cec9-operator-scripts\") pod \"barbican-db-create-5nhlb\" (UID: \"688e13f9-5653-41da-ba2a-541ffaa8cec9\") " pod="openstack/barbican-db-create-5nhlb" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.751964 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/688e13f9-5653-41da-ba2a-541ffaa8cec9-operator-scripts\") pod \"barbican-db-create-5nhlb\" (UID: \"688e13f9-5653-41da-ba2a-541ffaa8cec9\") " pod="openstack/barbican-db-create-5nhlb" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.752487 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e56604b-5a70-4403-9e9e-4842d685fadd-operator-scripts\") pod \"cinder-ee25-account-create-update-fhpch\" (UID: \"1e56604b-5a70-4403-9e9e-4842d685fadd\") " pod="openstack/cinder-ee25-account-create-update-fhpch" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.762015 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-bzgn7"] Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.763507 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bzgn7" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.766599 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bwwj4" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.766789 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.766923 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.767054 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.777733 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bzgn7"] Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.802115 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj4mp\" (UniqueName: \"kubernetes.io/projected/688e13f9-5653-41da-ba2a-541ffaa8cec9-kube-api-access-bj4mp\") pod \"barbican-db-create-5nhlb\" (UID: \"688e13f9-5653-41da-ba2a-541ffaa8cec9\") " pod="openstack/barbican-db-create-5nhlb" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.802466 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss8pd\" (UniqueName: \"kubernetes.io/projected/1e56604b-5a70-4403-9e9e-4842d685fadd-kube-api-access-ss8pd\") pod \"cinder-ee25-account-create-update-fhpch\" (UID: \"1e56604b-5a70-4403-9e9e-4842d685fadd\") " pod="openstack/cinder-ee25-account-create-update-fhpch" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.812748 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d684-account-create-update-ng64c"] Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.823754 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ee25-account-create-update-fhpch" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.853265 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3773ecb5-8e59-462c-9fc6-323c779b2544-config-data\") pod \"keystone-db-sync-bzgn7\" (UID: \"3773ecb5-8e59-462c-9fc6-323c779b2544\") " pod="openstack/keystone-db-sync-bzgn7" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.853332 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08cf1f31-7393-4381-9c13-723fe4732c95-operator-scripts\") pod \"barbican-d684-account-create-update-ng64c\" (UID: \"08cf1f31-7393-4381-9c13-723fe4732c95\") " pod="openstack/barbican-d684-account-create-update-ng64c" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.853372 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5pz7\" (UniqueName: \"kubernetes.io/projected/08cf1f31-7393-4381-9c13-723fe4732c95-kube-api-access-q5pz7\") pod \"barbican-d684-account-create-update-ng64c\" (UID: \"08cf1f31-7393-4381-9c13-723fe4732c95\") " pod="openstack/barbican-d684-account-create-update-ng64c" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.853438 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3459aa58-67f7-4d0c-a3ae-3a53bf5404b3-operator-scripts\") pod \"neutron-db-create-ckkb7\" (UID: \"3459aa58-67f7-4d0c-a3ae-3a53bf5404b3\") " pod="openstack/neutron-db-create-ckkb7" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.853464 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2srgf\" (UniqueName: \"kubernetes.io/projected/3459aa58-67f7-4d0c-a3ae-3a53bf5404b3-kube-api-access-2srgf\") pod \"neutron-db-create-ckkb7\" (UID: \"3459aa58-67f7-4d0c-a3ae-3a53bf5404b3\") " pod="openstack/neutron-db-create-ckkb7" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.853490 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3773ecb5-8e59-462c-9fc6-323c779b2544-combined-ca-bundle\") pod \"keystone-db-sync-bzgn7\" (UID: \"3773ecb5-8e59-462c-9fc6-323c779b2544\") " pod="openstack/keystone-db-sync-bzgn7" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.853511 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf4n7\" (UniqueName: \"kubernetes.io/projected/3773ecb5-8e59-462c-9fc6-323c779b2544-kube-api-access-sf4n7\") pod \"keystone-db-sync-bzgn7\" (UID: \"3773ecb5-8e59-462c-9fc6-323c779b2544\") " pod="openstack/keystone-db-sync-bzgn7" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.853640 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-5nhlb" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.854212 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3459aa58-67f7-4d0c-a3ae-3a53bf5404b3-operator-scripts\") pod \"neutron-db-create-ckkb7\" (UID: \"3459aa58-67f7-4d0c-a3ae-3a53bf5404b3\") " pod="openstack/neutron-db-create-ckkb7" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.873577 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2srgf\" (UniqueName: \"kubernetes.io/projected/3459aa58-67f7-4d0c-a3ae-3a53bf5404b3-kube-api-access-2srgf\") pod \"neutron-db-create-ckkb7\" (UID: \"3459aa58-67f7-4d0c-a3ae-3a53bf5404b3\") " pod="openstack/neutron-db-create-ckkb7" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.954903 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3773ecb5-8e59-462c-9fc6-323c779b2544-combined-ca-bundle\") pod \"keystone-db-sync-bzgn7\" (UID: \"3773ecb5-8e59-462c-9fc6-323c779b2544\") " pod="openstack/keystone-db-sync-bzgn7" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.954966 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf4n7\" (UniqueName: \"kubernetes.io/projected/3773ecb5-8e59-462c-9fc6-323c779b2544-kube-api-access-sf4n7\") pod \"keystone-db-sync-bzgn7\" (UID: \"3773ecb5-8e59-462c-9fc6-323c779b2544\") " pod="openstack/keystone-db-sync-bzgn7" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.955076 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3773ecb5-8e59-462c-9fc6-323c779b2544-config-data\") pod \"keystone-db-sync-bzgn7\" (UID: \"3773ecb5-8e59-462c-9fc6-323c779b2544\") " pod="openstack/keystone-db-sync-bzgn7" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.955122 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08cf1f31-7393-4381-9c13-723fe4732c95-operator-scripts\") pod \"barbican-d684-account-create-update-ng64c\" (UID: \"08cf1f31-7393-4381-9c13-723fe4732c95\") " pod="openstack/barbican-d684-account-create-update-ng64c" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.955169 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5pz7\" (UniqueName: \"kubernetes.io/projected/08cf1f31-7393-4381-9c13-723fe4732c95-kube-api-access-q5pz7\") pod \"barbican-d684-account-create-update-ng64c\" (UID: \"08cf1f31-7393-4381-9c13-723fe4732c95\") " pod="openstack/barbican-d684-account-create-update-ng64c" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.956893 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08cf1f31-7393-4381-9c13-723fe4732c95-operator-scripts\") pod \"barbican-d684-account-create-update-ng64c\" (UID: \"08cf1f31-7393-4381-9c13-723fe4732c95\") " pod="openstack/barbican-d684-account-create-update-ng64c" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.970679 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3773ecb5-8e59-462c-9fc6-323c779b2544-config-data\") pod \"keystone-db-sync-bzgn7\" (UID: \"3773ecb5-8e59-462c-9fc6-323c779b2544\") " 
pod="openstack/keystone-db-sync-bzgn7" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.976457 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3773ecb5-8e59-462c-9fc6-323c779b2544-combined-ca-bundle\") pod \"keystone-db-sync-bzgn7\" (UID: \"3773ecb5-8e59-462c-9fc6-323c779b2544\") " pod="openstack/keystone-db-sync-bzgn7" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.977668 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf4n7\" (UniqueName: \"kubernetes.io/projected/3773ecb5-8e59-462c-9fc6-323c779b2544-kube-api-access-sf4n7\") pod \"keystone-db-sync-bzgn7\" (UID: \"3773ecb5-8e59-462c-9fc6-323c779b2544\") " pod="openstack/keystone-db-sync-bzgn7" Jan 08 23:37:25 crc kubenswrapper[4945]: I0108 23:37:25.981063 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5pz7\" (UniqueName: \"kubernetes.io/projected/08cf1f31-7393-4381-9c13-723fe4732c95-kube-api-access-q5pz7\") pod \"barbican-d684-account-create-update-ng64c\" (UID: \"08cf1f31-7393-4381-9c13-723fe4732c95\") " pod="openstack/barbican-d684-account-create-update-ng64c" Jan 08 23:37:26 crc kubenswrapper[4945]: I0108 23:37:26.011763 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-84e9-account-create-update-4ktlt"] Jan 08 23:37:26 crc kubenswrapper[4945]: I0108 23:37:26.012681 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84e9-account-create-update-4ktlt" Jan 08 23:37:26 crc kubenswrapper[4945]: I0108 23:37:26.015317 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 08 23:37:26 crc kubenswrapper[4945]: I0108 23:37:26.035406 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84e9-account-create-update-4ktlt"] Jan 08 23:37:26 crc kubenswrapper[4945]: I0108 23:37:26.056381 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st4n6\" (UniqueName: \"kubernetes.io/projected/fc565fd6-de46-476f-9dc4-8e53aad38fdd-kube-api-access-st4n6\") pod \"neutron-84e9-account-create-update-4ktlt\" (UID: \"fc565fd6-de46-476f-9dc4-8e53aad38fdd\") " pod="openstack/neutron-84e9-account-create-update-4ktlt" Jan 08 23:37:26 crc kubenswrapper[4945]: I0108 23:37:26.056485 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc565fd6-de46-476f-9dc4-8e53aad38fdd-operator-scripts\") pod \"neutron-84e9-account-create-update-4ktlt\" (UID: \"fc565fd6-de46-476f-9dc4-8e53aad38fdd\") " pod="openstack/neutron-84e9-account-create-update-4ktlt" Jan 08 23:37:26 crc kubenswrapper[4945]: I0108 23:37:26.056543 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ckkb7" Jan 08 23:37:26 crc kubenswrapper[4945]: I0108 23:37:26.065105 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d684-account-create-update-ng64c" Jan 08 23:37:26 crc kubenswrapper[4945]: I0108 23:37:26.097502 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-bzgn7" Jan 08 23:37:26 crc kubenswrapper[4945]: I0108 23:37:26.158597 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st4n6\" (UniqueName: \"kubernetes.io/projected/fc565fd6-de46-476f-9dc4-8e53aad38fdd-kube-api-access-st4n6\") pod \"neutron-84e9-account-create-update-4ktlt\" (UID: \"fc565fd6-de46-476f-9dc4-8e53aad38fdd\") " pod="openstack/neutron-84e9-account-create-update-4ktlt" Jan 08 23:37:26 crc kubenswrapper[4945]: I0108 23:37:26.158675 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc565fd6-de46-476f-9dc4-8e53aad38fdd-operator-scripts\") pod \"neutron-84e9-account-create-update-4ktlt\" (UID: \"fc565fd6-de46-476f-9dc4-8e53aad38fdd\") " pod="openstack/neutron-84e9-account-create-update-4ktlt" Jan 08 23:37:26 crc kubenswrapper[4945]: I0108 23:37:26.159574 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc565fd6-de46-476f-9dc4-8e53aad38fdd-operator-scripts\") pod \"neutron-84e9-account-create-update-4ktlt\" (UID: \"fc565fd6-de46-476f-9dc4-8e53aad38fdd\") " pod="openstack/neutron-84e9-account-create-update-4ktlt" Jan 08 23:37:26 crc kubenswrapper[4945]: I0108 23:37:26.194460 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st4n6\" (UniqueName: \"kubernetes.io/projected/fc565fd6-de46-476f-9dc4-8e53aad38fdd-kube-api-access-st4n6\") pod \"neutron-84e9-account-create-update-4ktlt\" (UID: \"fc565fd6-de46-476f-9dc4-8e53aad38fdd\") " pod="openstack/neutron-84e9-account-create-update-4ktlt" Jan 08 23:37:26 crc kubenswrapper[4945]: I0108 23:37:26.337710 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84e9-account-create-update-4ktlt" Jan 08 23:37:27 crc kubenswrapper[4945]: I0108 23:37:27.917094 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.000983 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1155ea44-2cab-445e-a621-fbd85a2b31a9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "1155ea44-2cab-445e-a621-fbd85a2b31a9" (UID: "1155ea44-2cab-445e-a621-fbd85a2b31a9"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:27.998352 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1155ea44-2cab-445e-a621-fbd85a2b31a9-ring-data-devices\") pod \"1155ea44-2cab-445e-a621-fbd85a2b31a9\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.001144 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1155ea44-2cab-445e-a621-fbd85a2b31a9-etc-swift\") pod \"1155ea44-2cab-445e-a621-fbd85a2b31a9\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.001338 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1155ea44-2cab-445e-a621-fbd85a2b31a9-scripts\") pod \"1155ea44-2cab-445e-a621-fbd85a2b31a9\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.001916 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-swiftconf\") pod \"1155ea44-2cab-445e-a621-fbd85a2b31a9\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.001960 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-combined-ca-bundle\") pod \"1155ea44-2cab-445e-a621-fbd85a2b31a9\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.001983 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k6pv\" (UniqueName: \"kubernetes.io/projected/1155ea44-2cab-445e-a621-fbd85a2b31a9-kube-api-access-2k6pv\") pod \"1155ea44-2cab-445e-a621-fbd85a2b31a9\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.002066 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-dispersionconf\") pod \"1155ea44-2cab-445e-a621-fbd85a2b31a9\" (UID: \"1155ea44-2cab-445e-a621-fbd85a2b31a9\") " Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.002851 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1155ea44-2cab-445e-a621-fbd85a2b31a9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1155ea44-2cab-445e-a621-fbd85a2b31a9" (UID: "1155ea44-2cab-445e-a621-fbd85a2b31a9"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.004895 4945 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1155ea44-2cab-445e-a621-fbd85a2b31a9-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.004921 4945 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1155ea44-2cab-445e-a621-fbd85a2b31a9-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.008973 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1155ea44-2cab-445e-a621-fbd85a2b31a9-kube-api-access-2k6pv" (OuterVolumeSpecName: "kube-api-access-2k6pv") pod "1155ea44-2cab-445e-a621-fbd85a2b31a9" (UID: "1155ea44-2cab-445e-a621-fbd85a2b31a9"). InnerVolumeSpecName "kube-api-access-2k6pv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.031434 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "1155ea44-2cab-445e-a621-fbd85a2b31a9" (UID: "1155ea44-2cab-445e-a621-fbd85a2b31a9"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.102388 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1155ea44-2cab-445e-a621-fbd85a2b31a9" (UID: "1155ea44-2cab-445e-a621-fbd85a2b31a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.108140 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.108364 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k6pv\" (UniqueName: \"kubernetes.io/projected/1155ea44-2cab-445e-a621-fbd85a2b31a9-kube-api-access-2k6pv\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.108466 4945 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.122247 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1155ea44-2cab-445e-a621-fbd85a2b31a9-scripts" (OuterVolumeSpecName: "scripts") pod "1155ea44-2cab-445e-a621-fbd85a2b31a9" (UID: "1155ea44-2cab-445e-a621-fbd85a2b31a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.146735 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "1155ea44-2cab-445e-a621-fbd85a2b31a9" (UID: "1155ea44-2cab-445e-a621-fbd85a2b31a9"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.216028 4945 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1155ea44-2cab-445e-a621-fbd85a2b31a9-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.216071 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1155ea44-2cab-445e-a621-fbd85a2b31a9-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.281998 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nqm45" event={"ID":"1155ea44-2cab-445e-a621-fbd85a2b31a9","Type":"ContainerDied","Data":"0351019ea0765d2c0adb3da28e8ecad2e1443464e013c9e769a45ec4a95d28ef"} Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.282057 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nqm45" Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.282074 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0351019ea0765d2c0adb3da28e8ecad2e1443464e013c9e769a45ec4a95d28ef" Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.305372 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fr87r-config-rjz97" event={"ID":"a22b7e6a-80f3-478b-9655-450397f76bd7","Type":"ContainerStarted","Data":"b514d850d6745386d7be079c491de61196a919abe4bdeb33eea8e6b7d331d9fd"} Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.555593 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d684-account-create-update-ng64c"] Jan 08 23:37:28 crc kubenswrapper[4945]: W0108 23:37:28.583565 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc565fd6_de46_476f_9dc4_8e53aad38fdd.slice/crio-d58d39fc6c5d4d2df0471d1ead043ffcaaae94399ed7b11b3b0490bf7db5ef8c WatchSource:0}: Error finding container d58d39fc6c5d4d2df0471d1ead043ffcaaae94399ed7b11b3b0490bf7db5ef8c: Status 404 returned error can't find the container with id d58d39fc6c5d4d2df0471d1ead043ffcaaae94399ed7b11b3b0490bf7db5ef8c Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.596963 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84e9-account-create-update-4ktlt"] Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.669558 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee25-account-create-update-fhpch"] Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.682875 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-f2zr9"] Jan 08 23:37:28 crc kubenswrapper[4945]: W0108 23:37:28.689711 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69fe9da9_1222_42c9_aefc_b051e72f81f7.slice/crio-33cc11e40494a5958711b0b6d0736580b31bcca31954878aeee20a1fb4540e56 WatchSource:0}: Error finding container 33cc11e40494a5958711b0b6d0736580b31bcca31954878aeee20a1fb4540e56: Status 404 returned error can't find the container with id 33cc11e40494a5958711b0b6d0736580b31bcca31954878aeee20a1fb4540e56 Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.698825 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2hqz2"] Jan 08 23:37:28 crc 
kubenswrapper[4945]: I0108 23:37:28.886376 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ckkb7"] Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.902807 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bzgn7"] Jan 08 23:37:28 crc kubenswrapper[4945]: W0108 23:37:28.904197 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3459aa58_67f7_4d0c_a3ae_3a53bf5404b3.slice/crio-0b58f9274111a1969d4f05c8a3a803e20232398b7081c96cd864c0c4eea3930c WatchSource:0}: Error finding container 0b58f9274111a1969d4f05c8a3a803e20232398b7081c96cd864c0c4eea3930c: Status 404 returned error can't find the container with id 0b58f9274111a1969d4f05c8a3a803e20232398b7081c96cd864c0c4eea3930c Jan 08 23:37:28 crc kubenswrapper[4945]: W0108 23:37:28.907091 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3773ecb5_8e59_462c_9fc6_323c779b2544.slice/crio-f6323b8d6358da26eeb09807c41c4644930da76f7c47dbd7e0dfb0cc96fd0968 WatchSource:0}: Error finding container f6323b8d6358da26eeb09807c41c4644930da76f7c47dbd7e0dfb0cc96fd0968: Status 404 returned error can't find the container with id f6323b8d6358da26eeb09807c41c4644930da76f7c47dbd7e0dfb0cc96fd0968 Jan 08 23:37:28 crc kubenswrapper[4945]: I0108 23:37:28.912792 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5nhlb"] Jan 08 23:37:28 crc kubenswrapper[4945]: W0108 23:37:28.918280 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod688e13f9_5653_41da_ba2a_541ffaa8cec9.slice/crio-cdaf717b4200f998a233407f34da75ff3619bf0d98e1c25effd116f61170e17a WatchSource:0}: Error finding container cdaf717b4200f998a233407f34da75ff3619bf0d98e1c25effd116f61170e17a: Status 404 returned error can't find the container with id cdaf717b4200f998a233407f34da75ff3619bf0d98e1c25effd116f61170e17a Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.316117 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bzgn7" event={"ID":"3773ecb5-8e59-462c-9fc6-323c779b2544","Type":"ContainerStarted","Data":"f6323b8d6358da26eeb09807c41c4644930da76f7c47dbd7e0dfb0cc96fd0968"} Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.317703 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-94g76" event={"ID":"91d2a120-b7c1-44e5-a3c0-6720acab34a7","Type":"ContainerStarted","Data":"7ba4688fc263c1aed714211d1515d8de9f81d9d620cdc2a0a4a94d12379599b0"} Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.321497 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee25-account-create-update-fhpch" event={"ID":"1e56604b-5a70-4403-9e9e-4842d685fadd","Type":"ContainerStarted","Data":"d2e978ab712082f3949094db7841350523945e36f34a236aa78d8eccecab2130"} Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.321533 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee25-account-create-update-fhpch" event={"ID":"1e56604b-5a70-4403-9e9e-4842d685fadd","Type":"ContainerStarted","Data":"aa3f53ba4d54ab037061558f849876317db37aca0e27feb3171f1cf605c7e096"} Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.327035 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2hqz2" 
event={"ID":"69fe9da9-1222-42c9-aefc-b051e72f81f7","Type":"ContainerStarted","Data":"41f18d58019ccf0e8de3811d8fb7972fe95dfcd60b5d75a34716b91d48b13c31"} Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.327071 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2hqz2" event={"ID":"69fe9da9-1222-42c9-aefc-b051e72f81f7","Type":"ContainerStarted","Data":"33cc11e40494a5958711b0b6d0736580b31bcca31954878aeee20a1fb4540e56"} Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.329393 4945 generic.go:334] "Generic (PLEG): container finished" podID="fc565fd6-de46-476f-9dc4-8e53aad38fdd" containerID="d5dfa1736e28bd5c1e8e475e83ae1a58305fb151056fd8aa88095ae069f33e9f" exitCode=0 Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.329491 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84e9-account-create-update-4ktlt" event={"ID":"fc565fd6-de46-476f-9dc4-8e53aad38fdd","Type":"ContainerDied","Data":"d5dfa1736e28bd5c1e8e475e83ae1a58305fb151056fd8aa88095ae069f33e9f"} Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.329537 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84e9-account-create-update-4ktlt" event={"ID":"fc565fd6-de46-476f-9dc4-8e53aad38fdd","Type":"ContainerStarted","Data":"d58d39fc6c5d4d2df0471d1ead043ffcaaae94399ed7b11b3b0490bf7db5ef8c"} Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.332197 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5nhlb" event={"ID":"688e13f9-5653-41da-ba2a-541ffaa8cec9","Type":"ContainerStarted","Data":"c31deb01de0248bacf86cd4619d48921e63f82d7fd0acb625f30b485b1583b9c"} Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.332229 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5nhlb" event={"ID":"688e13f9-5653-41da-ba2a-541ffaa8cec9","Type":"ContainerStarted","Data":"cdaf717b4200f998a233407f34da75ff3619bf0d98e1c25effd116f61170e17a"} Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.334533 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-94g76" podStartSLOduration=2.247233446 podStartE2EDuration="17.33451424s" podCreationTimestamp="2026-01-08 23:37:12 +0000 UTC" firstStartedPulling="2026-01-08 23:37:12.988753606 +0000 UTC m=+1303.299912552" lastFinishedPulling="2026-01-08 23:37:28.0760344 +0000 UTC m=+1318.387193346" observedRunningTime="2026-01-08 23:37:29.333798382 +0000 UTC m=+1319.644957318" watchObservedRunningTime="2026-01-08 23:37:29.33451424 +0000 UTC m=+1319.645673186" Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.337940 4945 generic.go:334] "Generic (PLEG): container finished" podID="a22b7e6a-80f3-478b-9655-450397f76bd7" containerID="d9bcffeedebb5f2415e6032ae74495e88aa2cce79304fc735c99dd55983ce56a" exitCode=0 Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.338075 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fr87r-config-rjz97" event={"ID":"a22b7e6a-80f3-478b-9655-450397f76bd7","Type":"ContainerDied","Data":"d9bcffeedebb5f2415e6032ae74495e88aa2cce79304fc735c99dd55983ce56a"} Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.341319 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ckkb7" event={"ID":"3459aa58-67f7-4d0c-a3ae-3a53bf5404b3","Type":"ContainerStarted","Data":"78dedf13b581dfc3615764faf2c2b69436f59e182f5e1737cf0c1f55ca61fdce"} Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 
23:37:29.341366 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ckkb7" event={"ID":"3459aa58-67f7-4d0c-a3ae-3a53bf5404b3","Type":"ContainerStarted","Data":"0b58f9274111a1969d4f05c8a3a803e20232398b7081c96cd864c0c4eea3930c"} Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.344494 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f2zr9" event={"ID":"008a988d-6834-4bbe-b9ec-333cfe1c534c","Type":"ContainerStarted","Data":"78d6d25e434b638f130918731f0903e985aeff175b1cf38b27837112f11da21d"} Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.344526 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f2zr9" event={"ID":"008a988d-6834-4bbe-b9ec-333cfe1c534c","Type":"ContainerStarted","Data":"b50e9b5eff89cce6b3b256b13c4a4d100c5d00144d5b5a2c023abd4b43d7e5ce"} Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.350326 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d684-account-create-update-ng64c" event={"ID":"08cf1f31-7393-4381-9c13-723fe4732c95","Type":"ContainerStarted","Data":"2aeb79cff0dbb2629fd63ffcddab5f0c962f21df339af5b74c3440576cf61f80"} Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.350379 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d684-account-create-update-ng64c" event={"ID":"08cf1f31-7393-4381-9c13-723fe4732c95","Type":"ContainerStarted","Data":"4624332500f88a5774c6c87bc1dc98fda73244646f31f25b843fcb12f9e526fb"} Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.416226 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-5nhlb" podStartSLOduration=4.416206548 podStartE2EDuration="4.416206548s" podCreationTimestamp="2026-01-08 23:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:37:29.388972265 +0000 UTC m=+1319.700131211" watchObservedRunningTime="2026-01-08 23:37:29.416206548 +0000 UTC m=+1319.727365494" Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.430851 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-2hqz2" podStartSLOduration=9.430833859 podStartE2EDuration="9.430833859s" podCreationTimestamp="2026-01-08 23:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:37:29.4254313 +0000 UTC m=+1319.736590236" watchObservedRunningTime="2026-01-08 23:37:29.430833859 +0000 UTC m=+1319.741992805" Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.433308 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ee25-account-create-update-fhpch" podStartSLOduration=4.433299508 podStartE2EDuration="4.433299508s" podCreationTimestamp="2026-01-08 23:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:37:29.411634239 +0000 UTC m=+1319.722793185" watchObservedRunningTime="2026-01-08 23:37:29.433299508 +0000 UTC m=+1319.744458454" Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.474438 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-ckkb7" podStartSLOduration=4.474398574 podStartE2EDuration="4.474398574s" podCreationTimestamp="2026-01-08 23:37:25 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:37:29.471639368 +0000 UTC m=+1319.782798314" watchObservedRunningTime="2026-01-08 23:37:29.474398574 +0000 UTC m=+1319.785557510" Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.493845 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-d684-account-create-update-ng64c" podStartSLOduration=4.49381837 podStartE2EDuration="4.49381837s" podCreationTimestamp="2026-01-08 23:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:37:29.485601603 +0000 UTC m=+1319.796760549" watchObservedRunningTime="2026-01-08 23:37:29.49381837 +0000 UTC m=+1319.804977316" Jan 08 23:37:29 crc kubenswrapper[4945]: I0108 23:37:29.507973 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-f2zr9" podStartSLOduration=4.507943079 podStartE2EDuration="4.507943079s" podCreationTimestamp="2026-01-08 23:37:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:37:29.504621489 +0000 UTC m=+1319.815780435" watchObservedRunningTime="2026-01-08 23:37:29.507943079 +0000 UTC m=+1319.819102025" Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.362720 4945 generic.go:334] "Generic (PLEG): container finished" podID="69fe9da9-1222-42c9-aefc-b051e72f81f7" containerID="41f18d58019ccf0e8de3811d8fb7972fe95dfcd60b5d75a34716b91d48b13c31" exitCode=0 Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.362907 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2hqz2" event={"ID":"69fe9da9-1222-42c9-aefc-b051e72f81f7","Type":"ContainerDied","Data":"41f18d58019ccf0e8de3811d8fb7972fe95dfcd60b5d75a34716b91d48b13c31"} Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.369727 4945 generic.go:334] "Generic (PLEG): container finished" podID="688e13f9-5653-41da-ba2a-541ffaa8cec9" containerID="c31deb01de0248bacf86cd4619d48921e63f82d7fd0acb625f30b485b1583b9c" exitCode=0 Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.369784 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5nhlb" event={"ID":"688e13f9-5653-41da-ba2a-541ffaa8cec9","Type":"ContainerDied","Data":"c31deb01de0248bacf86cd4619d48921e63f82d7fd0acb625f30b485b1583b9c"} Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.372644 4945 generic.go:334] "Generic (PLEG): container finished" podID="3459aa58-67f7-4d0c-a3ae-3a53bf5404b3" containerID="78dedf13b581dfc3615764faf2c2b69436f59e182f5e1737cf0c1f55ca61fdce" exitCode=0 Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.372674 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ckkb7" event={"ID":"3459aa58-67f7-4d0c-a3ae-3a53bf5404b3","Type":"ContainerDied","Data":"78dedf13b581dfc3615764faf2c2b69436f59e182f5e1737cf0c1f55ca61fdce"} Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.375806 4945 generic.go:334] "Generic (PLEG): container finished" podID="008a988d-6834-4bbe-b9ec-333cfe1c534c" containerID="78d6d25e434b638f130918731f0903e985aeff175b1cf38b27837112f11da21d" exitCode=0 Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.375867 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f2zr9" 
event={"ID":"008a988d-6834-4bbe-b9ec-333cfe1c534c","Type":"ContainerDied","Data":"78d6d25e434b638f130918731f0903e985aeff175b1cf38b27837112f11da21d"} Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.383429 4945 generic.go:334] "Generic (PLEG): container finished" podID="08cf1f31-7393-4381-9c13-723fe4732c95" containerID="2aeb79cff0dbb2629fd63ffcddab5f0c962f21df339af5b74c3440576cf61f80" exitCode=0 Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.383629 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d684-account-create-update-ng64c" event={"ID":"08cf1f31-7393-4381-9c13-723fe4732c95","Type":"ContainerDied","Data":"2aeb79cff0dbb2629fd63ffcddab5f0c962f21df339af5b74c3440576cf61f80"} Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.386060 4945 generic.go:334] "Generic (PLEG): container finished" podID="1e56604b-5a70-4403-9e9e-4842d685fadd" containerID="d2e978ab712082f3949094db7841350523945e36f34a236aa78d8eccecab2130" exitCode=0 Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.386388 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee25-account-create-update-fhpch" event={"ID":"1e56604b-5a70-4403-9e9e-4842d685fadd","Type":"ContainerDied","Data":"d2e978ab712082f3949094db7841350523945e36f34a236aa78d8eccecab2130"} Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.796706 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.873115 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84e9-account-create-update-4ktlt" Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.891822 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cnnm\" (UniqueName: \"kubernetes.io/projected/a22b7e6a-80f3-478b-9655-450397f76bd7-kube-api-access-9cnnm\") pod \"a22b7e6a-80f3-478b-9655-450397f76bd7\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.892871 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-run-ovn\") pod \"a22b7e6a-80f3-478b-9655-450397f76bd7\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.892924 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-run\") pod \"a22b7e6a-80f3-478b-9655-450397f76bd7\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.892977 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a22b7e6a-80f3-478b-9655-450397f76bd7-additional-scripts\") pod \"a22b7e6a-80f3-478b-9655-450397f76bd7\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.893035 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-log-ovn\") pod \"a22b7e6a-80f3-478b-9655-450397f76bd7\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.893089 4945 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a22b7e6a-80f3-478b-9655-450397f76bd7-scripts\") pod \"a22b7e6a-80f3-478b-9655-450397f76bd7\" (UID: \"a22b7e6a-80f3-478b-9655-450397f76bd7\") " Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.893074 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-run" (OuterVolumeSpecName: "var-run") pod "a22b7e6a-80f3-478b-9655-450397f76bd7" (UID: "a22b7e6a-80f3-478b-9655-450397f76bd7"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.893111 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a22b7e6a-80f3-478b-9655-450397f76bd7" (UID: "a22b7e6a-80f3-478b-9655-450397f76bd7"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.893850 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a22b7e6a-80f3-478b-9655-450397f76bd7-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a22b7e6a-80f3-478b-9655-450397f76bd7" (UID: "a22b7e6a-80f3-478b-9655-450397f76bd7"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.894097 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a22b7e6a-80f3-478b-9655-450397f76bd7-scripts" (OuterVolumeSpecName: "scripts") pod "a22b7e6a-80f3-478b-9655-450397f76bd7" (UID: "a22b7e6a-80f3-478b-9655-450397f76bd7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.894150 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a22b7e6a-80f3-478b-9655-450397f76bd7" (UID: "a22b7e6a-80f3-478b-9655-450397f76bd7"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.894252 4945 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a22b7e6a-80f3-478b-9655-450397f76bd7-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.894270 4945 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.894282 4945 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-run\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.901387 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a22b7e6a-80f3-478b-9655-450397f76bd7-kube-api-access-9cnnm" (OuterVolumeSpecName: "kube-api-access-9cnnm") pod "a22b7e6a-80f3-478b-9655-450397f76bd7" (UID: "a22b7e6a-80f3-478b-9655-450397f76bd7"). InnerVolumeSpecName "kube-api-access-9cnnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.996475 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc565fd6-de46-476f-9dc4-8e53aad38fdd-operator-scripts\") pod \"fc565fd6-de46-476f-9dc4-8e53aad38fdd\" (UID: \"fc565fd6-de46-476f-9dc4-8e53aad38fdd\") " Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.996737 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st4n6\" (UniqueName: \"kubernetes.io/projected/fc565fd6-de46-476f-9dc4-8e53aad38fdd-kube-api-access-st4n6\") pod \"fc565fd6-de46-476f-9dc4-8e53aad38fdd\" (UID: \"fc565fd6-de46-476f-9dc4-8e53aad38fdd\") " Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.997478 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc565fd6-de46-476f-9dc4-8e53aad38fdd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc565fd6-de46-476f-9dc4-8e53aad38fdd" (UID: "fc565fd6-de46-476f-9dc4-8e53aad38fdd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.998663 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cnnm\" (UniqueName: \"kubernetes.io/projected/a22b7e6a-80f3-478b-9655-450397f76bd7-kube-api-access-9cnnm\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.998692 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc565fd6-de46-476f-9dc4-8e53aad38fdd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.998706 4945 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a22b7e6a-80f3-478b-9655-450397f76bd7-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:30 crc kubenswrapper[4945]: I0108 23:37:30.998720 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a22b7e6a-80f3-478b-9655-450397f76bd7-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:31 crc kubenswrapper[4945]: I0108 23:37:31.002128 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc565fd6-de46-476f-9dc4-8e53aad38fdd-kube-api-access-st4n6" (OuterVolumeSpecName: "kube-api-access-st4n6") pod "fc565fd6-de46-476f-9dc4-8e53aad38fdd" (UID: "fc565fd6-de46-476f-9dc4-8e53aad38fdd"). InnerVolumeSpecName "kube-api-access-st4n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:31 crc kubenswrapper[4945]: I0108 23:37:31.103247 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st4n6\" (UniqueName: \"kubernetes.io/projected/fc565fd6-de46-476f-9dc4-8e53aad38fdd-kube-api-access-st4n6\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:31 crc kubenswrapper[4945]: I0108 23:37:31.405949 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fr87r-config-rjz97" Jan 08 23:37:31 crc kubenswrapper[4945]: I0108 23:37:31.405792 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fr87r-config-rjz97" event={"ID":"a22b7e6a-80f3-478b-9655-450397f76bd7","Type":"ContainerDied","Data":"b514d850d6745386d7be079c491de61196a919abe4bdeb33eea8e6b7d331d9fd"} Jan 08 23:37:31 crc kubenswrapper[4945]: I0108 23:37:31.406530 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b514d850d6745386d7be079c491de61196a919abe4bdeb33eea8e6b7d331d9fd" Jan 08 23:37:31 crc kubenswrapper[4945]: I0108 23:37:31.410902 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84e9-account-create-update-4ktlt" Jan 08 23:37:31 crc kubenswrapper[4945]: I0108 23:37:31.410957 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84e9-account-create-update-4ktlt" event={"ID":"fc565fd6-de46-476f-9dc4-8e53aad38fdd","Type":"ContainerDied","Data":"d58d39fc6c5d4d2df0471d1ead043ffcaaae94399ed7b11b3b0490bf7db5ef8c"} Jan 08 23:37:31 crc kubenswrapper[4945]: I0108 23:37:31.411057 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d58d39fc6c5d4d2df0471d1ead043ffcaaae94399ed7b11b3b0490bf7db5ef8c" Jan 08 23:37:31 crc kubenswrapper[4945]: I0108 23:37:31.850962 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2hqz2" Jan 08 23:37:31 crc kubenswrapper[4945]: I0108 23:37:31.907838 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-fr87r-config-rjz97"] Jan 08 23:37:31 crc kubenswrapper[4945]: I0108 23:37:31.920196 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69fe9da9-1222-42c9-aefc-b051e72f81f7-operator-scripts\") pod \"69fe9da9-1222-42c9-aefc-b051e72f81f7\" (UID: \"69fe9da9-1222-42c9-aefc-b051e72f81f7\") " Jan 08 23:37:31 crc kubenswrapper[4945]: I0108 23:37:31.920296 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzbjb\" (UniqueName: \"kubernetes.io/projected/69fe9da9-1222-42c9-aefc-b051e72f81f7-kube-api-access-rzbjb\") pod \"69fe9da9-1222-42c9-aefc-b051e72f81f7\" (UID: \"69fe9da9-1222-42c9-aefc-b051e72f81f7\") " Jan 08 23:37:31 crc kubenswrapper[4945]: I0108 23:37:31.921646 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69fe9da9-1222-42c9-aefc-b051e72f81f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69fe9da9-1222-42c9-aefc-b051e72f81f7" (UID: "69fe9da9-1222-42c9-aefc-b051e72f81f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:31 crc kubenswrapper[4945]: I0108 23:37:31.927119 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-fr87r-config-rjz97"] Jan 08 23:37:31 crc kubenswrapper[4945]: I0108 23:37:31.944258 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69fe9da9-1222-42c9-aefc-b051e72f81f7-kube-api-access-rzbjb" (OuterVolumeSpecName: "kube-api-access-rzbjb") pod "69fe9da9-1222-42c9-aefc-b051e72f81f7" (UID: "69fe9da9-1222-42c9-aefc-b051e72f81f7"). InnerVolumeSpecName "kube-api-access-rzbjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.017743 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a22b7e6a-80f3-478b-9655-450397f76bd7" path="/var/lib/kubelet/pods/a22b7e6a-80f3-478b-9655-450397f76bd7/volumes" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.022448 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.022651 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzbjb\" (UniqueName: \"kubernetes.io/projected/69fe9da9-1222-42c9-aefc-b051e72f81f7-kube-api-access-rzbjb\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.022667 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69fe9da9-1222-42c9-aefc-b051e72f81f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.030986 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift\") pod \"swift-storage-0\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") " pod="openstack/swift-storage-0" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.051926 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.082197 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d684-account-create-update-ng64c" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.088528 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5nhlb" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.123845 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f2zr9" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.134701 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ckkb7" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.149069 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ee25-account-create-update-fhpch" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.226086 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/008a988d-6834-4bbe-b9ec-333cfe1c534c-operator-scripts\") pod \"008a988d-6834-4bbe-b9ec-333cfe1c534c\" (UID: \"008a988d-6834-4bbe-b9ec-333cfe1c534c\") " Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.226577 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/688e13f9-5653-41da-ba2a-541ffaa8cec9-operator-scripts\") pod \"688e13f9-5653-41da-ba2a-541ffaa8cec9\" (UID: \"688e13f9-5653-41da-ba2a-541ffaa8cec9\") " Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.226683 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb2nw\" (UniqueName: \"kubernetes.io/projected/008a988d-6834-4bbe-b9ec-333cfe1c534c-kube-api-access-qb2nw\") pod \"008a988d-6834-4bbe-b9ec-333cfe1c534c\" (UID: \"008a988d-6834-4bbe-b9ec-333cfe1c534c\") " Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.226719 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3459aa58-67f7-4d0c-a3ae-3a53bf5404b3-operator-scripts\") pod \"3459aa58-67f7-4d0c-a3ae-3a53bf5404b3\" (UID: \"3459aa58-67f7-4d0c-a3ae-3a53bf5404b3\") " Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.226777 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2srgf\" (UniqueName: \"kubernetes.io/projected/3459aa58-67f7-4d0c-a3ae-3a53bf5404b3-kube-api-access-2srgf\") pod \"3459aa58-67f7-4d0c-a3ae-3a53bf5404b3\" (UID: \"3459aa58-67f7-4d0c-a3ae-3a53bf5404b3\") " Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.226816 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj4mp\" (UniqueName: \"kubernetes.io/projected/688e13f9-5653-41da-ba2a-541ffaa8cec9-kube-api-access-bj4mp\") pod \"688e13f9-5653-41da-ba2a-541ffaa8cec9\" (UID: \"688e13f9-5653-41da-ba2a-541ffaa8cec9\") " Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.226839 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5pz7\" (UniqueName: \"kubernetes.io/projected/08cf1f31-7393-4381-9c13-723fe4732c95-kube-api-access-q5pz7\") pod \"08cf1f31-7393-4381-9c13-723fe4732c95\" (UID: \"08cf1f31-7393-4381-9c13-723fe4732c95\") " Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.226876 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e56604b-5a70-4403-9e9e-4842d685fadd-operator-scripts\") pod \"1e56604b-5a70-4403-9e9e-4842d685fadd\" (UID: \"1e56604b-5a70-4403-9e9e-4842d685fadd\") " Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.226894 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss8pd\" (UniqueName: \"kubernetes.io/projected/1e56604b-5a70-4403-9e9e-4842d685fadd-kube-api-access-ss8pd\") pod \"1e56604b-5a70-4403-9e9e-4842d685fadd\" (UID: \"1e56604b-5a70-4403-9e9e-4842d685fadd\") " Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.226911 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/08cf1f31-7393-4381-9c13-723fe4732c95-operator-scripts\") pod \"08cf1f31-7393-4381-9c13-723fe4732c95\" (UID: \"08cf1f31-7393-4381-9c13-723fe4732c95\") " Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.234809 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08cf1f31-7393-4381-9c13-723fe4732c95-kube-api-access-q5pz7" (OuterVolumeSpecName: "kube-api-access-q5pz7") pod "08cf1f31-7393-4381-9c13-723fe4732c95" (UID: "08cf1f31-7393-4381-9c13-723fe4732c95"). InnerVolumeSpecName "kube-api-access-q5pz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.234936 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e56604b-5a70-4403-9e9e-4842d685fadd-kube-api-access-ss8pd" (OuterVolumeSpecName: "kube-api-access-ss8pd") pod "1e56604b-5a70-4403-9e9e-4842d685fadd" (UID: "1e56604b-5a70-4403-9e9e-4842d685fadd"). InnerVolumeSpecName "kube-api-access-ss8pd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.235135 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008a988d-6834-4bbe-b9ec-333cfe1c534c-kube-api-access-qb2nw" (OuterVolumeSpecName: "kube-api-access-qb2nw") pod "008a988d-6834-4bbe-b9ec-333cfe1c534c" (UID: "008a988d-6834-4bbe-b9ec-333cfe1c534c"). InnerVolumeSpecName "kube-api-access-qb2nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.234773 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3459aa58-67f7-4d0c-a3ae-3a53bf5404b3-kube-api-access-2srgf" (OuterVolumeSpecName: "kube-api-access-2srgf") pod "3459aa58-67f7-4d0c-a3ae-3a53bf5404b3" (UID: "3459aa58-67f7-4d0c-a3ae-3a53bf5404b3"). InnerVolumeSpecName "kube-api-access-2srgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.235474 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688e13f9-5653-41da-ba2a-541ffaa8cec9-kube-api-access-bj4mp" (OuterVolumeSpecName: "kube-api-access-bj4mp") pod "688e13f9-5653-41da-ba2a-541ffaa8cec9" (UID: "688e13f9-5653-41da-ba2a-541ffaa8cec9"). InnerVolumeSpecName "kube-api-access-bj4mp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.329082 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb2nw\" (UniqueName: \"kubernetes.io/projected/008a988d-6834-4bbe-b9ec-333cfe1c534c-kube-api-access-qb2nw\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.329123 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2srgf\" (UniqueName: \"kubernetes.io/projected/3459aa58-67f7-4d0c-a3ae-3a53bf5404b3-kube-api-access-2srgf\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.329140 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj4mp\" (UniqueName: \"kubernetes.io/projected/688e13f9-5653-41da-ba2a-541ffaa8cec9-kube-api-access-bj4mp\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.329152 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5pz7\" (UniqueName: \"kubernetes.io/projected/08cf1f31-7393-4381-9c13-723fe4732c95-kube-api-access-q5pz7\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.329166 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss8pd\" (UniqueName: \"kubernetes.io/projected/1e56604b-5a70-4403-9e9e-4842d685fadd-kube-api-access-ss8pd\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.423938 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2hqz2" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.439267 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5nhlb" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.462640 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ckkb7" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.492636 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f2zr9" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.519503 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d684-account-create-update-ng64c" Jan 08 23:37:32 crc kubenswrapper[4945]: I0108 23:37:32.566317 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee25-account-create-update-fhpch" Jan 08 23:37:32 crc kubenswrapper[4945]: W0108 23:37:32.872455 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12eb7cf8_4c67_4574_a65b_dc82c7285c68.slice/crio-1c0f9d06b9e2dfdbc40e0336032b284914d71290109416254f93987886aeed79 WatchSource:0}: Error finding container 1c0f9d06b9e2dfdbc40e0336032b284914d71290109416254f93987886aeed79: Status 404 returned error can't find the container with id 1c0f9d06b9e2dfdbc40e0336032b284914d71290109416254f93987886aeed79 Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.027550 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/008a988d-6834-4bbe-b9ec-333cfe1c534c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "008a988d-6834-4bbe-b9ec-333cfe1c534c" (UID: "008a988d-6834-4bbe-b9ec-333cfe1c534c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.060944 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/008a988d-6834-4bbe-b9ec-333cfe1c534c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.121873 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3459aa58-67f7-4d0c-a3ae-3a53bf5404b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3459aa58-67f7-4d0c-a3ae-3a53bf5404b3" (UID: "3459aa58-67f7-4d0c-a3ae-3a53bf5404b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.122022 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e56604b-5a70-4403-9e9e-4842d685fadd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1e56604b-5a70-4403-9e9e-4842d685fadd" (UID: "1e56604b-5a70-4403-9e9e-4842d685fadd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.122068 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08cf1f31-7393-4381-9c13-723fe4732c95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08cf1f31-7393-4381-9c13-723fe4732c95" (UID: "08cf1f31-7393-4381-9c13-723fe4732c95"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.122068 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/688e13f9-5653-41da-ba2a-541ffaa8cec9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "688e13f9-5653-41da-ba2a-541ffaa8cec9" (UID: "688e13f9-5653-41da-ba2a-541ffaa8cec9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.162443 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3459aa58-67f7-4d0c-a3ae-3a53bf5404b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.162482 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e56604b-5a70-4403-9e9e-4842d685fadd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.162494 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08cf1f31-7393-4381-9c13-723fe4732c95-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.162503 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/688e13f9-5653-41da-ba2a-541ffaa8cec9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:33 crc kubenswrapper[4945]: E0108 23:37:33.305577 4945 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.306s" Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.305640 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2hqz2" event={"ID":"69fe9da9-1222-42c9-aefc-b051e72f81f7","Type":"ContainerDied","Data":"33cc11e40494a5958711b0b6d0736580b31bcca31954878aeee20a1fb4540e56"} Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.305673 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33cc11e40494a5958711b0b6d0736580b31bcca31954878aeee20a1fb4540e56" Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.305684 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5nhlb" event={"ID":"688e13f9-5653-41da-ba2a-541ffaa8cec9","Type":"ContainerDied","Data":"cdaf717b4200f998a233407f34da75ff3619bf0d98e1c25effd116f61170e17a"} Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.305697 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdaf717b4200f998a233407f34da75ff3619bf0d98e1c25effd116f61170e17a" Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.305757 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ckkb7" event={"ID":"3459aa58-67f7-4d0c-a3ae-3a53bf5404b3","Type":"ContainerDied","Data":"0b58f9274111a1969d4f05c8a3a803e20232398b7081c96cd864c0c4eea3930c"} Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.305767 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b58f9274111a1969d4f05c8a3a803e20232398b7081c96cd864c0c4eea3930c" Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.305774 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f2zr9" event={"ID":"008a988d-6834-4bbe-b9ec-333cfe1c534c","Type":"ContainerDied","Data":"b50e9b5eff89cce6b3b256b13c4a4d100c5d00144d5b5a2c023abd4b43d7e5ce"} Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.305786 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b50e9b5eff89cce6b3b256b13c4a4d100c5d00144d5b5a2c023abd4b43d7e5ce" Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.305793 4945 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/barbican-d684-account-create-update-ng64c" event={"ID":"08cf1f31-7393-4381-9c13-723fe4732c95","Type":"ContainerDied","Data":"4624332500f88a5774c6c87bc1dc98fda73244646f31f25b843fcb12f9e526fb"} Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.305804 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4624332500f88a5774c6c87bc1dc98fda73244646f31f25b843fcb12f9e526fb" Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.305811 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee25-account-create-update-fhpch" event={"ID":"1e56604b-5a70-4403-9e9e-4842d685fadd","Type":"ContainerDied","Data":"aa3f53ba4d54ab037061558f849876317db37aca0e27feb3171f1cf605c7e096"} Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.306033 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa3f53ba4d54ab037061558f849876317db37aca0e27feb3171f1cf605c7e096" Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.306122 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 08 23:37:33 crc kubenswrapper[4945]: I0108 23:37:33.574462 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerStarted","Data":"1c0f9d06b9e2dfdbc40e0336032b284914d71290109416254f93987886aeed79"} Jan 08 23:37:39 crc kubenswrapper[4945]: I0108 23:37:39.650444 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bzgn7" event={"ID":"3773ecb5-8e59-462c-9fc6-323c779b2544","Type":"ContainerStarted","Data":"33a4af0b086fbfcabe69a8e7c7988dfc06b32af17d4f4a43c92a55eb98b2ab95"} Jan 08 23:37:39 crc kubenswrapper[4945]: I0108 23:37:39.654862 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerStarted","Data":"94d63f7570f82deb07c6532e9070f8e84e1960af5420e611a448105b6a81f23c"} Jan 08 23:37:40 crc kubenswrapper[4945]: I0108 23:37:40.052535 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-bzgn7" podStartSLOduration=5.547293224 podStartE2EDuration="15.052509638s" podCreationTimestamp="2026-01-08 23:37:25 +0000 UTC" firstStartedPulling="2026-01-08 23:37:28.910237891 +0000 UTC m=+1319.221396837" lastFinishedPulling="2026-01-08 23:37:38.415454305 +0000 UTC m=+1328.726613251" observedRunningTime="2026-01-08 23:37:39.702729957 +0000 UTC m=+1330.013888933" watchObservedRunningTime="2026-01-08 23:37:40.052509638 +0000 UTC m=+1330.363668584" Jan 08 23:37:40 crc kubenswrapper[4945]: I0108 23:37:40.670843 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerStarted","Data":"5c6715331a0d405b0d603ba4999b7b101becbf1593c09d544be436b391b2a9fa"} Jan 08 23:37:40 crc kubenswrapper[4945]: I0108 23:37:40.671422 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerStarted","Data":"27f56df8852defe9ab1399615875fbe1e46ea8941ff605f08e9f61879ff1c6b9"} Jan 08 23:37:41 crc kubenswrapper[4945]: I0108 23:37:41.706202 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerStarted","Data":"d8b368fe8df8fd31efcb913ab13e714d53a8cb5596d52d646a4d27b08fc38a4e"} Jan 08 23:37:42 crc kubenswrapper[4945]: I0108 23:37:42.724403 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerStarted","Data":"08d3886a5fc242d28f1801ee0870927673393d82a3aaceb6a629f6836c33bea3"} Jan 08 23:37:42 crc kubenswrapper[4945]: I0108 23:37:42.724958 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerStarted","Data":"0987f6c6262bc7794b82933b10e6814fc8b5c2349c5a58187b58990fed1c1067"} Jan 08 23:37:42 crc kubenswrapper[4945]: I0108 23:37:42.724981 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerStarted","Data":"2153a439bf6606bed6abd7d5895369e18116740cf720ca93faed1900066b52d6"} Jan 08 23:37:42 crc kubenswrapper[4945]: I0108 23:37:42.725038 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerStarted","Data":"d6483b681f8297d29a12085fb8f2557002353017201640c168545f0d40ae970c"} Jan 08 23:37:42 crc kubenswrapper[4945]: I0108 23:37:42.727583 4945 generic.go:334] "Generic (PLEG): container finished" podID="91d2a120-b7c1-44e5-a3c0-6720acab34a7" containerID="7ba4688fc263c1aed714211d1515d8de9f81d9d620cdc2a0a4a94d12379599b0" exitCode=0 Jan 08 23:37:42 crc kubenswrapper[4945]: I0108 23:37:42.727722 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-94g76" event={"ID":"91d2a120-b7c1-44e5-a3c0-6720acab34a7","Type":"ContainerDied","Data":"7ba4688fc263c1aed714211d1515d8de9f81d9d620cdc2a0a4a94d12379599b0"} Jan 08 23:37:42 crc kubenswrapper[4945]: I0108 23:37:42.729405 4945 generic.go:334] "Generic (PLEG): container finished" podID="3773ecb5-8e59-462c-9fc6-323c779b2544" containerID="33a4af0b086fbfcabe69a8e7c7988dfc06b32af17d4f4a43c92a55eb98b2ab95" exitCode=0 Jan 08 23:37:42 crc kubenswrapper[4945]: I0108 23:37:42.729452 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bzgn7" event={"ID":"3773ecb5-8e59-462c-9fc6-323c779b2544","Type":"ContainerDied","Data":"33a4af0b086fbfcabe69a8e7c7988dfc06b32af17d4f4a43c92a55eb98b2ab95"} Jan 08 23:37:43 crc kubenswrapper[4945]: I0108 23:37:43.578609 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:37:43 crc kubenswrapper[4945]: I0108 23:37:43.579139 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.095266 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-bzgn7" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.209128 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3773ecb5-8e59-462c-9fc6-323c779b2544-combined-ca-bundle\") pod \"3773ecb5-8e59-462c-9fc6-323c779b2544\" (UID: \"3773ecb5-8e59-462c-9fc6-323c779b2544\") " Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.209239 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3773ecb5-8e59-462c-9fc6-323c779b2544-config-data\") pod \"3773ecb5-8e59-462c-9fc6-323c779b2544\" (UID: \"3773ecb5-8e59-462c-9fc6-323c779b2544\") " Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.209529 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf4n7\" (UniqueName: \"kubernetes.io/projected/3773ecb5-8e59-462c-9fc6-323c779b2544-kube-api-access-sf4n7\") pod \"3773ecb5-8e59-462c-9fc6-323c779b2544\" (UID: \"3773ecb5-8e59-462c-9fc6-323c779b2544\") " Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.218587 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3773ecb5-8e59-462c-9fc6-323c779b2544-kube-api-access-sf4n7" (OuterVolumeSpecName: "kube-api-access-sf4n7") pod "3773ecb5-8e59-462c-9fc6-323c779b2544" (UID: "3773ecb5-8e59-462c-9fc6-323c779b2544"). InnerVolumeSpecName "kube-api-access-sf4n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.220820 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-94g76" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.245559 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3773ecb5-8e59-462c-9fc6-323c779b2544-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3773ecb5-8e59-462c-9fc6-323c779b2544" (UID: "3773ecb5-8e59-462c-9fc6-323c779b2544"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.313832 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-db-sync-config-data\") pod \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\" (UID: \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\") " Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.314484 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-config-data\") pod \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\" (UID: \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\") " Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.314623 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fms5\" (UniqueName: \"kubernetes.io/projected/91d2a120-b7c1-44e5-a3c0-6720acab34a7-kube-api-access-9fms5\") pod \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\" (UID: \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\") " Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.314698 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-combined-ca-bundle\") pod \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\" (UID: \"91d2a120-b7c1-44e5-a3c0-6720acab34a7\") " Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.315193 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf4n7\" (UniqueName: \"kubernetes.io/projected/3773ecb5-8e59-462c-9fc6-323c779b2544-kube-api-access-sf4n7\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.315209 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3773ecb5-8e59-462c-9fc6-323c779b2544-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.322807 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3773ecb5-8e59-462c-9fc6-323c779b2544-config-data" (OuterVolumeSpecName: "config-data") pod "3773ecb5-8e59-462c-9fc6-323c779b2544" (UID: "3773ecb5-8e59-462c-9fc6-323c779b2544"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.342410 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d2a120-b7c1-44e5-a3c0-6720acab34a7-kube-api-access-9fms5" (OuterVolumeSpecName: "kube-api-access-9fms5") pod "91d2a120-b7c1-44e5-a3c0-6720acab34a7" (UID: "91d2a120-b7c1-44e5-a3c0-6720acab34a7"). InnerVolumeSpecName "kube-api-access-9fms5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.343981 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "91d2a120-b7c1-44e5-a3c0-6720acab34a7" (UID: "91d2a120-b7c1-44e5-a3c0-6720acab34a7"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.358248 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91d2a120-b7c1-44e5-a3c0-6720acab34a7" (UID: "91d2a120-b7c1-44e5-a3c0-6720acab34a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.389472 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-config-data" (OuterVolumeSpecName: "config-data") pod "91d2a120-b7c1-44e5-a3c0-6720acab34a7" (UID: "91d2a120-b7c1-44e5-a3c0-6720acab34a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.420457 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3773ecb5-8e59-462c-9fc6-323c779b2544-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.420533 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fms5\" (UniqueName: \"kubernetes.io/projected/91d2a120-b7c1-44e5-a3c0-6720acab34a7-kube-api-access-9fms5\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.420551 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.420565 4945 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.420576 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91d2a120-b7c1-44e5-a3c0-6720acab34a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.750271 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bzgn7" event={"ID":"3773ecb5-8e59-462c-9fc6-323c779b2544","Type":"ContainerDied","Data":"f6323b8d6358da26eeb09807c41c4644930da76f7c47dbd7e0dfb0cc96fd0968"} Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.750339 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6323b8d6358da26eeb09807c41c4644930da76f7c47dbd7e0dfb0cc96fd0968" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.750419 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-bzgn7" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.757750 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerStarted","Data":"47712dea1c8621036b0e219b143468cdb3726cb8ddcfcb6346d22152b94ca253"} Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.757810 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerStarted","Data":"653ebb4c08001d3fb7e913750104824287889d0831d244e7578069ca36f52143"} Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.757827 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerStarted","Data":"ebf5f6750c5f563b79b21cfb01492b314ed6c060af246db80a7f95a0bac985e5"} Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.762316 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-94g76" event={"ID":"91d2a120-b7c1-44e5-a3c0-6720acab34a7","Type":"ContainerDied","Data":"862f0bf334d578b8cf40849d232b2c83ec1f827356ff664ac4e64939d0c8f673"} Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.762355 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="862f0bf334d578b8cf40849d232b2c83ec1f827356ff664ac4e64939d0c8f673" Jan 08 23:37:44 crc kubenswrapper[4945]: I0108 23:37:44.762451 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-94g76" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.069511 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-vxtgk"] Jan 08 23:37:45 crc kubenswrapper[4945]: E0108 23:37:45.070047 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69fe9da9-1222-42c9-aefc-b051e72f81f7" containerName="mariadb-account-create-update" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070071 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="69fe9da9-1222-42c9-aefc-b051e72f81f7" containerName="mariadb-account-create-update" Jan 08 23:37:45 crc kubenswrapper[4945]: E0108 23:37:45.070086 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e56604b-5a70-4403-9e9e-4842d685fadd" containerName="mariadb-account-create-update" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070095 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e56604b-5a70-4403-9e9e-4842d685fadd" containerName="mariadb-account-create-update" Jan 08 23:37:45 crc kubenswrapper[4945]: E0108 23:37:45.070106 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc565fd6-de46-476f-9dc4-8e53aad38fdd" containerName="mariadb-account-create-update" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070116 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc565fd6-de46-476f-9dc4-8e53aad38fdd" containerName="mariadb-account-create-update" Jan 08 23:37:45 crc kubenswrapper[4945]: E0108 23:37:45.070128 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d2a120-b7c1-44e5-a3c0-6720acab34a7" containerName="glance-db-sync" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070136 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d2a120-b7c1-44e5-a3c0-6720acab34a7" containerName="glance-db-sync" Jan 08 23:37:45 crc kubenswrapper[4945]: E0108 
23:37:45.070156 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="008a988d-6834-4bbe-b9ec-333cfe1c534c" containerName="mariadb-database-create" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070163 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="008a988d-6834-4bbe-b9ec-333cfe1c534c" containerName="mariadb-database-create" Jan 08 23:37:45 crc kubenswrapper[4945]: E0108 23:37:45.070183 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688e13f9-5653-41da-ba2a-541ffaa8cec9" containerName="mariadb-database-create" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070191 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="688e13f9-5653-41da-ba2a-541ffaa8cec9" containerName="mariadb-database-create" Jan 08 23:37:45 crc kubenswrapper[4945]: E0108 23:37:45.070214 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a22b7e6a-80f3-478b-9655-450397f76bd7" containerName="ovn-config" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070222 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a22b7e6a-80f3-478b-9655-450397f76bd7" containerName="ovn-config" Jan 08 23:37:45 crc kubenswrapper[4945]: E0108 23:37:45.070234 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1155ea44-2cab-445e-a621-fbd85a2b31a9" containerName="swift-ring-rebalance" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070242 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="1155ea44-2cab-445e-a621-fbd85a2b31a9" containerName="swift-ring-rebalance" Jan 08 23:37:45 crc kubenswrapper[4945]: E0108 23:37:45.070256 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3773ecb5-8e59-462c-9fc6-323c779b2544" containerName="keystone-db-sync" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070263 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3773ecb5-8e59-462c-9fc6-323c779b2544" containerName="keystone-db-sync" Jan 08 23:37:45 crc kubenswrapper[4945]: E0108 23:37:45.070279 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cf1f31-7393-4381-9c13-723fe4732c95" containerName="mariadb-account-create-update" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070287 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cf1f31-7393-4381-9c13-723fe4732c95" containerName="mariadb-account-create-update" Jan 08 23:37:45 crc kubenswrapper[4945]: E0108 23:37:45.070296 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3459aa58-67f7-4d0c-a3ae-3a53bf5404b3" containerName="mariadb-database-create" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070304 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3459aa58-67f7-4d0c-a3ae-3a53bf5404b3" containerName="mariadb-database-create" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070500 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="a22b7e6a-80f3-478b-9655-450397f76bd7" containerName="ovn-config" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070515 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="69fe9da9-1222-42c9-aefc-b051e72f81f7" containerName="mariadb-account-create-update" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070525 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="3773ecb5-8e59-462c-9fc6-323c779b2544" containerName="keystone-db-sync" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070537 4945 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fc565fd6-de46-476f-9dc4-8e53aad38fdd" containerName="mariadb-account-create-update" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070548 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="1155ea44-2cab-445e-a621-fbd85a2b31a9" containerName="swift-ring-rebalance" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070558 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="688e13f9-5653-41da-ba2a-541ffaa8cec9" containerName="mariadb-database-create" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070571 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e56604b-5a70-4403-9e9e-4842d685fadd" containerName="mariadb-account-create-update" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070584 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="008a988d-6834-4bbe-b9ec-333cfe1c534c" containerName="mariadb-database-create" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070595 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d2a120-b7c1-44e5-a3c0-6720acab34a7" containerName="glance-db-sync" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070607 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="08cf1f31-7393-4381-9c13-723fe4732c95" containerName="mariadb-account-create-update" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.070622 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="3459aa58-67f7-4d0c-a3ae-3a53bf5404b3" containerName="mariadb-database-create" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.071808 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.080771 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-vxtgk"] Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.108662 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-db9bs"] Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.110892 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.125220 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.125589 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.125758 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.125930 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.126122 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bwwj4" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.235148 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-db9bs"] Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.282552 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-config-data\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.282608 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-config\") pod \"dnsmasq-dns-5c9d85d47c-vxtgk\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.282636 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnwdm\" (UniqueName: \"kubernetes.io/projected/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-kube-api-access-bnwdm\") pod \"dnsmasq-dns-5c9d85d47c-vxtgk\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.282660 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-scripts\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.282675 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-fernet-keys\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.282707 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-vxtgk\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.282744 4945 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-vxtgk\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.282760 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-combined-ca-bundle\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.282780 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4vb8\" (UniqueName: \"kubernetes.io/projected/b55731b1-a202-455d-928f-626b1303d36e-kube-api-access-t4vb8\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.282797 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-credential-keys\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.282834 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-vxtgk\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.353509 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-d4r6l"] Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.355233 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-d4r6l" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.358046 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zm8w5" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.370487 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.371107 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.384160 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-vxtgk\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.384221 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-vxtgk\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.384239 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-combined-ca-bundle\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.384259 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4vb8\" (UniqueName: \"kubernetes.io/projected/b55731b1-a202-455d-928f-626b1303d36e-kube-api-access-t4vb8\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.384277 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-credential-keys\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.384321 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-vxtgk\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.384343 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-config-data\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.384367 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-config\") pod \"dnsmasq-dns-5c9d85d47c-vxtgk\" (UID: 
\"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.384404 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnwdm\" (UniqueName: \"kubernetes.io/projected/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-kube-api-access-bnwdm\") pod \"dnsmasq-dns-5c9d85d47c-vxtgk\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.384426 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-scripts\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.384444 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-fernet-keys\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.393933 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-config\") pod \"dnsmasq-dns-5c9d85d47c-vxtgk\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.394838 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-vxtgk\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.395986 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-vxtgk\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.398450 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-vxtgk\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.399885 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-9vrvr"] Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.402793 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-credential-keys\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.403020 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.409545 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.409804 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.409926 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5jxlg" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.411821 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-config-data\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.417566 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-scripts\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.424298 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-combined-ca-bundle\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.425720 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-fernet-keys\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.453168 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-d4r6l"] Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.456126 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnwdm\" (UniqueName: \"kubernetes.io/projected/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-kube-api-access-bnwdm\") pod \"dnsmasq-dns-5c9d85d47c-vxtgk\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.460987 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4vb8\" (UniqueName: \"kubernetes.io/projected/b55731b1-a202-455d-928f-626b1303d36e-kube-api-access-t4vb8\") pod \"keystone-bootstrap-db9bs\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.488371 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9vrvr"] Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.489716 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1c46c438-5dec-4a52-b24e-110451d11489-config\") pod \"neutron-db-sync-d4r6l\" (UID: \"1c46c438-5dec-4a52-b24e-110451d11489\") " pod="openstack/neutron-db-sync-d4r6l" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.489759 
4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-db-sync-config-data\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.489808 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-config-data\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.489843 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84ch2\" (UniqueName: \"kubernetes.io/projected/050d08ce-2edb-4748-ad2d-de4183cd0188-kube-api-access-84ch2\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.489894 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c46c438-5dec-4a52-b24e-110451d11489-combined-ca-bundle\") pod \"neutron-db-sync-d4r6l\" (UID: \"1c46c438-5dec-4a52-b24e-110451d11489\") " pod="openstack/neutron-db-sync-d4r6l" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.489934 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/050d08ce-2edb-4748-ad2d-de4183cd0188-etc-machine-id\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.489965 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-combined-ca-bundle\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.490003 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sgnn\" (UniqueName: \"kubernetes.io/projected/1c46c438-5dec-4a52-b24e-110451d11489-kube-api-access-6sgnn\") pod \"neutron-db-sync-d4r6l\" (UID: \"1c46c438-5dec-4a52-b24e-110451d11489\") " pod="openstack/neutron-db-sync-d4r6l" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.490033 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-scripts\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.494378 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.571282 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.573743 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.582503 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.594588 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.596068 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-config-data\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.596114 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84ch2\" (UniqueName: \"kubernetes.io/projected/050d08ce-2edb-4748-ad2d-de4183cd0188-kube-api-access-84ch2\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.596163 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c46c438-5dec-4a52-b24e-110451d11489-combined-ca-bundle\") pod \"neutron-db-sync-d4r6l\" (UID: \"1c46c438-5dec-4a52-b24e-110451d11489\") " pod="openstack/neutron-db-sync-d4r6l" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.596207 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/050d08ce-2edb-4748-ad2d-de4183cd0188-etc-machine-id\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.596243 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-combined-ca-bundle\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.596266 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sgnn\" (UniqueName: \"kubernetes.io/projected/1c46c438-5dec-4a52-b24e-110451d11489-kube-api-access-6sgnn\") pod \"neutron-db-sync-d4r6l\" (UID: \"1c46c438-5dec-4a52-b24e-110451d11489\") " pod="openstack/neutron-db-sync-d4r6l" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.596298 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-scripts\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.596325 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1c46c438-5dec-4a52-b24e-110451d11489-config\") pod \"neutron-db-sync-d4r6l\" (UID: \"1c46c438-5dec-4a52-b24e-110451d11489\") " pod="openstack/neutron-db-sync-d4r6l" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.596346 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-db-sync-config-data\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.605421 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/050d08ce-2edb-4748-ad2d-de4183cd0188-etc-machine-id\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.616847 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-db-sync-config-data\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.619324 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-scripts\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.625011 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.627413 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-combined-ca-bundle\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.643333 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sgnn\" (UniqueName: \"kubernetes.io/projected/1c46c438-5dec-4a52-b24e-110451d11489-kube-api-access-6sgnn\") pod \"neutron-db-sync-d4r6l\" (UID: \"1c46c438-5dec-4a52-b24e-110451d11489\") " pod="openstack/neutron-db-sync-d4r6l" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.649603 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84ch2\" (UniqueName: \"kubernetes.io/projected/050d08ce-2edb-4748-ad2d-de4183cd0188-kube-api-access-84ch2\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.650453 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-config-data\") pod \"cinder-db-sync-9vrvr\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.654029 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c46c438-5dec-4a52-b24e-110451d11489-combined-ca-bundle\") pod \"neutron-db-sync-d4r6l\" (UID: \"1c46c438-5dec-4a52-b24e-110451d11489\") " pod="openstack/neutron-db-sync-d4r6l" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.659770 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1c46c438-5dec-4a52-b24e-110451d11489-config\") pod \"neutron-db-sync-d4r6l\" 
(UID: \"1c46c438-5dec-4a52-b24e-110451d11489\") " pod="openstack/neutron-db-sync-d4r6l" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.689730 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-vxtgk"] Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.691801 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.697731 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b76113a-10a6-4ff6-9ec0-a65a70f906af-log-httpd\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.697781 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-config-data\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.697807 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-scripts\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.697850 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b76113a-10a6-4ff6-9ec0-a65a70f906af-run-httpd\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.697883 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbncw\" (UniqueName: \"kubernetes.io/projected/6b76113a-10a6-4ff6-9ec0-a65a70f906af-kube-api-access-tbncw\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.697912 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.697966 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.699806 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-d4r6l" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.762694 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-j2txr"] Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.763885 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-j2txr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.771268 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-lbjzr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.771583 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.812445 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.812812 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56798b757f-544zq"] Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.813164 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b76113a-10a6-4ff6-9ec0-a65a70f906af-run-httpd\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.813283 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbncw\" (UniqueName: \"kubernetes.io/projected/6b76113a-10a6-4ff6-9ec0-a65a70f906af-kube-api-access-tbncw\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.813321 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.813474 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.813625 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b76113a-10a6-4ff6-9ec0-a65a70f906af-log-httpd\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.813687 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-config-data\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.813715 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-scripts\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.814507 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.816069 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b76113a-10a6-4ff6-9ec0-a65a70f906af-log-httpd\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.816628 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b76113a-10a6-4ff6-9ec0-a65a70f906af-run-httpd\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.820269 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="17371a82-14e3-4830-b99f-a2b46b4f4366" containerName="galera" probeResult="failure" output="command timed out" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.831748 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.833836 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-config-data\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.836424 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.838273 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-scripts\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.844691 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbncw\" (UniqueName: \"kubernetes.io/projected/6b76113a-10a6-4ff6-9ec0-a65a70f906af-kube-api-access-tbncw\") pod \"ceilometer-0\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") " pod="openstack/ceilometer-0" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.860723 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.862123 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j2txr"] Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.885873 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-x2nr9"] Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.887416 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-x2nr9" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.894686 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2xtw8" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.894727 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.896755 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-544zq"] Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.904932 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerStarted","Data":"9d661110649d6472ccf5654410ba504c83e6d83fdf1b9e883d95cf3cdb834150"} Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.913074 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-x2nr9"] Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.916653 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-logs\") pod \"placement-db-sync-j2txr\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " pod="openstack/placement-db-sync-j2txr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.916759 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-544zq\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.916782 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-combined-ca-bundle\") pod \"placement-db-sync-j2txr\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " pod="openstack/placement-db-sync-j2txr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.916815 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-config-data\") pod \"placement-db-sync-j2txr\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " pod="openstack/placement-db-sync-j2txr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.916881 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p49w\" (UniqueName: \"kubernetes.io/projected/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-kube-api-access-6p49w\") pod \"placement-db-sync-j2txr\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " pod="openstack/placement-db-sync-j2txr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.916920 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crxpq\" (UniqueName: \"kubernetes.io/projected/42dea1be-7edc-4f03-bd80-524c1e040925-kube-api-access-crxpq\") pod \"dnsmasq-dns-56798b757f-544zq\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.916945 4945 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-config\") pod \"dnsmasq-dns-56798b757f-544zq\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.916965 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-dns-svc\") pod \"dnsmasq-dns-56798b757f-544zq\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.917005 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-scripts\") pod \"placement-db-sync-j2txr\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " pod="openstack/placement-db-sync-j2txr" Jan 08 23:37:45 crc kubenswrapper[4945]: I0108 23:37:45.917023 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-544zq\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:45.999467 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.019087 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crxpq\" (UniqueName: \"kubernetes.io/projected/42dea1be-7edc-4f03-bd80-524c1e040925-kube-api-access-crxpq\") pod \"dnsmasq-dns-56798b757f-544zq\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.019148 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-config\") pod \"dnsmasq-dns-56798b757f-544zq\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.019180 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-dns-svc\") pod \"dnsmasq-dns-56798b757f-544zq\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.019214 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-scripts\") pod \"placement-db-sync-j2txr\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " pod="openstack/placement-db-sync-j2txr" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.019248 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-combined-ca-bundle\") pod \"barbican-db-sync-x2nr9\" (UID: \"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0\") " 
pod="openstack/barbican-db-sync-x2nr9" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.019271 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-544zq\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.019299 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-logs\") pod \"placement-db-sync-j2txr\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " pod="openstack/placement-db-sync-j2txr" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.019333 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-544zq\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.019359 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-combined-ca-bundle\") pod \"placement-db-sync-j2txr\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " pod="openstack/placement-db-sync-j2txr" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.019455 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-config-data\") pod \"placement-db-sync-j2txr\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " pod="openstack/placement-db-sync-j2txr" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.019609 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-db-sync-config-data\") pod \"barbican-db-sync-x2nr9\" (UID: \"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0\") " pod="openstack/barbican-db-sync-x2nr9" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.019650 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v7fd\" (UniqueName: \"kubernetes.io/projected/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-kube-api-access-7v7fd\") pod \"barbican-db-sync-x2nr9\" (UID: \"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0\") " pod="openstack/barbican-db-sync-x2nr9" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.019679 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p49w\" (UniqueName: \"kubernetes.io/projected/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-kube-api-access-6p49w\") pod \"placement-db-sync-j2txr\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " pod="openstack/placement-db-sync-j2txr" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.030916 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-544zq\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:46 crc 
kubenswrapper[4945]: I0108 23:37:46.031858 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-config\") pod \"dnsmasq-dns-56798b757f-544zq\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.032338 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-combined-ca-bundle\") pod \"placement-db-sync-j2txr\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " pod="openstack/placement-db-sync-j2txr" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.032425 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-dns-svc\") pod \"dnsmasq-dns-56798b757f-544zq\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.032725 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-logs\") pod \"placement-db-sync-j2txr\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " pod="openstack/placement-db-sync-j2txr" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.033272 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-544zq\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.037306 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-config-data\") pod \"placement-db-sync-j2txr\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " pod="openstack/placement-db-sync-j2txr" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.045852 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-scripts\") pod \"placement-db-sync-j2txr\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " pod="openstack/placement-db-sync-j2txr" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.084180 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crxpq\" (UniqueName: \"kubernetes.io/projected/42dea1be-7edc-4f03-bd80-524c1e040925-kube-api-access-crxpq\") pod \"dnsmasq-dns-56798b757f-544zq\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.096648 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p49w\" (UniqueName: \"kubernetes.io/projected/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-kube-api-access-6p49w\") pod \"placement-db-sync-j2txr\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " pod="openstack/placement-db-sync-j2txr" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.120798 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-db-sync-config-data\") pod \"barbican-db-sync-x2nr9\" (UID: \"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0\") " pod="openstack/barbican-db-sync-x2nr9" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.120848 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v7fd\" (UniqueName: \"kubernetes.io/projected/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-kube-api-access-7v7fd\") pod \"barbican-db-sync-x2nr9\" (UID: \"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0\") " pod="openstack/barbican-db-sync-x2nr9" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.120909 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-combined-ca-bundle\") pod \"barbican-db-sync-x2nr9\" (UID: \"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0\") " pod="openstack/barbican-db-sync-x2nr9" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.148267 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v7fd\" (UniqueName: \"kubernetes.io/projected/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-kube-api-access-7v7fd\") pod \"barbican-db-sync-x2nr9\" (UID: \"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0\") " pod="openstack/barbican-db-sync-x2nr9" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.148681 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-combined-ca-bundle\") pod \"barbican-db-sync-x2nr9\" (UID: \"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0\") " pod="openstack/barbican-db-sync-x2nr9" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.150505 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-db-sync-config-data\") pod \"barbican-db-sync-x2nr9\" (UID: \"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0\") " pod="openstack/barbican-db-sync-x2nr9" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.166369 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.210701 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-x2nr9" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.292424 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.293940 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.304046 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.308642 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.311364 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.311571 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zpdb9" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.324528 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j2txr" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.431017 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.431377 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.431437 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k4ms\" (UniqueName: \"kubernetes.io/projected/36d227ea-271c-46fd-8c63-b3257ddba425-kube-api-access-2k4ms\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.431486 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-config-data\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.431507 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-scripts\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.431532 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36d227ea-271c-46fd-8c63-b3257ddba425-logs\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.431559 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/36d227ea-271c-46fd-8c63-b3257ddba425-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.527301 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-d4r6l"] Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.533201 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.534254 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.534331 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k4ms\" (UniqueName: \"kubernetes.io/projected/36d227ea-271c-46fd-8c63-b3257ddba425-kube-api-access-2k4ms\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.534386 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-config-data\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.534420 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-scripts\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.534462 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36d227ea-271c-46fd-8c63-b3257ddba425-logs\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.534484 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36d227ea-271c-46fd-8c63-b3257ddba425-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.534651 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.534932 4945 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36d227ea-271c-46fd-8c63-b3257ddba425-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.537415 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36d227ea-271c-46fd-8c63-b3257ddba425-logs\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.544570 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.548366 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-scripts\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.601700 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k4ms\" (UniqueName: \"kubernetes.io/projected/36d227ea-271c-46fd-8c63-b3257ddba425-kube-api-access-2k4ms\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.629193 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-config-data\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.643290 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.654259 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-db9bs"] Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.673199 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 08 23:37:46 crc kubenswrapper[4945]: I0108 23:37:46.849981 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-vxtgk"] Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.020613 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.032276 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.045028 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.051673 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.078239 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerStarted","Data":"186fb49fb8486d9e304c533cbcfeed9ccab02336600a6e6145f4875e589c8bcf"} Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.078278 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerStarted","Data":"ab6603a3508efd8f122853e7de9d014c4096828cea6d7b8ddcd09680dfd09703"} Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.087469 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.087562 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p5vd\" (UniqueName: \"kubernetes.io/projected/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-kube-api-access-9p5vd\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.087595 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.087662 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.087713 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-logs\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.087752 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.087779 4945 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.092838 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-db9bs" event={"ID":"b55731b1-a202-455d-928f-626b1303d36e","Type":"ContainerStarted","Data":"0dd42768276aba92d5c5943375a59eceb1b5c60ca7a40e22981c43c9c75e02b2"} Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.095671 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-d4r6l" event={"ID":"1c46c438-5dec-4a52-b24e-110451d11489","Type":"ContainerStarted","Data":"fa37c2b6b512696222e9cbea01a54882140948e7fec47f81f568340d42b3a92a"} Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.098471 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" event={"ID":"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd","Type":"ContainerStarted","Data":"cd6676fe717f18ddf9883ca706887d765891628b0ee732887fea3a07cb4676ce"} Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.191553 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p5vd\" (UniqueName: \"kubernetes.io/projected/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-kube-api-access-9p5vd\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.191598 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.191654 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.191688 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-logs\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.191715 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.191740 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc 
kubenswrapper[4945]: I0108 23:37:47.191782 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.198646 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.204345 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-logs\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.209223 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.212828 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.213334 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.223955 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.225729 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p5vd\" (UniqueName: \"kubernetes.io/projected/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-kube-api-access-9p5vd\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.231630 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-544zq"] Jan 08 23:37:47 crc kubenswrapper[4945]: W0108 23:37:47.242925 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b76113a_10a6_4ff6_9ec0_a65a70f906af.slice/crio-a4fb2cd5c2829c9b6f67610513d6e23b06c06b4c549f20387429e6d67e384c6e WatchSource:0}: Error finding container 
a4fb2cd5c2829c9b6f67610513d6e23b06c06b4c549f20387429e6d67e384c6e: Status 404 returned error can't find the container with id a4fb2cd5c2829c9b6f67610513d6e23b06c06b4c549f20387429e6d67e384c6e Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.262036 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.272432 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-x2nr9"] Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.276847 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.278915 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9vrvr"] Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.391354 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.477427 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j2txr"] Jan 08 23:37:47 crc kubenswrapper[4945]: W0108 23:37:47.612236 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecf1ba9a_68ca_46ac_b5f3_5ec6f7acdf5d.slice/crio-9bb367d132c898d225efa4dafc679052619d38d9225b3c57b536ab19259d1fd7 WatchSource:0}: Error finding container 9bb367d132c898d225efa4dafc679052619d38d9225b3c57b536ab19259d1fd7: Status 404 returned error can't find the container with id 9bb367d132c898d225efa4dafc679052619d38d9225b3c57b536ab19259d1fd7 Jan 08 23:37:47 crc kubenswrapper[4945]: I0108 23:37:47.743792 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.105868 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.138210 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9vrvr" event={"ID":"050d08ce-2edb-4748-ad2d-de4183cd0188","Type":"ContainerStarted","Data":"c03394bc51ab0587ca060a94bba18795c106321801347236fc9beb5b338cff1f"} Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.140262 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x2nr9" event={"ID":"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0","Type":"ContainerStarted","Data":"b42c4f4aa5212e95bc4f248175d0cd41430c5a99949f19715cd0d843217b9d27"} Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.145661 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-db9bs" event={"ID":"b55731b1-a202-455d-928f-626b1303d36e","Type":"ContainerStarted","Data":"11c4411a90c0748f03440e5a2641a144c020846bb16a39aafa38e17332ca7787"} Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.147648 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b76113a-10a6-4ff6-9ec0-a65a70f906af","Type":"ContainerStarted","Data":"a4fb2cd5c2829c9b6f67610513d6e23b06c06b4c549f20387429e6d67e384c6e"} Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.149388 4945 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-db-sync-d4r6l" event={"ID":"1c46c438-5dec-4a52-b24e-110451d11489","Type":"ContainerStarted","Data":"507ed9d48a17db04852cb6c40036d1609461dd561a6a0987bc82a1477858fb25"} Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.164650 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" event={"ID":"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd","Type":"ContainerDied","Data":"f4c1235c87cc98568fbf08766af7aaaadaf7737f0fc48ef724ead9d5edce45b7"} Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.164600 4945 generic.go:334] "Generic (PLEG): container finished" podID="3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd" containerID="f4c1235c87cc98568fbf08766af7aaaadaf7737f0fc48ef724ead9d5edce45b7" exitCode=0 Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.185234 4945 generic.go:334] "Generic (PLEG): container finished" podID="42dea1be-7edc-4f03-bd80-524c1e040925" containerID="b0ee91dbb609235577903764cb188525510fb9ea99c823b67d6f39a7b0577884" exitCode=0 Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.185363 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-544zq" event={"ID":"42dea1be-7edc-4f03-bd80-524c1e040925","Type":"ContainerDied","Data":"b0ee91dbb609235577903764cb188525510fb9ea99c823b67d6f39a7b0577884"} Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.185401 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-544zq" event={"ID":"42dea1be-7edc-4f03-bd80-524c1e040925","Type":"ContainerStarted","Data":"0b940233dd12575366dc1f8eeaacfac2253b6c777e61881a4231285cf47a7861"} Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.206224 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-db9bs" podStartSLOduration=3.206201629 podStartE2EDuration="3.206201629s" podCreationTimestamp="2026-01-08 23:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:37:48.18126081 +0000 UTC m=+1338.492419756" watchObservedRunningTime="2026-01-08 23:37:48.206201629 +0000 UTC m=+1338.517360575" Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.206348 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-d4r6l" podStartSLOduration=3.206343612 podStartE2EDuration="3.206343612s" podCreationTimestamp="2026-01-08 23:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:37:48.19916047 +0000 UTC m=+1338.510319416" watchObservedRunningTime="2026-01-08 23:37:48.206343612 +0000 UTC m=+1338.517502558" Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.208972 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36d227ea-271c-46fd-8c63-b3257ddba425","Type":"ContainerStarted","Data":"ab4fd618c8494987653e78f96b5cb2669c7618a61a28e55f682706458f33523c"} Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.303900 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerStarted","Data":"b52e8edb411f92c614f8d6aa60a5cb5603fdace0b3e0a587a7550deca17aba43"} Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.308723 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-j2txr" event={"ID":"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d","Type":"ContainerStarted","Data":"9bb367d132c898d225efa4dafc679052619d38d9225b3c57b536ab19259d1fd7"} Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.348287 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=39.557047723 podStartE2EDuration="50.348267597s" podCreationTimestamp="2026-01-08 23:36:58 +0000 UTC" firstStartedPulling="2026-01-08 23:37:32.875800023 +0000 UTC m=+1323.186958969" lastFinishedPulling="2026-01-08 23:37:43.667019897 +0000 UTC m=+1333.978178843" observedRunningTime="2026-01-08 23:37:48.339695901 +0000 UTC m=+1338.650854847" watchObservedRunningTime="2026-01-08 23:37:48.348267597 +0000 UTC m=+1338.659426543" Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.663339 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.704369 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.730607 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnwdm\" (UniqueName: \"kubernetes.io/projected/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-kube-api-access-bnwdm\") pod \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.730658 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-ovsdbserver-sb\") pod \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.730717 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-ovsdbserver-nb\") pod \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.730755 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-dns-svc\") pod \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.730794 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-config\") pod \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\" (UID: \"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd\") " Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.761182 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-kube-api-access-bnwdm" (OuterVolumeSpecName: "kube-api-access-bnwdm") pod "3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd" (UID: "3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd"). InnerVolumeSpecName "kube-api-access-bnwdm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.783520 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.799653 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd" (UID: "3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.835242 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnwdm\" (UniqueName: \"kubernetes.io/projected/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-kube-api-access-bnwdm\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.835267 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.884803 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.918130 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-544zq"] Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.931929 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-config" (OuterVolumeSpecName: "config") pod "3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd" (UID: "3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.933464 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd" (UID: "3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.937551 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.937710 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.960903 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-kdtnn"] Jan 08 23:37:48 crc kubenswrapper[4945]: E0108 23:37:48.961824 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd" containerName="init" Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.961915 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd" containerName="init" Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.962371 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd" containerName="init" Jan 08 23:37:48 crc kubenswrapper[4945]: I0108 23:37:48.965691 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd" (UID: "3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.000561 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-kdtnn"] Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.002314 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.037095 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.039371 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.039464 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.039597 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-config\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.039711 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49644\" (UniqueName: \"kubernetes.io/projected/7de1ea8f-460e-4944-88d6-ebcccbea2119-kube-api-access-49644\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.039815 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.039940 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.040011 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.142689 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.142786 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.142854 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-config\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.142909 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49644\" (UniqueName: \"kubernetes.io/projected/7de1ea8f-460e-4944-88d6-ebcccbea2119-kube-api-access-49644\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.142946 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.145954 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.148389 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.149691 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.152084 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.152252 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-config\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.161322 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" 
(UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.199084 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49644\" (UniqueName: \"kubernetes.io/projected/7de1ea8f-460e-4944-88d6-ebcccbea2119-kube-api-access-49644\") pod \"dnsmasq-dns-56df8fb6b7-kdtnn\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.375661 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" event={"ID":"3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd","Type":"ContainerDied","Data":"cd6676fe717f18ddf9883ca706887d765891628b0ee732887fea3a07cb4676ce"} Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.375756 4945 scope.go:117] "RemoveContainer" containerID="f4c1235c87cc98568fbf08766af7aaaadaf7737f0fc48ef724ead9d5edce45b7" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.375960 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-vxtgk" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.393636 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.442253 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-544zq" event={"ID":"42dea1be-7edc-4f03-bd80-524c1e040925","Type":"ContainerStarted","Data":"8e2cc7df014d94d63eb80c53c6f34520999d5f210cb192bfcdeb71aee7e61c22"} Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.444161 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.447061 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fe2cf423-11b9-4cf7-a4d4-715201d4c6de","Type":"ContainerStarted","Data":"81b038052ae591a91c50fac582303521cc8167f77623205095ef684b189b48ba"} Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.452021 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-vxtgk"] Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.455492 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36d227ea-271c-46fd-8c63-b3257ddba425","Type":"ContainerStarted","Data":"5896f56c438aecf20fc46543ea92d3983ac5ca5ed1a69af6ac143778911c4afe"} Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.480069 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-vxtgk"] Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.486542 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56798b757f-544zq" podStartSLOduration=4.4865250230000004 podStartE2EDuration="4.486525023s" podCreationTimestamp="2026-01-08 23:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:37:49.479354221 +0000 UTC m=+1339.790513177" watchObservedRunningTime="2026-01-08 23:37:49.486525023 +0000 UTC m=+1339.797683969" Jan 08 23:37:49 crc kubenswrapper[4945]: W0108 23:37:49.827125 4945 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7de1ea8f_460e_4944_88d6_ebcccbea2119.slice/crio-83cf91cf1cc90b3339f2896cf238a3c31539a4ec3f01d638fb03adaf12548b5a WatchSource:0}: Error finding container 83cf91cf1cc90b3339f2896cf238a3c31539a4ec3f01d638fb03adaf12548b5a: Status 404 returned error can't find the container with id 83cf91cf1cc90b3339f2896cf238a3c31539a4ec3f01d638fb03adaf12548b5a Jan 08 23:37:49 crc kubenswrapper[4945]: I0108 23:37:49.833948 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-kdtnn"] Jan 08 23:37:50 crc kubenswrapper[4945]: I0108 23:37:50.049358 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd" path="/var/lib/kubelet/pods/3f19e4d5-38e3-4f45-b2fa-1b3596b4fafd/volumes" Jan 08 23:37:50 crc kubenswrapper[4945]: I0108 23:37:50.485904 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36d227ea-271c-46fd-8c63-b3257ddba425","Type":"ContainerStarted","Data":"d7b53c8a21a348618d2ea90f4cd3193587d915a3b5095daad463e57032192cbf"} Jan 08 23:37:50 crc kubenswrapper[4945]: I0108 23:37:50.486221 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="36d227ea-271c-46fd-8c63-b3257ddba425" containerName="glance-log" containerID="cri-o://5896f56c438aecf20fc46543ea92d3983ac5ca5ed1a69af6ac143778911c4afe" gracePeriod=30 Jan 08 23:37:50 crc kubenswrapper[4945]: I0108 23:37:50.486688 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="36d227ea-271c-46fd-8c63-b3257ddba425" containerName="glance-httpd" containerID="cri-o://d7b53c8a21a348618d2ea90f4cd3193587d915a3b5095daad463e57032192cbf" gracePeriod=30 Jan 08 23:37:50 crc kubenswrapper[4945]: I0108 23:37:50.508302 4945 generic.go:334] "Generic (PLEG): container finished" podID="7de1ea8f-460e-4944-88d6-ebcccbea2119" containerID="a0b433e8a29176f3296a162478c6b6e9a042df1f8295699ccb71378ea9609f32" exitCode=0 Jan 08 23:37:50 crc kubenswrapper[4945]: I0108 23:37:50.508403 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" event={"ID":"7de1ea8f-460e-4944-88d6-ebcccbea2119","Type":"ContainerDied","Data":"a0b433e8a29176f3296a162478c6b6e9a042df1f8295699ccb71378ea9609f32"} Jan 08 23:37:50 crc kubenswrapper[4945]: I0108 23:37:50.508481 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" event={"ID":"7de1ea8f-460e-4944-88d6-ebcccbea2119","Type":"ContainerStarted","Data":"83cf91cf1cc90b3339f2896cf238a3c31539a4ec3f01d638fb03adaf12548b5a"} Jan 08 23:37:50 crc kubenswrapper[4945]: I0108 23:37:50.515653 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.515636481 podStartE2EDuration="5.515636481s" podCreationTimestamp="2026-01-08 23:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:37:50.513549641 +0000 UTC m=+1340.824708587" watchObservedRunningTime="2026-01-08 23:37:50.515636481 +0000 UTC m=+1340.826795427" Jan 08 23:37:50 crc kubenswrapper[4945]: I0108 23:37:50.532042 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"fe2cf423-11b9-4cf7-a4d4-715201d4c6de","Type":"ContainerStarted","Data":"a8bc0da3af5c463ffde3a76429bc7754671ff203b56c08846d3561fb4551221a"} Jan 08 23:37:50 crc kubenswrapper[4945]: I0108 23:37:50.532180 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56798b757f-544zq" podUID="42dea1be-7edc-4f03-bd80-524c1e040925" containerName="dnsmasq-dns" containerID="cri-o://8e2cc7df014d94d63eb80c53c6f34520999d5f210cb192bfcdeb71aee7e61c22" gracePeriod=10 Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.551794 4945 generic.go:334] "Generic (PLEG): container finished" podID="36d227ea-271c-46fd-8c63-b3257ddba425" containerID="d7b53c8a21a348618d2ea90f4cd3193587d915a3b5095daad463e57032192cbf" exitCode=143 Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.552131 4945 generic.go:334] "Generic (PLEG): container finished" podID="36d227ea-271c-46fd-8c63-b3257ddba425" containerID="5896f56c438aecf20fc46543ea92d3983ac5ca5ed1a69af6ac143778911c4afe" exitCode=143 Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.551984 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36d227ea-271c-46fd-8c63-b3257ddba425","Type":"ContainerDied","Data":"d7b53c8a21a348618d2ea90f4cd3193587d915a3b5095daad463e57032192cbf"} Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.552234 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36d227ea-271c-46fd-8c63-b3257ddba425","Type":"ContainerDied","Data":"5896f56c438aecf20fc46543ea92d3983ac5ca5ed1a69af6ac143778911c4afe"} Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.567407 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" event={"ID":"7de1ea8f-460e-4944-88d6-ebcccbea2119","Type":"ContainerStarted","Data":"7f543aedadea990fccebaa450c3d82ded223678ae0a0882c86653be7724add5d"} Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.567913 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.571736 4945 generic.go:334] "Generic (PLEG): container finished" podID="42dea1be-7edc-4f03-bd80-524c1e040925" containerID="8e2cc7df014d94d63eb80c53c6f34520999d5f210cb192bfcdeb71aee7e61c22" exitCode=0 Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.571805 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-544zq" event={"ID":"42dea1be-7edc-4f03-bd80-524c1e040925","Type":"ContainerDied","Data":"8e2cc7df014d94d63eb80c53c6f34520999d5f210cb192bfcdeb71aee7e61c22"} Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.576804 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fe2cf423-11b9-4cf7-a4d4-715201d4c6de","Type":"ContainerStarted","Data":"0376618cb2bddaa5a22870ea5963b125f565cf8b829236831b630fb80986eb17"} Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.576924 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fe2cf423-11b9-4cf7-a4d4-715201d4c6de" containerName="glance-log" containerID="cri-o://a8bc0da3af5c463ffde3a76429bc7754671ff203b56c08846d3561fb4551221a" gracePeriod=30 Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.577210 4945 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="fe2cf423-11b9-4cf7-a4d4-715201d4c6de" containerName="glance-httpd" containerID="cri-o://0376618cb2bddaa5a22870ea5963b125f565cf8b829236831b630fb80986eb17" gracePeriod=30 Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.594509 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" podStartSLOduration=3.594491011 podStartE2EDuration="3.594491011s" podCreationTimestamp="2026-01-08 23:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:37:51.584120903 +0000 UTC m=+1341.895279869" watchObservedRunningTime="2026-01-08 23:37:51.594491011 +0000 UTC m=+1341.905649957" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.613709 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.613692282 podStartE2EDuration="6.613692282s" podCreationTimestamp="2026-01-08 23:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:37:51.608479277 +0000 UTC m=+1341.919638223" watchObservedRunningTime="2026-01-08 23:37:51.613692282 +0000 UTC m=+1341.924851228" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.710727 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.718878 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.818653 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crxpq\" (UniqueName: \"kubernetes.io/projected/42dea1be-7edc-4f03-bd80-524c1e040925-kube-api-access-crxpq\") pod \"42dea1be-7edc-4f03-bd80-524c1e040925\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.818924 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36d227ea-271c-46fd-8c63-b3257ddba425-logs\") pod \"36d227ea-271c-46fd-8c63-b3257ddba425\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.818965 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36d227ea-271c-46fd-8c63-b3257ddba425-httpd-run\") pod \"36d227ea-271c-46fd-8c63-b3257ddba425\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.818982 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"36d227ea-271c-46fd-8c63-b3257ddba425\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.819023 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-config\") pod \"42dea1be-7edc-4f03-bd80-524c1e040925\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.819080 4945 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-ovsdbserver-nb\") pod \"42dea1be-7edc-4f03-bd80-524c1e040925\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.819137 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-ovsdbserver-sb\") pod \"42dea1be-7edc-4f03-bd80-524c1e040925\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.819161 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k4ms\" (UniqueName: \"kubernetes.io/projected/36d227ea-271c-46fd-8c63-b3257ddba425-kube-api-access-2k4ms\") pod \"36d227ea-271c-46fd-8c63-b3257ddba425\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.819212 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-combined-ca-bundle\") pod \"36d227ea-271c-46fd-8c63-b3257ddba425\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.819232 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-dns-svc\") pod \"42dea1be-7edc-4f03-bd80-524c1e040925\" (UID: \"42dea1be-7edc-4f03-bd80-524c1e040925\") " Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.819280 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-scripts\") pod \"36d227ea-271c-46fd-8c63-b3257ddba425\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.819301 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-config-data\") pod \"36d227ea-271c-46fd-8c63-b3257ddba425\" (UID: \"36d227ea-271c-46fd-8c63-b3257ddba425\") " Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.819807 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36d227ea-271c-46fd-8c63-b3257ddba425-logs" (OuterVolumeSpecName: "logs") pod "36d227ea-271c-46fd-8c63-b3257ddba425" (UID: "36d227ea-271c-46fd-8c63-b3257ddba425"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.819808 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36d227ea-271c-46fd-8c63-b3257ddba425-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "36d227ea-271c-46fd-8c63-b3257ddba425" (UID: "36d227ea-271c-46fd-8c63-b3257ddba425"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.832858 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d227ea-271c-46fd-8c63-b3257ddba425-kube-api-access-2k4ms" (OuterVolumeSpecName: "kube-api-access-2k4ms") pod "36d227ea-271c-46fd-8c63-b3257ddba425" (UID: "36d227ea-271c-46fd-8c63-b3257ddba425"). InnerVolumeSpecName "kube-api-access-2k4ms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.836563 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42dea1be-7edc-4f03-bd80-524c1e040925-kube-api-access-crxpq" (OuterVolumeSpecName: "kube-api-access-crxpq") pod "42dea1be-7edc-4f03-bd80-524c1e040925" (UID: "42dea1be-7edc-4f03-bd80-524c1e040925"). InnerVolumeSpecName "kube-api-access-crxpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.839377 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-scripts" (OuterVolumeSpecName: "scripts") pod "36d227ea-271c-46fd-8c63-b3257ddba425" (UID: "36d227ea-271c-46fd-8c63-b3257ddba425"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.857527 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "36d227ea-271c-46fd-8c63-b3257ddba425" (UID: "36d227ea-271c-46fd-8c63-b3257ddba425"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.875281 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36d227ea-271c-46fd-8c63-b3257ddba425" (UID: "36d227ea-271c-46fd-8c63-b3257ddba425"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.887660 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "42dea1be-7edc-4f03-bd80-524c1e040925" (UID: "42dea1be-7edc-4f03-bd80-524c1e040925"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.893367 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42dea1be-7edc-4f03-bd80-524c1e040925" (UID: "42dea1be-7edc-4f03-bd80-524c1e040925"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.901455 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-config" (OuterVolumeSpecName: "config") pod "42dea1be-7edc-4f03-bd80-524c1e040925" (UID: "42dea1be-7edc-4f03-bd80-524c1e040925"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.907349 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "42dea1be-7edc-4f03-bd80-524c1e040925" (UID: "42dea1be-7edc-4f03-bd80-524c1e040925"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.921108 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.921139 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.921149 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crxpq\" (UniqueName: \"kubernetes.io/projected/42dea1be-7edc-4f03-bd80-524c1e040925-kube-api-access-crxpq\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.921159 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36d227ea-271c-46fd-8c63-b3257ddba425-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.921167 4945 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36d227ea-271c-46fd-8c63-b3257ddba425-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.921195 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.921206 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.921214 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.921223 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42dea1be-7edc-4f03-bd80-524c1e040925-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.921231 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k4ms\" (UniqueName: \"kubernetes.io/projected/36d227ea-271c-46fd-8c63-b3257ddba425-kube-api-access-2k4ms\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.921239 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.938106 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-config-data" (OuterVolumeSpecName: "config-data") pod "36d227ea-271c-46fd-8c63-b3257ddba425" (UID: "36d227ea-271c-46fd-8c63-b3257ddba425"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:37:51 crc kubenswrapper[4945]: I0108 23:37:51.947069 4945 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.030782 4945 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.030838 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d227ea-271c-46fd-8c63-b3257ddba425-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.589301 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-544zq" event={"ID":"42dea1be-7edc-4f03-bd80-524c1e040925","Type":"ContainerDied","Data":"0b940233dd12575366dc1f8eeaacfac2253b6c777e61881a4231285cf47a7861"} Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.589381 4945 scope.go:117] "RemoveContainer" containerID="8e2cc7df014d94d63eb80c53c6f34520999d5f210cb192bfcdeb71aee7e61c22" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.589593 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-544zq" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.604358 4945 generic.go:334] "Generic (PLEG): container finished" podID="fe2cf423-11b9-4cf7-a4d4-715201d4c6de" containerID="0376618cb2bddaa5a22870ea5963b125f565cf8b829236831b630fb80986eb17" exitCode=0 Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.604809 4945 generic.go:334] "Generic (PLEG): container finished" podID="fe2cf423-11b9-4cf7-a4d4-715201d4c6de" containerID="a8bc0da3af5c463ffde3a76429bc7754671ff203b56c08846d3561fb4551221a" exitCode=143 Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.604891 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fe2cf423-11b9-4cf7-a4d4-715201d4c6de","Type":"ContainerDied","Data":"0376618cb2bddaa5a22870ea5963b125f565cf8b829236831b630fb80986eb17"} Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.604927 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fe2cf423-11b9-4cf7-a4d4-715201d4c6de","Type":"ContainerDied","Data":"a8bc0da3af5c463ffde3a76429bc7754671ff203b56c08846d3561fb4551221a"} Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.618246 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36d227ea-271c-46fd-8c63-b3257ddba425","Type":"ContainerDied","Data":"ab4fd618c8494987653e78f96b5cb2669c7618a61a28e55f682706458f33523c"} Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.618281 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.624425 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-544zq"] Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.647118 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-544zq"] Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.672515 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.689540 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.706721 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:37:52 crc kubenswrapper[4945]: E0108 23:37:52.707129 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d227ea-271c-46fd-8c63-b3257ddba425" containerName="glance-log" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.707152 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d227ea-271c-46fd-8c63-b3257ddba425" containerName="glance-log" Jan 08 23:37:52 crc kubenswrapper[4945]: E0108 23:37:52.707162 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42dea1be-7edc-4f03-bd80-524c1e040925" containerName="init" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.707168 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="42dea1be-7edc-4f03-bd80-524c1e040925" containerName="init" Jan 08 23:37:52 crc kubenswrapper[4945]: E0108 23:37:52.707181 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42dea1be-7edc-4f03-bd80-524c1e040925" containerName="dnsmasq-dns" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.707190 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="42dea1be-7edc-4f03-bd80-524c1e040925" containerName="dnsmasq-dns" Jan 08 23:37:52 crc kubenswrapper[4945]: E0108 23:37:52.707207 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d227ea-271c-46fd-8c63-b3257ddba425" containerName="glance-httpd" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.707214 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d227ea-271c-46fd-8c63-b3257ddba425" containerName="glance-httpd" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.707404 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d227ea-271c-46fd-8c63-b3257ddba425" containerName="glance-httpd" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.707415 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d227ea-271c-46fd-8c63-b3257ddba425" containerName="glance-log" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.707437 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="42dea1be-7edc-4f03-bd80-524c1e040925" containerName="dnsmasq-dns" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.718396 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.720341 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.732182 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.848974 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.849084 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-scripts\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.849138 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.849169 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/113b4c74-e409-4095-801c-2ea5ab73064c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.849190 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113b4c74-e409-4095-801c-2ea5ab73064c-logs\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.849252 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-config-data\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.849718 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fql62\" (UniqueName: \"kubernetes.io/projected/113b4c74-e409-4095-801c-2ea5ab73064c-kube-api-access-fql62\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.952306 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fql62\" (UniqueName: \"kubernetes.io/projected/113b4c74-e409-4095-801c-2ea5ab73064c-kube-api-access-fql62\") pod \"glance-default-external-api-0\" (UID: 
\"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.952453 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.952499 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-scripts\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.952543 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.952573 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/113b4c74-e409-4095-801c-2ea5ab73064c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.952596 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113b4c74-e409-4095-801c-2ea5ab73064c-logs\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.952631 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-config-data\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.953410 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.953584 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113b4c74-e409-4095-801c-2ea5ab73064c-logs\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.953885 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/113b4c74-e409-4095-801c-2ea5ab73064c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 
23:37:52.959059 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.959354 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-scripts\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.961371 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-config-data\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.984903 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fql62\" (UniqueName: \"kubernetes.io/projected/113b4c74-e409-4095-801c-2ea5ab73064c-kube-api-access-fql62\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:52 crc kubenswrapper[4945]: I0108 23:37:52.993240 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:37:53 crc kubenswrapper[4945]: I0108 23:37:53.055331 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 08 23:37:53 crc kubenswrapper[4945]: I0108 23:37:53.628802 4945 generic.go:334] "Generic (PLEG): container finished" podID="b55731b1-a202-455d-928f-626b1303d36e" containerID="11c4411a90c0748f03440e5a2641a144c020846bb16a39aafa38e17332ca7787" exitCode=0 Jan 08 23:37:53 crc kubenswrapper[4945]: I0108 23:37:53.629166 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-db9bs" event={"ID":"b55731b1-a202-455d-928f-626b1303d36e","Type":"ContainerDied","Data":"11c4411a90c0748f03440e5a2641a144c020846bb16a39aafa38e17332ca7787"} Jan 08 23:37:54 crc kubenswrapper[4945]: I0108 23:37:54.013130 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36d227ea-271c-46fd-8c63-b3257ddba425" path="/var/lib/kubelet/pods/36d227ea-271c-46fd-8c63-b3257ddba425/volumes" Jan 08 23:37:54 crc kubenswrapper[4945]: I0108 23:37:54.013711 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42dea1be-7edc-4f03-bd80-524c1e040925" path="/var/lib/kubelet/pods/42dea1be-7edc-4f03-bd80-524c1e040925/volumes" Jan 08 23:37:56 crc kubenswrapper[4945]: I0108 23:37:56.153266 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:37:59 crc kubenswrapper[4945]: I0108 23:37:59.396118 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:37:59 crc kubenswrapper[4945]: I0108 23:37:59.455781 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-btlgr"] Jan 08 23:37:59 crc kubenswrapper[4945]: I0108 23:37:59.456097 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" podUID="3602d69c-6735-47da-b4ca-ef53f5e70a29" containerName="dnsmasq-dns" containerID="cri-o://31f5128f3ee2a062618b95bb3d21ba814f7cc510009648d249d8e84cbe8e1094" gracePeriod=10 Jan 08 23:37:59 crc kubenswrapper[4945]: I0108 23:37:59.699445 4945 generic.go:334] "Generic (PLEG): container finished" podID="3602d69c-6735-47da-b4ca-ef53f5e70a29" containerID="31f5128f3ee2a062618b95bb3d21ba814f7cc510009648d249d8e84cbe8e1094" exitCode=0 Jan 08 23:37:59 crc kubenswrapper[4945]: I0108 23:37:59.699525 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" event={"ID":"3602d69c-6735-47da-b4ca-ef53f5e70a29","Type":"ContainerDied","Data":"31f5128f3ee2a062618b95bb3d21ba814f7cc510009648d249d8e84cbe8e1094"} Jan 08 23:38:00 crc kubenswrapper[4945]: I0108 23:38:00.919050 4945 scope.go:117] "RemoveContainer" containerID="b0ee91dbb609235577903764cb188525510fb9ea99c823b67d6f39a7b0577884" Jan 08 23:38:04 crc kubenswrapper[4945]: I0108 23:38:04.259319 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" podUID="3602d69c-6735-47da-b4ca-ef53f5e70a29" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.117:5353: connect: connection refused" Jan 08 23:38:09 crc kubenswrapper[4945]: I0108 23:38:09.258815 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" podUID="3602d69c-6735-47da-b4ca-ef53f5e70a29" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.117:5353: connect: connection refused" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.426754 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.436791 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.555144 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4vb8\" (UniqueName: \"kubernetes.io/projected/b55731b1-a202-455d-928f-626b1303d36e-kube-api-access-t4vb8\") pod \"b55731b1-a202-455d-928f-626b1303d36e\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.555229 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-httpd-run\") pod \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.555264 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-config-data\") pod \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.555451 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-scripts\") pod \"b55731b1-a202-455d-928f-626b1303d36e\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.555526 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-logs\") pod \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.555561 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p5vd\" (UniqueName: \"kubernetes.io/projected/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-kube-api-access-9p5vd\") pod \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.555638 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.555689 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-combined-ca-bundle\") pod \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.555732 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-credential-keys\") pod \"b55731b1-a202-455d-928f-626b1303d36e\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.555781 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-scripts\") pod \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\" (UID: \"fe2cf423-11b9-4cf7-a4d4-715201d4c6de\") " Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.555817 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-config-data\") pod \"b55731b1-a202-455d-928f-626b1303d36e\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.555868 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-fernet-keys\") pod \"b55731b1-a202-455d-928f-626b1303d36e\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.555929 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-combined-ca-bundle\") pod \"b55731b1-a202-455d-928f-626b1303d36e\" (UID: \"b55731b1-a202-455d-928f-626b1303d36e\") " Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.559431 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fe2cf423-11b9-4cf7-a4d4-715201d4c6de" (UID: "fe2cf423-11b9-4cf7-a4d4-715201d4c6de"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.559707 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-logs" (OuterVolumeSpecName: "logs") pod "fe2cf423-11b9-4cf7-a4d4-715201d4c6de" (UID: "fe2cf423-11b9-4cf7-a4d4-715201d4c6de"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.566410 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b55731b1-a202-455d-928f-626b1303d36e" (UID: "b55731b1-a202-455d-928f-626b1303d36e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.567795 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-scripts" (OuterVolumeSpecName: "scripts") pod "fe2cf423-11b9-4cf7-a4d4-715201d4c6de" (UID: "fe2cf423-11b9-4cf7-a4d4-715201d4c6de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.583195 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b55731b1-a202-455d-928f-626b1303d36e-kube-api-access-t4vb8" (OuterVolumeSpecName: "kube-api-access-t4vb8") pod "b55731b1-a202-455d-928f-626b1303d36e" (UID: "b55731b1-a202-455d-928f-626b1303d36e"). InnerVolumeSpecName "kube-api-access-t4vb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.583259 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b55731b1-a202-455d-928f-626b1303d36e" (UID: "b55731b1-a202-455d-928f-626b1303d36e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.583299 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-kube-api-access-9p5vd" (OuterVolumeSpecName: "kube-api-access-9p5vd") pod "fe2cf423-11b9-4cf7-a4d4-715201d4c6de" (UID: "fe2cf423-11b9-4cf7-a4d4-715201d4c6de"). InnerVolumeSpecName "kube-api-access-9p5vd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.583634 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-scripts" (OuterVolumeSpecName: "scripts") pod "b55731b1-a202-455d-928f-626b1303d36e" (UID: "b55731b1-a202-455d-928f-626b1303d36e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.594790 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "fe2cf423-11b9-4cf7-a4d4-715201d4c6de" (UID: "fe2cf423-11b9-4cf7-a4d4-715201d4c6de"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.599564 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe2cf423-11b9-4cf7-a4d4-715201d4c6de" (UID: "fe2cf423-11b9-4cf7-a4d4-715201d4c6de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.599945 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b55731b1-a202-455d-928f-626b1303d36e" (UID: "b55731b1-a202-455d-928f-626b1303d36e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.600650 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-config-data" (OuterVolumeSpecName: "config-data") pod "b55731b1-a202-455d-928f-626b1303d36e" (UID: "b55731b1-a202-455d-928f-626b1303d36e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.627239 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-config-data" (OuterVolumeSpecName: "config-data") pod "fe2cf423-11b9-4cf7-a4d4-715201d4c6de" (UID: "fe2cf423-11b9-4cf7-a4d4-715201d4c6de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.670024 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.670055 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p5vd\" (UniqueName: \"kubernetes.io/projected/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-kube-api-access-9p5vd\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.670082 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.670093 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.670103 4945 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.670111 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.670118 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.670126 4945 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.670133 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.670142 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4vb8\" (UniqueName: \"kubernetes.io/projected/b55731b1-a202-455d-928f-626b1303d36e-kube-api-access-t4vb8\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.670149 4945 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.670156 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe2cf423-11b9-4cf7-a4d4-715201d4c6de-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.670164 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b55731b1-a202-455d-928f-626b1303d36e-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.689013 4945 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.771252 4945 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.824204 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-db9bs" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.824615 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-db9bs" event={"ID":"b55731b1-a202-455d-928f-626b1303d36e","Type":"ContainerDied","Data":"0dd42768276aba92d5c5943375a59eceb1b5c60ca7a40e22981c43c9c75e02b2"} Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.824692 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dd42768276aba92d5c5943375a59eceb1b5c60ca7a40e22981c43c9c75e02b2" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.826943 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fe2cf423-11b9-4cf7-a4d4-715201d4c6de","Type":"ContainerDied","Data":"81b038052ae591a91c50fac582303521cc8167f77623205095ef684b189b48ba"} Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.827052 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.874900 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.894129 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.914256 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 08 23:38:10 crc kubenswrapper[4945]: E0108 23:38:10.915393 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b55731b1-a202-455d-928f-626b1303d36e" containerName="keystone-bootstrap" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.915421 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b55731b1-a202-455d-928f-626b1303d36e" containerName="keystone-bootstrap" Jan 08 23:38:10 crc kubenswrapper[4945]: E0108 23:38:10.915431 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe2cf423-11b9-4cf7-a4d4-715201d4c6de" containerName="glance-log" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.915439 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe2cf423-11b9-4cf7-a4d4-715201d4c6de" containerName="glance-log" Jan 08 23:38:10 crc kubenswrapper[4945]: E0108 23:38:10.915483 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe2cf423-11b9-4cf7-a4d4-715201d4c6de" containerName="glance-httpd" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.915489 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe2cf423-11b9-4cf7-a4d4-715201d4c6de" containerName="glance-httpd" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.915678 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe2cf423-11b9-4cf7-a4d4-715201d4c6de" containerName="glance-httpd" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 
23:38:10.915692 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="b55731b1-a202-455d-928f-626b1303d36e" containerName="keystone-bootstrap" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.915699 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe2cf423-11b9-4cf7-a4d4-715201d4c6de" containerName="glance-log" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.916863 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.925378 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.925492 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.943688 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.981204 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.981295 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.981334 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe7dee5-62fc-46a9-8247-7d675b504104-logs\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.981432 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0fe7dee5-62fc-46a9-8247-7d675b504104-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.981551 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.981612 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvwhw\" (UniqueName: \"kubernetes.io/projected/0fe7dee5-62fc-46a9-8247-7d675b504104-kube-api-access-wvwhw\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 
23:38:10.981661 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:10 crc kubenswrapper[4945]: I0108 23:38:10.981886 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.083852 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.084027 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.084059 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.084084 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe7dee5-62fc-46a9-8247-7d675b504104-logs\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.084134 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0fe7dee5-62fc-46a9-8247-7d675b504104-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.084228 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.084288 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvwhw\" (UniqueName: \"kubernetes.io/projected/0fe7dee5-62fc-46a9-8247-7d675b504104-kube-api-access-wvwhw\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.084339 4945 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.085329 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.085457 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe7dee5-62fc-46a9-8247-7d675b504104-logs\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.085515 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0fe7dee5-62fc-46a9-8247-7d675b504104-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.088857 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.091817 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.092089 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.093336 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.106712 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvwhw\" (UniqueName: \"kubernetes.io/projected/0fe7dee5-62fc-46a9-8247-7d675b504104-kube-api-access-wvwhw\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.117678 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.244335 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.616788 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-db9bs"] Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.630053 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-db9bs"] Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.749750 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ntq4n"] Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.758251 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.758268 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ntq4n"] Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.760920 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.761129 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.761175 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bwwj4" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.761213 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.761469 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.901856 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-scripts\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.901915 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-credential-keys\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.902043 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-fernet-keys\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.902062 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-combined-ca-bundle\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " 
pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.902098 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-config-data\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.902176 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8knzc\" (UniqueName: \"kubernetes.io/projected/e500c7f0-c056-45f2-816d-d904fd8e18cf-kube-api-access-8knzc\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:11 crc kubenswrapper[4945]: I0108 23:38:11.954219 4945 scope.go:117] "RemoveContainer" containerID="d7b53c8a21a348618d2ea90f4cd3193587d915a3b5095daad463e57032192cbf" Jan 08 23:38:11 crc kubenswrapper[4945]: E0108 23:38:11.998871 4945 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 08 23:38:11 crc kubenswrapper[4945]: E0108 23:38:11.999058 4945 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84ch2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Se
ccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-9vrvr_openstack(050d08ce-2edb-4748-ad2d-de4183cd0188): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 08 23:38:12 crc kubenswrapper[4945]: E0108 23:38:12.000247 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-9vrvr" podUID="050d08ce-2edb-4748-ad2d-de4183cd0188" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.005059 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-scripts\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.008573 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-scripts\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.011079 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-credential-keys\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.011937 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-fernet-keys\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.012249 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-combined-ca-bundle\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.012459 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-config-data\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.012726 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8knzc\" (UniqueName: \"kubernetes.io/projected/e500c7f0-c056-45f2-816d-d904fd8e18cf-kube-api-access-8knzc\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.015972 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-fernet-keys\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.016447 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b55731b1-a202-455d-928f-626b1303d36e" path="/var/lib/kubelet/pods/b55731b1-a202-455d-928f-626b1303d36e/volumes" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.017094 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-combined-ca-bundle\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.017543 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe2cf423-11b9-4cf7-a4d4-715201d4c6de" path="/var/lib/kubelet/pods/fe2cf423-11b9-4cf7-a4d4-715201d4c6de/volumes" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.019193 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-credential-keys\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.019597 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-config-data\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.031971 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8knzc\" (UniqueName: \"kubernetes.io/projected/e500c7f0-c056-45f2-816d-d904fd8e18cf-kube-api-access-8knzc\") pod \"keystone-bootstrap-ntq4n\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.117150 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.229828 4945 scope.go:117] "RemoveContainer" containerID="5896f56c438aecf20fc46543ea92d3983ac5ca5ed1a69af6ac143778911c4afe" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.363424 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.416177 4945 scope.go:117] "RemoveContainer" containerID="0376618cb2bddaa5a22870ea5963b125f565cf8b829236831b630fb80986eb17" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.422515 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-dns-svc\") pod \"3602d69c-6735-47da-b4ca-ef53f5e70a29\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.422582 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-config\") pod \"3602d69c-6735-47da-b4ca-ef53f5e70a29\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.422659 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-ovsdbserver-sb\") pod \"3602d69c-6735-47da-b4ca-ef53f5e70a29\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.422723 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfctn\" (UniqueName: \"kubernetes.io/projected/3602d69c-6735-47da-b4ca-ef53f5e70a29-kube-api-access-dfctn\") pod \"3602d69c-6735-47da-b4ca-ef53f5e70a29\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.422794 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-ovsdbserver-nb\") pod \"3602d69c-6735-47da-b4ca-ef53f5e70a29\" (UID: \"3602d69c-6735-47da-b4ca-ef53f5e70a29\") " Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.444937 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3602d69c-6735-47da-b4ca-ef53f5e70a29-kube-api-access-dfctn" (OuterVolumeSpecName: "kube-api-access-dfctn") pod "3602d69c-6735-47da-b4ca-ef53f5e70a29" (UID: "3602d69c-6735-47da-b4ca-ef53f5e70a29"). InnerVolumeSpecName "kube-api-access-dfctn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.473529 4945 scope.go:117] "RemoveContainer" containerID="a8bc0da3af5c463ffde3a76429bc7754671ff203b56c08846d3561fb4551221a" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.476524 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ntq4n"] Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.526357 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfctn\" (UniqueName: \"kubernetes.io/projected/3602d69c-6735-47da-b4ca-ef53f5e70a29-kube-api-access-dfctn\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.605727 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.670741 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3602d69c-6735-47da-b4ca-ef53f5e70a29" (UID: "3602d69c-6735-47da-b4ca-ef53f5e70a29"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.686144 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3602d69c-6735-47da-b4ca-ef53f5e70a29" (UID: "3602d69c-6735-47da-b4ca-ef53f5e70a29"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.686650 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3602d69c-6735-47da-b4ca-ef53f5e70a29" (UID: "3602d69c-6735-47da-b4ca-ef53f5e70a29"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.703949 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-config" (OuterVolumeSpecName: "config") pod "3602d69c-6735-47da-b4ca-ef53f5e70a29" (UID: "3602d69c-6735-47da-b4ca-ef53f5e70a29"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.731803 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.731859 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.731874 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.731890 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3602d69c-6735-47da-b4ca-ef53f5e70a29-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.857104 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ntq4n" event={"ID":"e500c7f0-c056-45f2-816d-d904fd8e18cf","Type":"ContainerStarted","Data":"b2bdbe2d5d8c4f4089a11754dceb462ff90f96d7e8055610e250d38dc2a8b7a5"} Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.865936 4945 generic.go:334] "Generic (PLEG): container finished" podID="1c46c438-5dec-4a52-b24e-110451d11489" containerID="507ed9d48a17db04852cb6c40036d1609461dd561a6a0987bc82a1477858fb25" exitCode=0 Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.866096 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-d4r6l" event={"ID":"1c46c438-5dec-4a52-b24e-110451d11489","Type":"ContainerDied","Data":"507ed9d48a17db04852cb6c40036d1609461dd561a6a0987bc82a1477858fb25"} Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.876614 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x2nr9" event={"ID":"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0","Type":"ContainerStarted","Data":"6d3373416d06d09c31beb8c4a908f1408aa4767816ccbdd4c183de5420b32d29"} Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.890707 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" event={"ID":"3602d69c-6735-47da-b4ca-ef53f5e70a29","Type":"ContainerDied","Data":"92fdcc564b517d5cad25b7a4c008a327351b91816ee5118f89b9f18fd0fbabea"} Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.890796 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-btlgr" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.890800 4945 scope.go:117] "RemoveContainer" containerID="31f5128f3ee2a062618b95bb3d21ba814f7cc510009648d249d8e84cbe8e1094" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.895250 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0fe7dee5-62fc-46a9-8247-7d675b504104","Type":"ContainerStarted","Data":"19fca36f42bd4e932b4204e2cc1804ed0ade5a9f5f05f7298a4603d615cbdf60"} Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.896715 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b76113a-10a6-4ff6-9ec0-a65a70f906af","Type":"ContainerStarted","Data":"2d8c64675625b64f38ddb135175e04eceeb5c95dd30ca524d9fff77e79106f06"} Jan 08 23:38:12 crc kubenswrapper[4945]: E0108 23:38:12.897881 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-9vrvr" podUID="050d08ce-2edb-4748-ad2d-de4183cd0188" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.906120 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-x2nr9" podStartSLOduration=3.220048195 podStartE2EDuration="27.906096049s" podCreationTimestamp="2026-01-08 23:37:45 +0000 UTC" firstStartedPulling="2026-01-08 23:37:47.290164543 +0000 UTC m=+1337.601323489" lastFinishedPulling="2026-01-08 23:38:11.976212397 +0000 UTC m=+1362.287371343" observedRunningTime="2026-01-08 23:38:12.905364541 +0000 UTC m=+1363.216523487" watchObservedRunningTime="2026-01-08 23:38:12.906096049 +0000 UTC m=+1363.217254995" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.925738 4945 scope.go:117] "RemoveContainer" containerID="6ed647e63ed5ced94c25ecc087c8142d70f440529129b6cab6669745d50b1e92" Jan 08 23:38:12 crc kubenswrapper[4945]: I0108 23:38:12.954035 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-btlgr"] Jan 08 23:38:13 crc kubenswrapper[4945]: I0108 23:38:13.010753 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-btlgr"] Jan 08 23:38:13 crc kubenswrapper[4945]: I0108 23:38:13.132536 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:38:13 crc kubenswrapper[4945]: W0108 23:38:13.147728 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod113b4c74_e409_4095_801c_2ea5ab73064c.slice/crio-9958c82a8ec2a28435df84e00e5078bf326583a019bb6ca3ec6975b936606b0f WatchSource:0}: Error finding container 9958c82a8ec2a28435df84e00e5078bf326583a019bb6ca3ec6975b936606b0f: Status 404 returned error can't find the container with id 9958c82a8ec2a28435df84e00e5078bf326583a019bb6ca3ec6975b936606b0f Jan 08 23:38:13 crc kubenswrapper[4945]: I0108 23:38:13.579732 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:38:13 crc kubenswrapper[4945]: I0108 23:38:13.580322 4945 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:38:13 crc kubenswrapper[4945]: I0108 23:38:13.580391 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:38:13 crc kubenswrapper[4945]: I0108 23:38:13.581439 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ea29ffcc641534adace455f20d68f37c7c8da0950e832af522e2661b455a0c2"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 08 23:38:13 crc kubenswrapper[4945]: I0108 23:38:13.581504 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://6ea29ffcc641534adace455f20d68f37c7c8da0950e832af522e2661b455a0c2" gracePeriod=600 Jan 08 23:38:13 crc kubenswrapper[4945]: I0108 23:38:13.910368 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j2txr" event={"ID":"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d","Type":"ContainerStarted","Data":"6f22349d33e5ca7679067ee358e025809e2066d028072d0e655a6e5e5971a3f7"} Jan 08 23:38:13 crc kubenswrapper[4945]: I0108 23:38:13.913310 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ntq4n" event={"ID":"e500c7f0-c056-45f2-816d-d904fd8e18cf","Type":"ContainerStarted","Data":"aa21910ea6f2611e24574bdfa939c904fc8185af95c178b62d819c74b5854c82"} Jan 08 23:38:13 crc kubenswrapper[4945]: I0108 23:38:13.926501 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="6ea29ffcc641534adace455f20d68f37c7c8da0950e832af522e2661b455a0c2" exitCode=0 Jan 08 23:38:13 crc kubenswrapper[4945]: I0108 23:38:13.926565 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"6ea29ffcc641534adace455f20d68f37c7c8da0950e832af522e2661b455a0c2"} Jan 08 23:38:13 crc kubenswrapper[4945]: I0108 23:38:13.926599 4945 scope.go:117] "RemoveContainer" containerID="e3f7c0fd5402fc991541e7265a64423cf96ba0034b54b94c9210237909eb4a91" Jan 08 23:38:13 crc kubenswrapper[4945]: I0108 23:38:13.944167 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"113b4c74-e409-4095-801c-2ea5ab73064c","Type":"ContainerStarted","Data":"9958c82a8ec2a28435df84e00e5078bf326583a019bb6ca3ec6975b936606b0f"} Jan 08 23:38:13 crc kubenswrapper[4945]: I0108 23:38:13.966740 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0fe7dee5-62fc-46a9-8247-7d675b504104","Type":"ContainerStarted","Data":"6ccd0f46740d1dc58c2c59df8d1c0082ab5fc03664969f5fe1f78361cade28a1"} Jan 08 23:38:13 crc kubenswrapper[4945]: I0108 23:38:13.985746 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-j2txr" 
podStartSLOduration=4.624288425 podStartE2EDuration="28.985727932s" podCreationTimestamp="2026-01-08 23:37:45 +0000 UTC" firstStartedPulling="2026-01-08 23:37:47.614799611 +0000 UTC m=+1337.925958557" lastFinishedPulling="2026-01-08 23:38:11.976239118 +0000 UTC m=+1362.287398064" observedRunningTime="2026-01-08 23:38:13.941479097 +0000 UTC m=+1364.252638043" watchObservedRunningTime="2026-01-08 23:38:13.985727932 +0000 UTC m=+1364.296886878" Jan 08 23:38:13 crc kubenswrapper[4945]: I0108 23:38:13.986525 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ntq4n" podStartSLOduration=2.986520881 podStartE2EDuration="2.986520881s" podCreationTimestamp="2026-01-08 23:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:38:13.984956483 +0000 UTC m=+1364.296115439" watchObservedRunningTime="2026-01-08 23:38:13.986520881 +0000 UTC m=+1364.297679827" Jan 08 23:38:14 crc kubenswrapper[4945]: I0108 23:38:14.041121 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3602d69c-6735-47da-b4ca-ef53f5e70a29" path="/var/lib/kubelet/pods/3602d69c-6735-47da-b4ca-ef53f5e70a29/volumes" Jan 08 23:38:14 crc kubenswrapper[4945]: I0108 23:38:14.979765 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"113b4c74-e409-4095-801c-2ea5ab73064c","Type":"ContainerStarted","Data":"34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d"} Jan 08 23:38:14 crc kubenswrapper[4945]: I0108 23:38:14.983652 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0fe7dee5-62fc-46a9-8247-7d675b504104","Type":"ContainerStarted","Data":"303744c892b0ab55d1b5a28912ee8e296417aa177711168b89cf63bd774d9701"} Jan 08 23:38:14 crc kubenswrapper[4945]: I0108 23:38:14.989382 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8"} Jan 08 23:38:15 crc kubenswrapper[4945]: I0108 23:38:15.017086 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.017066733 podStartE2EDuration="5.017066733s" podCreationTimestamp="2026-01-08 23:38:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:38:15.014358858 +0000 UTC m=+1365.325517804" watchObservedRunningTime="2026-01-08 23:38:15.017066733 +0000 UTC m=+1365.328225669" Jan 08 23:38:16 crc kubenswrapper[4945]: I0108 23:38:16.707461 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-d4r6l" Jan 08 23:38:16 crc kubenswrapper[4945]: I0108 23:38:16.844155 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c46c438-5dec-4a52-b24e-110451d11489-combined-ca-bundle\") pod \"1c46c438-5dec-4a52-b24e-110451d11489\" (UID: \"1c46c438-5dec-4a52-b24e-110451d11489\") " Jan 08 23:38:16 crc kubenswrapper[4945]: I0108 23:38:16.844247 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sgnn\" (UniqueName: \"kubernetes.io/projected/1c46c438-5dec-4a52-b24e-110451d11489-kube-api-access-6sgnn\") pod \"1c46c438-5dec-4a52-b24e-110451d11489\" (UID: \"1c46c438-5dec-4a52-b24e-110451d11489\") " Jan 08 23:38:16 crc kubenswrapper[4945]: I0108 23:38:16.844335 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1c46c438-5dec-4a52-b24e-110451d11489-config\") pod \"1c46c438-5dec-4a52-b24e-110451d11489\" (UID: \"1c46c438-5dec-4a52-b24e-110451d11489\") " Jan 08 23:38:16 crc kubenswrapper[4945]: I0108 23:38:16.857174 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c46c438-5dec-4a52-b24e-110451d11489-kube-api-access-6sgnn" (OuterVolumeSpecName: "kube-api-access-6sgnn") pod "1c46c438-5dec-4a52-b24e-110451d11489" (UID: "1c46c438-5dec-4a52-b24e-110451d11489"). InnerVolumeSpecName "kube-api-access-6sgnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:38:16 crc kubenswrapper[4945]: I0108 23:38:16.881168 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c46c438-5dec-4a52-b24e-110451d11489-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c46c438-5dec-4a52-b24e-110451d11489" (UID: "1c46c438-5dec-4a52-b24e-110451d11489"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:16 crc kubenswrapper[4945]: I0108 23:38:16.884097 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c46c438-5dec-4a52-b24e-110451d11489-config" (OuterVolumeSpecName: "config") pod "1c46c438-5dec-4a52-b24e-110451d11489" (UID: "1c46c438-5dec-4a52-b24e-110451d11489"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:16 crc kubenswrapper[4945]: I0108 23:38:16.946547 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c46c438-5dec-4a52-b24e-110451d11489-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:16 crc kubenswrapper[4945]: I0108 23:38:16.946582 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sgnn\" (UniqueName: \"kubernetes.io/projected/1c46c438-5dec-4a52-b24e-110451d11489-kube-api-access-6sgnn\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:16 crc kubenswrapper[4945]: I0108 23:38:16.946592 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1c46c438-5dec-4a52-b24e-110451d11489-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.032720 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b76113a-10a6-4ff6-9ec0-a65a70f906af","Type":"ContainerStarted","Data":"a99fce2db4d78245892dfd0050a3ce7d0cd826c916a3584cd13699b65a41a67c"} Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.035385 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-d4r6l" event={"ID":"1c46c438-5dec-4a52-b24e-110451d11489","Type":"ContainerDied","Data":"fa37c2b6b512696222e9cbea01a54882140948e7fec47f81f568340d42b3a92a"} Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.035433 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa37c2b6b512696222e9cbea01a54882140948e7fec47f81f568340d42b3a92a" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.035535 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-d4r6l" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.044908 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="113b4c74-e409-4095-801c-2ea5ab73064c" containerName="glance-log" containerID="cri-o://34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d" gracePeriod=30 Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.046475 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"113b4c74-e409-4095-801c-2ea5ab73064c","Type":"ContainerStarted","Data":"e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22"} Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.046795 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="113b4c74-e409-4095-801c-2ea5ab73064c" containerName="glance-httpd" containerID="cri-o://e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22" gracePeriod=30 Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.082837 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=25.082810488 podStartE2EDuration="25.082810488s" podCreationTimestamp="2026-01-08 23:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:38:17.076632209 +0000 UTC m=+1367.387791155" watchObservedRunningTime="2026-01-08 23:38:17.082810488 +0000 UTC m=+1367.393969454" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.837275 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.912456 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d2vhx"] Jan 08 23:38:17 crc kubenswrapper[4945]: E0108 23:38:17.913301 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113b4c74-e409-4095-801c-2ea5ab73064c" containerName="glance-httpd" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.913393 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="113b4c74-e409-4095-801c-2ea5ab73064c" containerName="glance-httpd" Jan 08 23:38:17 crc kubenswrapper[4945]: E0108 23:38:17.913476 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113b4c74-e409-4095-801c-2ea5ab73064c" containerName="glance-log" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.913552 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="113b4c74-e409-4095-801c-2ea5ab73064c" containerName="glance-log" Jan 08 23:38:17 crc kubenswrapper[4945]: E0108 23:38:17.913636 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3602d69c-6735-47da-b4ca-ef53f5e70a29" containerName="init" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.913714 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3602d69c-6735-47da-b4ca-ef53f5e70a29" containerName="init" Jan 08 23:38:17 crc kubenswrapper[4945]: E0108 23:38:17.913771 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c46c438-5dec-4a52-b24e-110451d11489" containerName="neutron-db-sync" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.913819 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c46c438-5dec-4a52-b24e-110451d11489" containerName="neutron-db-sync" Jan 08 23:38:17 crc kubenswrapper[4945]: E0108 23:38:17.913876 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3602d69c-6735-47da-b4ca-ef53f5e70a29" containerName="dnsmasq-dns" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.913925 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3602d69c-6735-47da-b4ca-ef53f5e70a29" containerName="dnsmasq-dns" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.916771 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="3602d69c-6735-47da-b4ca-ef53f5e70a29" containerName="dnsmasq-dns" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.920104 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c46c438-5dec-4a52-b24e-110451d11489" containerName="neutron-db-sync" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.929984 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="113b4c74-e409-4095-801c-2ea5ab73064c" containerName="glance-log" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.930796 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="113b4c74-e409-4095-801c-2ea5ab73064c" containerName="glance-httpd" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.937131 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.957853 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d2vhx"] Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.965817 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"113b4c74-e409-4095-801c-2ea5ab73064c\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.965879 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113b4c74-e409-4095-801c-2ea5ab73064c-logs\") pod \"113b4c74-e409-4095-801c-2ea5ab73064c\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.965947 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-combined-ca-bundle\") pod \"113b4c74-e409-4095-801c-2ea5ab73064c\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.966015 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-config-data\") pod \"113b4c74-e409-4095-801c-2ea5ab73064c\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.966125 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fql62\" (UniqueName: \"kubernetes.io/projected/113b4c74-e409-4095-801c-2ea5ab73064c-kube-api-access-fql62\") pod \"113b4c74-e409-4095-801c-2ea5ab73064c\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.966182 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-scripts\") pod \"113b4c74-e409-4095-801c-2ea5ab73064c\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.966240 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/113b4c74-e409-4095-801c-2ea5ab73064c-httpd-run\") pod \"113b4c74-e409-4095-801c-2ea5ab73064c\" (UID: \"113b4c74-e409-4095-801c-2ea5ab73064c\") " Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.967749 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/113b4c74-e409-4095-801c-2ea5ab73064c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "113b4c74-e409-4095-801c-2ea5ab73064c" (UID: "113b4c74-e409-4095-801c-2ea5ab73064c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.975177 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/113b4c74-e409-4095-801c-2ea5ab73064c-logs" (OuterVolumeSpecName: "logs") pod "113b4c74-e409-4095-801c-2ea5ab73064c" (UID: "113b4c74-e409-4095-801c-2ea5ab73064c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.981707 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/113b4c74-e409-4095-801c-2ea5ab73064c-kube-api-access-fql62" (OuterVolumeSpecName: "kube-api-access-fql62") pod "113b4c74-e409-4095-801c-2ea5ab73064c" (UID: "113b4c74-e409-4095-801c-2ea5ab73064c"). InnerVolumeSpecName "kube-api-access-fql62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:38:17 crc kubenswrapper[4945]: I0108 23:38:17.983442 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "113b4c74-e409-4095-801c-2ea5ab73064c" (UID: "113b4c74-e409-4095-801c-2ea5ab73064c"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.014449 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-scripts" (OuterVolumeSpecName: "scripts") pod "113b4c74-e409-4095-801c-2ea5ab73064c" (UID: "113b4c74-e409-4095-801c-2ea5ab73064c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.078625 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv72n\" (UniqueName: \"kubernetes.io/projected/68f944b6-14dc-4ad6-968b-a29fee612e05-kube-api-access-sv72n\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.078701 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.078745 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.078782 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-config\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.078828 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-dns-svc\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.078885 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.078978 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fql62\" (UniqueName: \"kubernetes.io/projected/113b4c74-e409-4095-801c-2ea5ab73064c-kube-api-access-fql62\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.079031 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.079044 4945 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/113b4c74-e409-4095-801c-2ea5ab73064c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.079068 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.079079 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113b4c74-e409-4095-801c-2ea5ab73064c-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.120231 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-56c778dc56-gkxp6"] Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.121565 4945 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.124333 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56c778dc56-gkxp6"] Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.124428 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.128412 4945 generic.go:334] "Generic (PLEG): container finished" podID="113b4c74-e409-4095-801c-2ea5ab73064c" containerID="e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22" exitCode=0 Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.128453 4945 generic.go:334] "Generic (PLEG): container finished" podID="113b4c74-e409-4095-801c-2ea5ab73064c" containerID="34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d" exitCode=143 Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.128564 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"113b4c74-e409-4095-801c-2ea5ab73064c","Type":"ContainerDied","Data":"e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22"} Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.128606 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"113b4c74-e409-4095-801c-2ea5ab73064c","Type":"ContainerDied","Data":"34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d"} Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.128617 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"113b4c74-e409-4095-801c-2ea5ab73064c","Type":"ContainerDied","Data":"9958c82a8ec2a28435df84e00e5078bf326583a019bb6ca3ec6975b936606b0f"} Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.128641 4945 scope.go:117] "RemoveContainer" containerID="e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.129056 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.140614 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-zm8w5" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.140819 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.140959 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.141063 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.164069 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-config-data" (OuterVolumeSpecName: "config-data") pod "113b4c74-e409-4095-801c-2ea5ab73064c" (UID: "113b4c74-e409-4095-801c-2ea5ab73064c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.185315 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "113b4c74-e409-4095-801c-2ea5ab73064c" (UID: "113b4c74-e409-4095-801c-2ea5ab73064c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.185927 4945 generic.go:334] "Generic (PLEG): container finished" podID="ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d" containerID="6f22349d33e5ca7679067ee358e025809e2066d028072d0e655a6e5e5971a3f7" exitCode=0 Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.185975 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j2txr" event={"ID":"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d","Type":"ContainerDied","Data":"6f22349d33e5ca7679067ee358e025809e2066d028072d0e655a6e5e5971a3f7"} Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.188469 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.188556 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-config\") pod \"neutron-56c778dc56-gkxp6\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") " pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.188585 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.188609 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-combined-ca-bundle\") pod \"neutron-56c778dc56-gkxp6\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") " pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.188631 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-httpd-config\") pod \"neutron-56c778dc56-gkxp6\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") " pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.188660 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-config\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.188689 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-dns-svc\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.188722 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-ovndb-tls-certs\") pod \"neutron-56c778dc56-gkxp6\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") " pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.188744 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmbl7\" (UniqueName: \"kubernetes.io/projected/5e4bc49b-f408-42e8-b805-6ba01f62c232-kube-api-access-kmbl7\") pod \"neutron-56c778dc56-gkxp6\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") " pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.188783 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.188827 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv72n\" (UniqueName: \"kubernetes.io/projected/68f944b6-14dc-4ad6-968b-a29fee612e05-kube-api-access-sv72n\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.188872 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.188883 4945 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.188893 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113b4c74-e409-4095-801c-2ea5ab73064c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.189957 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.190272 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-config\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.190622 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.194698 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-dns-svc\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.194856 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.241276 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv72n\" (UniqueName: \"kubernetes.io/projected/68f944b6-14dc-4ad6-968b-a29fee612e05-kube-api-access-sv72n\") pod \"dnsmasq-dns-6b7b667979-d2vhx\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.264028 4945 scope.go:117] "RemoveContainer" containerID="34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.297652 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-config\") pod \"neutron-56c778dc56-gkxp6\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") " pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.297726 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-combined-ca-bundle\") pod \"neutron-56c778dc56-gkxp6\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") " pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.297750 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-httpd-config\") pod \"neutron-56c778dc56-gkxp6\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") " pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.297803 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-ovndb-tls-certs\") pod \"neutron-56c778dc56-gkxp6\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") " pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.297828 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmbl7\" (UniqueName: \"kubernetes.io/projected/5e4bc49b-f408-42e8-b805-6ba01f62c232-kube-api-access-kmbl7\") pod \"neutron-56c778dc56-gkxp6\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") " pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.306785 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-ovndb-tls-certs\") pod \"neutron-56c778dc56-gkxp6\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") " pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.309734 4945 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-combined-ca-bundle\") pod \"neutron-56c778dc56-gkxp6\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") " pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.314829 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmbl7\" (UniqueName: \"kubernetes.io/projected/5e4bc49b-f408-42e8-b805-6ba01f62c232-kube-api-access-kmbl7\") pod \"neutron-56c778dc56-gkxp6\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") " pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.314836 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-httpd-config\") pod \"neutron-56c778dc56-gkxp6\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") " pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.320987 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-config\") pod \"neutron-56c778dc56-gkxp6\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") " pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.350202 4945 scope.go:117] "RemoveContainer" containerID="e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22" Jan 08 23:38:18 crc kubenswrapper[4945]: E0108 23:38:18.351420 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22\": container with ID starting with e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22 not found: ID does not exist" containerID="e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.351466 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22"} err="failed to get container status \"e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22\": rpc error: code = NotFound desc = could not find container \"e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22\": container with ID starting with e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22 not found: ID does not exist" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.351498 4945 scope.go:117] "RemoveContainer" containerID="34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d" Jan 08 23:38:18 crc kubenswrapper[4945]: E0108 23:38:18.353106 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d\": container with ID starting with 34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d not found: ID does not exist" containerID="34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.353138 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d"} err="failed to get container status 
\"34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d\": rpc error: code = NotFound desc = could not find container \"34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d\": container with ID starting with 34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d not found: ID does not exist" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.353161 4945 scope.go:117] "RemoveContainer" containerID="e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.357115 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22"} err="failed to get container status \"e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22\": rpc error: code = NotFound desc = could not find container \"e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22\": container with ID starting with e2bb667b66621ce7a54aa59c966097e707176302968810b4d4495d4924144c22 not found: ID does not exist" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.357153 4945 scope.go:117] "RemoveContainer" containerID="34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.369153 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d"} err="failed to get container status \"34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d\": rpc error: code = NotFound desc = could not find container \"34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d\": container with ID starting with 34099752aa738908e424053ee507946c54df981080d7486d954adc5efe1f456d not found: ID does not exist" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.422027 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.468532 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.477714 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.498424 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.500919 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.511079 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.519564 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.521111 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.575778 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.605769 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.605811 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.605864 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-scripts\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.605903 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-config-data\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.605925 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.605949 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zztxc\" (UniqueName: \"kubernetes.io/projected/88232648-bf7d-4f3d-83e6-2a5b25b7538c-kube-api-access-zztxc\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.605979 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88232648-bf7d-4f3d-83e6-2a5b25b7538c-logs\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.606032 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/88232648-bf7d-4f3d-83e6-2a5b25b7538c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.713178 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/88232648-bf7d-4f3d-83e6-2a5b25b7538c-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.714540 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.714652 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.714766 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-scripts\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.719293 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-config-data\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.719419 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.719530 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zztxc\" (UniqueName: \"kubernetes.io/projected/88232648-bf7d-4f3d-83e6-2a5b25b7538c-kube-api-access-zztxc\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.719640 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88232648-bf7d-4f3d-83e6-2a5b25b7538c-logs\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.720180 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88232648-bf7d-4f3d-83e6-2a5b25b7538c-logs\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.717154 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/88232648-bf7d-4f3d-83e6-2a5b25b7538c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" 
Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.721049 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.731199 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.731751 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.739212 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-scripts\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.743368 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zztxc\" (UniqueName: \"kubernetes.io/projected/88232648-bf7d-4f3d-83e6-2a5b25b7538c-kube-api-access-zztxc\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.766891 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-config-data\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.796470 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " pod="openstack/glance-default-external-api-0" Jan 08 23:38:18 crc kubenswrapper[4945]: I0108 23:38:18.915701 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.090057 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d2vhx"] Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.218859 4945 generic.go:334] "Generic (PLEG): container finished" podID="3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0" containerID="6d3373416d06d09c31beb8c4a908f1408aa4767816ccbdd4c183de5420b32d29" exitCode=0 Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.218978 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x2nr9" event={"ID":"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0","Type":"ContainerDied","Data":"6d3373416d06d09c31beb8c4a908f1408aa4767816ccbdd4c183de5420b32d29"} Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.244634 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" event={"ID":"68f944b6-14dc-4ad6-968b-a29fee612e05","Type":"ContainerStarted","Data":"a5e31376d8b6e6f870c71e92bafe3b457bee7c860b6649bac84bbfa9cf3f0a3a"} Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.253589 4945 generic.go:334] "Generic (PLEG): container finished" podID="e500c7f0-c056-45f2-816d-d904fd8e18cf" containerID="aa21910ea6f2611e24574bdfa939c904fc8185af95c178b62d819c74b5854c82" exitCode=0 Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.253933 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ntq4n" event={"ID":"e500c7f0-c056-45f2-816d-d904fd8e18cf","Type":"ContainerDied","Data":"aa21910ea6f2611e24574bdfa939c904fc8185af95c178b62d819c74b5854c82"} Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.349851 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56c778dc56-gkxp6"] Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.627605 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.853559 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-j2txr" Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.957721 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p49w\" (UniqueName: \"kubernetes.io/projected/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-kube-api-access-6p49w\") pod \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.958177 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-combined-ca-bundle\") pod \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.958221 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-scripts\") pod \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.958297 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-config-data\") pod \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.958402 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-logs\") pod \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\" (UID: \"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d\") " Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.959279 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-logs" (OuterVolumeSpecName: "logs") pod "ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d" (UID: "ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.963836 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-kube-api-access-6p49w" (OuterVolumeSpecName: "kube-api-access-6p49w") pod "ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d" (UID: "ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d"). InnerVolumeSpecName "kube-api-access-6p49w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.963903 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-scripts" (OuterVolumeSpecName: "scripts") pod "ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d" (UID: "ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:19 crc kubenswrapper[4945]: I0108 23:38:19.998115 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d" (UID: "ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.021247 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-config-data" (OuterVolumeSpecName: "config-data") pod "ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d" (UID: "ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.041873 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="113b4c74-e409-4095-801c-2ea5ab73064c" path="/var/lib/kubelet/pods/113b4c74-e409-4095-801c-2ea5ab73064c/volumes" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.063455 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p49w\" (UniqueName: \"kubernetes.io/projected/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-kube-api-access-6p49w\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.063842 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.064507 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.064710 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.064793 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.275315 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"88232648-bf7d-4f3d-83e6-2a5b25b7538c","Type":"ContainerStarted","Data":"0b8d75aa88340d67f5224d36fad7ebf4c7bcd460519fbbcf51a7cc386f5f511a"} Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.277897 4945 generic.go:334] "Generic (PLEG): container finished" podID="68f944b6-14dc-4ad6-968b-a29fee612e05" containerID="0f99a87bdea1eff208101c82f4779e143be91009a93ed470a007ba1963148367" exitCode=0 Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.277941 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" event={"ID":"68f944b6-14dc-4ad6-968b-a29fee612e05","Type":"ContainerDied","Data":"0f99a87bdea1eff208101c82f4779e143be91009a93ed470a007ba1963148367"} Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.282486 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56c778dc56-gkxp6" event={"ID":"5e4bc49b-f408-42e8-b805-6ba01f62c232","Type":"ContainerStarted","Data":"5f1fa5a966c12e0f0ce73ab8d05df2190bb0336be121f257d0db8fb19f3bdb50"} Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.282522 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56c778dc56-gkxp6" event={"ID":"5e4bc49b-f408-42e8-b805-6ba01f62c232","Type":"ContainerStarted","Data":"4e3fde323e4c628301acc65502d08735f89c661dd437d36fdf2d37345e81ed6d"} Jan 
08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.282534 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56c778dc56-gkxp6" event={"ID":"5e4bc49b-f408-42e8-b805-6ba01f62c232","Type":"ContainerStarted","Data":"8fe430110bcf3e72050967787b51f45e7d445656d98f8cce71f1ce0e2f0162f4"} Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.283229 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.296624 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j2txr" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.296690 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j2txr" event={"ID":"ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d","Type":"ContainerDied","Data":"9bb367d132c898d225efa4dafc679052619d38d9225b3c57b536ab19259d1fd7"} Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.296753 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bb367d132c898d225efa4dafc679052619d38d9225b3c57b536ab19259d1fd7" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.385718 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-56c778dc56-gkxp6" podStartSLOduration=3.385698267 podStartE2EDuration="3.385698267s" podCreationTimestamp="2026-01-08 23:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:38:20.373385201 +0000 UTC m=+1370.684544147" watchObservedRunningTime="2026-01-08 23:38:20.385698267 +0000 UTC m=+1370.696857203" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.401126 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d4586c964-cfb7b"] Jan 08 23:38:20 crc kubenswrapper[4945]: E0108 23:38:20.401748 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d" containerName="placement-db-sync" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.401778 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d" containerName="placement-db-sync" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.402037 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d" containerName="placement-db-sync" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.403313 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.411912 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.412370 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.415286 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.416253 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.416453 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-lbjzr" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.432757 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d4586c964-cfb7b"] Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.474168 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-combined-ca-bundle\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.475354 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-scripts\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.475474 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-config-data\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.475622 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-internal-tls-certs\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.475766 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-public-tls-certs\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.475861 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfz5z\" (UniqueName: \"kubernetes.io/projected/3c1913ce-ea65-4745-baf8-621191c50b55-kube-api-access-tfz5z\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.475939 
4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c1913ce-ea65-4745-baf8-621191c50b55-logs\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.579971 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-config-data\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.580159 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-internal-tls-certs\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.580239 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-public-tls-certs\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.580288 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfz5z\" (UniqueName: \"kubernetes.io/projected/3c1913ce-ea65-4745-baf8-621191c50b55-kube-api-access-tfz5z\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.580320 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c1913ce-ea65-4745-baf8-621191c50b55-logs\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.580389 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-combined-ca-bundle\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.580434 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-scripts\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.587368 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c1913ce-ea65-4745-baf8-621191c50b55-logs\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.596216 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-combined-ca-bundle\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.596377 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-public-tls-certs\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.598809 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-scripts\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.603421 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfz5z\" (UniqueName: \"kubernetes.io/projected/3c1913ce-ea65-4745-baf8-621191c50b55-kube-api-access-tfz5z\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.611263 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-internal-tls-certs\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.617487 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-config-data\") pod \"placement-d4586c964-cfb7b\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.750876 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6794547bf7-wqlnm"] Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.752874 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.753910 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.763464 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.764913 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6794547bf7-wqlnm"] Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.769770 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.785632 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-combined-ca-bundle\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.785855 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-config\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.785939 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-internal-tls-certs\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.786099 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwpxn\" (UniqueName: \"kubernetes.io/projected/3b682d87-6d87-4d38-b1c5-a5e4c3664472-kube-api-access-xwpxn\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.786193 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-httpd-config\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.786366 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-ovndb-tls-certs\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.786466 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-public-tls-certs\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.885146 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.888834 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-scripts\") pod \"e500c7f0-c056-45f2-816d-d904fd8e18cf\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.888948 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-credential-keys\") pod \"e500c7f0-c056-45f2-816d-d904fd8e18cf\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.889009 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-fernet-keys\") pod \"e500c7f0-c056-45f2-816d-d904fd8e18cf\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.889051 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8knzc\" (UniqueName: \"kubernetes.io/projected/e500c7f0-c056-45f2-816d-d904fd8e18cf-kube-api-access-8knzc\") pod \"e500c7f0-c056-45f2-816d-d904fd8e18cf\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.889095 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-config-data\") pod \"e500c7f0-c056-45f2-816d-d904fd8e18cf\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.889254 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-combined-ca-bundle\") pod \"e500c7f0-c056-45f2-816d-d904fd8e18cf\" (UID: \"e500c7f0-c056-45f2-816d-d904fd8e18cf\") " Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.890116 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwpxn\" (UniqueName: \"kubernetes.io/projected/3b682d87-6d87-4d38-b1c5-a5e4c3664472-kube-api-access-xwpxn\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.890160 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-httpd-config\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.890182 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-ovndb-tls-certs\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.890226 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-public-tls-certs\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.890273 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-combined-ca-bundle\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.890313 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-config\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.890348 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-internal-tls-certs\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.898870 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e500c7f0-c056-45f2-816d-d904fd8e18cf-kube-api-access-8knzc" (OuterVolumeSpecName: "kube-api-access-8knzc") pod "e500c7f0-c056-45f2-816d-d904fd8e18cf" (UID: "e500c7f0-c056-45f2-816d-d904fd8e18cf"). InnerVolumeSpecName "kube-api-access-8knzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.903000 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-scripts" (OuterVolumeSpecName: "scripts") pod "e500c7f0-c056-45f2-816d-d904fd8e18cf" (UID: "e500c7f0-c056-45f2-816d-d904fd8e18cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.907255 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-public-tls-certs\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.909241 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e500c7f0-c056-45f2-816d-d904fd8e18cf" (UID: "e500c7f0-c056-45f2-816d-d904fd8e18cf"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.909721 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-httpd-config\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.909897 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-ovndb-tls-certs\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.910101 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-internal-tls-certs\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.910731 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-config\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.916410 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e500c7f0-c056-45f2-816d-d904fd8e18cf" (UID: "e500c7f0-c056-45f2-816d-d904fd8e18cf"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.927374 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-combined-ca-bundle\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.928180 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwpxn\" (UniqueName: \"kubernetes.io/projected/3b682d87-6d87-4d38-b1c5-a5e4c3664472-kube-api-access-xwpxn\") pod \"neutron-6794547bf7-wqlnm\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.948408 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-config-data" (OuterVolumeSpecName: "config-data") pod "e500c7f0-c056-45f2-816d-d904fd8e18cf" (UID: "e500c7f0-c056-45f2-816d-d904fd8e18cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.952924 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e500c7f0-c056-45f2-816d-d904fd8e18cf" (UID: "e500c7f0-c056-45f2-816d-d904fd8e18cf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:20 crc kubenswrapper[4945]: I0108 23:38:20.987581 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-x2nr9" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.001058 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.001191 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.001220 4945 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.001230 4945 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.001239 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8knzc\" (UniqueName: \"kubernetes.io/projected/e500c7f0-c056-45f2-816d-d904fd8e18cf-kube-api-access-8knzc\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.001249 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e500c7f0-c056-45f2-816d-d904fd8e18cf-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.111526 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v7fd\" (UniqueName: \"kubernetes.io/projected/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-kube-api-access-7v7fd\") pod \"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0\" (UID: \"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0\") " Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.111939 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-db-sync-config-data\") pod \"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0\" (UID: \"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0\") " Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.112036 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-combined-ca-bundle\") pod \"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0\" (UID: \"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0\") " Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.118618 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0" (UID: "3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.145876 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-kube-api-access-7v7fd" (OuterVolumeSpecName: "kube-api-access-7v7fd") pod "3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0" (UID: "3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0"). InnerVolumeSpecName "kube-api-access-7v7fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.182254 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.210243 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0" (UID: "3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.217309 4945 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.217347 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.217358 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v7fd\" (UniqueName: \"kubernetes.io/projected/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0-kube-api-access-7v7fd\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.246246 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.246304 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.340522 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.346182 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.351685 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ntq4n" event={"ID":"e500c7f0-c056-45f2-816d-d904fd8e18cf","Type":"ContainerDied","Data":"b2bdbe2d5d8c4f4089a11754dceb462ff90f96d7e8055610e250d38dc2a8b7a5"} Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.351732 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2bdbe2d5d8c4f4089a11754dceb462ff90f96d7e8055610e250d38dc2a8b7a5" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.351804 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ntq4n" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.362649 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-x2nr9" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.362653 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-x2nr9" event={"ID":"3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0","Type":"ContainerDied","Data":"b42c4f4aa5212e95bc4f248175d0cd41430c5a99949f19715cd0d843217b9d27"} Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.363216 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42c4f4aa5212e95bc4f248175d0cd41430c5a99949f19715cd0d843217b9d27" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.364248 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"88232648-bf7d-4f3d-83e6-2a5b25b7538c","Type":"ContainerStarted","Data":"4ebbc537247322886a422cbaf27560ae38dad9be6af2a24850f1b6a10f6ffef6"} Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.367194 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" event={"ID":"68f944b6-14dc-4ad6-968b-a29fee612e05","Type":"ContainerStarted","Data":"69f386188d97d9554be24d1e7f98363ccd3e60c6a40580eb4004918dfb705d54"} Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.367532 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.367758 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.449721 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" podStartSLOduration=4.449690314 podStartE2EDuration="4.449690314s" podCreationTimestamp="2026-01-08 23:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:38:21.447804189 +0000 UTC m=+1371.758963135" watchObservedRunningTime="2026-01-08 23:38:21.449690314 +0000 UTC m=+1371.760849270" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.492171 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-76b55d6f4b-r5hn5"] Jan 08 23:38:21 crc kubenswrapper[4945]: E0108 23:38:21.492764 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e500c7f0-c056-45f2-816d-d904fd8e18cf" containerName="keystone-bootstrap" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.492779 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e500c7f0-c056-45f2-816d-d904fd8e18cf" containerName="keystone-bootstrap" Jan 08 23:38:21 crc kubenswrapper[4945]: E0108 23:38:21.492814 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0" containerName="barbican-db-sync" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.492820 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0" containerName="barbican-db-sync" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.493030 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0" containerName="barbican-db-sync" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.493050 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e500c7f0-c056-45f2-816d-d904fd8e18cf" containerName="keystone-bootstrap" Jan 08 23:38:21 crc 
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.493832 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.497011 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.497702 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.497844 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.498151 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.498661 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bwwj4"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.498963 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.522568 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-76b55d6f4b-r5hn5"]
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.641053 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-combined-ca-bundle\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.641226 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-fernet-keys\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.641353 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-credential-keys\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.641445 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-scripts\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.641538 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-internal-tls-certs\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.641612 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-public-tls-certs\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.641650 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdfpv\" (UniqueName: \"kubernetes.io/projected/842a2e91-c7e4-4435-aa81-c1a888cf6a51-kube-api-access-vdfpv\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.641796 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-config-data\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.673604 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-654744c45f-2rmcg"]
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.675618 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-654744c45f-2rmcg"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.679284 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2xtw8"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.679515 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.679652 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.693315 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d4586c964-cfb7b"]
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.740177 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-654744c45f-2rmcg"]
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.744495 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-combined-ca-bundle\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.744565 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-fernet-keys\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.744601 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-credential-keys\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.744630 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-scripts\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.744671 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-internal-tls-certs\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.744702 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-public-tls-certs\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.744728 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdfpv\" (UniqueName: \"kubernetes.io/projected/842a2e91-c7e4-4435-aa81-c1a888cf6a51-kube-api-access-vdfpv\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.744762 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-config-data\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.763884 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-internal-tls-certs\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.764753 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-combined-ca-bundle\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.765745 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-config-data\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.771122 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-fernet-keys\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.771251 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-f5458d448-xj5lz"]
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.781710 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-public-tls-certs\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.795357 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdfpv\" (UniqueName: \"kubernetes.io/projected/842a2e91-c7e4-4435-aa81-c1a888cf6a51-kube-api-access-vdfpv\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.802690 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-credential-keys\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.812457 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-scripts\") pod \"keystone-76b55d6f4b-r5hn5\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.831221 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-f5458d448-xj5lz"]
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.831403 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-f5458d448-xj5lz"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.839015 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.851948 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-config-data-custom\") pod \"barbican-worker-654744c45f-2rmcg\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " pod="openstack/barbican-worker-654744c45f-2rmcg"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.852030 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/046bb87c-2b1c-46eb-9db3-78270701ec34-logs\") pod \"barbican-worker-654744c45f-2rmcg\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " pod="openstack/barbican-worker-654744c45f-2rmcg"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.852095 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-config-data\") pod \"barbican-worker-654744c45f-2rmcg\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " pod="openstack/barbican-worker-654744c45f-2rmcg"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.852128 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-combined-ca-bundle\") pod \"barbican-worker-654744c45f-2rmcg\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " pod="openstack/barbican-worker-654744c45f-2rmcg"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.852156 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csjpd\" (UniqueName: \"kubernetes.io/projected/046bb87c-2b1c-46eb-9db3-78270701ec34-kube-api-access-csjpd\") pod \"barbican-worker-654744c45f-2rmcg\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " pod="openstack/barbican-worker-654744c45f-2rmcg"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.904159 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d2vhx"]
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.919348 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-68f4b46db6-4tg9b"]
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.921716 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-68f4b46db6-4tg9b"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.932683 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.933949 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76b55d6f4b-r5hn5"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.947800 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-68f4b46db6-4tg9b"]
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.964436 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe6e840-6658-49dd-b547-c58c4bc1479a-config-data\") pod \"barbican-keystone-listener-f5458d448-xj5lz\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " pod="openstack/barbican-keystone-listener-f5458d448-xj5lz"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.975428 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-combined-ca-bundle\") pod \"barbican-worker-654744c45f-2rmcg\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " pod="openstack/barbican-worker-654744c45f-2rmcg"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.975629 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csjpd\" (UniqueName: \"kubernetes.io/projected/046bb87c-2b1c-46eb-9db3-78270701ec34-kube-api-access-csjpd\") pod \"barbican-worker-654744c45f-2rmcg\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " pod="openstack/barbican-worker-654744c45f-2rmcg"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.975726 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbe6e840-6658-49dd-b547-c58c4bc1479a-logs\") pod \"barbican-keystone-listener-f5458d448-xj5lz\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " pod="openstack/barbican-keystone-listener-f5458d448-xj5lz"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.975950 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbe6e840-6658-49dd-b547-c58c4bc1479a-config-data-custom\") pod \"barbican-keystone-listener-f5458d448-xj5lz\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " pod="openstack/barbican-keystone-listener-f5458d448-xj5lz"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.976101 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-config-data-custom\") pod \"barbican-worker-654744c45f-2rmcg\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " pod="openstack/barbican-worker-654744c45f-2rmcg"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.976274 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe6e840-6658-49dd-b547-c58c4bc1479a-combined-ca-bundle\") pod \"barbican-keystone-listener-f5458d448-xj5lz\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " pod="openstack/barbican-keystone-listener-f5458d448-xj5lz"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.976411 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/046bb87c-2b1c-46eb-9db3-78270701ec34-logs\") pod \"barbican-worker-654744c45f-2rmcg\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " pod="openstack/barbican-worker-654744c45f-2rmcg"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.977220 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-config-data\") pod \"barbican-worker-654744c45f-2rmcg\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " pod="openstack/barbican-worker-654744c45f-2rmcg"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.977360 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pcrq\" (UniqueName: \"kubernetes.io/projected/dbe6e840-6658-49dd-b547-c58c4bc1479a-kube-api-access-9pcrq\") pod \"barbican-keystone-listener-f5458d448-xj5lz\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " pod="openstack/barbican-keystone-listener-f5458d448-xj5lz"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.979576 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/046bb87c-2b1c-46eb-9db3-78270701ec34-logs\") pod \"barbican-worker-654744c45f-2rmcg\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " pod="openstack/barbican-worker-654744c45f-2rmcg"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.985538 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-config-data-custom\") pod \"barbican-worker-654744c45f-2rmcg\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " pod="openstack/barbican-worker-654744c45f-2rmcg"
Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.989640 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-dgqbp"]
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:21 crc kubenswrapper[4945]: I0108 23:38:21.998669 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-combined-ca-bundle\") pod \"barbican-worker-654744c45f-2rmcg\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " pod="openstack/barbican-worker-654744c45f-2rmcg" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.006957 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csjpd\" (UniqueName: \"kubernetes.io/projected/046bb87c-2b1c-46eb-9db3-78270701ec34-kube-api-access-csjpd\") pod \"barbican-worker-654744c45f-2rmcg\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " pod="openstack/barbican-worker-654744c45f-2rmcg" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.007598 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-config-data\") pod \"barbican-worker-654744c45f-2rmcg\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " pod="openstack/barbican-worker-654744c45f-2rmcg" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.031205 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-654744c45f-2rmcg" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.063855 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-dgqbp"] Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.080241 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-combined-ca-bundle\") pod \"barbican-api-68f4b46db6-4tg9b\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") " pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.080305 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pcrq\" (UniqueName: \"kubernetes.io/projected/dbe6e840-6658-49dd-b547-c58c4bc1479a-kube-api-access-9pcrq\") pod \"barbican-keystone-listener-f5458d448-xj5lz\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.080349 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe6e840-6658-49dd-b547-c58c4bc1479a-config-data\") pod \"barbican-keystone-listener-f5458d448-xj5lz\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.080378 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr67v\" (UniqueName: \"kubernetes.io/projected/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-kube-api-access-vr67v\") pod \"barbican-api-68f4b46db6-4tg9b\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") " pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.080404 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-ovsdbserver-nb\") 
pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.080429 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbe6e840-6658-49dd-b547-c58c4bc1479a-logs\") pod \"barbican-keystone-listener-f5458d448-xj5lz\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.080463 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-logs\") pod \"barbican-api-68f4b46db6-4tg9b\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") " pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.080504 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbe6e840-6658-49dd-b547-c58c4bc1479a-config-data-custom\") pod \"barbican-keystone-listener-f5458d448-xj5lz\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.080526 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.080568 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-config-data\") pod \"barbican-api-68f4b46db6-4tg9b\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") " pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.080599 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-config\") pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.080621 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.080640 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44pgx\" (UniqueName: \"kubernetes.io/projected/8ed56205-b4d2-496d-9a5f-12edf2136d61-kube-api-access-44pgx\") pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.080660 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/dbe6e840-6658-49dd-b547-c58c4bc1479a-combined-ca-bundle\") pod \"barbican-keystone-listener-f5458d448-xj5lz\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.080700 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-config-data-custom\") pod \"barbican-api-68f4b46db6-4tg9b\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") " pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.080729 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.082716 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbe6e840-6658-49dd-b547-c58c4bc1479a-logs\") pod \"barbican-keystone-listener-f5458d448-xj5lz\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.086260 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe6e840-6658-49dd-b547-c58c4bc1479a-config-data\") pod \"barbican-keystone-listener-f5458d448-xj5lz\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.088767 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbe6e840-6658-49dd-b547-c58c4bc1479a-config-data-custom\") pod \"barbican-keystone-listener-f5458d448-xj5lz\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.091579 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe6e840-6658-49dd-b547-c58c4bc1479a-combined-ca-bundle\") pod \"barbican-keystone-listener-f5458d448-xj5lz\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.107308 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pcrq\" (UniqueName: \"kubernetes.io/projected/dbe6e840-6658-49dd-b547-c58c4bc1479a-kube-api-access-9pcrq\") pod \"barbican-keystone-listener-f5458d448-xj5lz\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.174629 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6794547bf7-wqlnm"] Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.183780 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-logs\") pod \"barbican-api-68f4b46db6-4tg9b\" (UID: 
\"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") " pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.183883 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.183937 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-config-data\") pod \"barbican-api-68f4b46db6-4tg9b\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") " pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.184044 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-config\") pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.184071 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.184093 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44pgx\" (UniqueName: \"kubernetes.io/projected/8ed56205-b4d2-496d-9a5f-12edf2136d61-kube-api-access-44pgx\") pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.184124 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-config-data-custom\") pod \"barbican-api-68f4b46db6-4tg9b\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") " pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.184151 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.184191 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-combined-ca-bundle\") pod \"barbican-api-68f4b46db6-4tg9b\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") " pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.184237 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr67v\" (UniqueName: \"kubernetes.io/projected/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-kube-api-access-vr67v\") pod \"barbican-api-68f4b46db6-4tg9b\" (UID: 
\"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") " pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.184258 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.185225 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.186873 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.186896 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-config\") pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.187207 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-logs\") pod \"barbican-api-68f4b46db6-4tg9b\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") " pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.188560 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.191190 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.201520 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.202321 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-config-data-custom\") pod \"barbican-api-68f4b46db6-4tg9b\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") " pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.202619 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-combined-ca-bundle\") pod \"barbican-api-68f4b46db6-4tg9b\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") " pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.207610 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44pgx\" (UniqueName: \"kubernetes.io/projected/8ed56205-b4d2-496d-9a5f-12edf2136d61-kube-api-access-44pgx\") pod \"dnsmasq-dns-848cf88cfc-dgqbp\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.209276 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-config-data\") pod \"barbican-api-68f4b46db6-4tg9b\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") " pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.210816 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr67v\" (UniqueName: \"kubernetes.io/projected/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-kube-api-access-vr67v\") pod \"barbican-api-68f4b46db6-4tg9b\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") " pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.260021 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.391538 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:22 crc kubenswrapper[4945]: I0108 23:38:22.403410 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:23 crc kubenswrapper[4945]: I0108 23:38:23.398246 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" podUID="68f944b6-14dc-4ad6-968b-a29fee612e05" containerName="dnsmasq-dns" containerID="cri-o://69f386188d97d9554be24d1e7f98363ccd3e60c6a40580eb4004918dfb705d54" gracePeriod=10 Jan 08 23:38:24 crc kubenswrapper[4945]: I0108 23:38:24.406720 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 08 23:38:24 crc kubenswrapper[4945]: I0108 23:38:24.406863 4945 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 08 23:38:24 crc kubenswrapper[4945]: I0108 23:38:24.407878 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 08 23:38:24 crc kubenswrapper[4945]: I0108 23:38:24.415265 4945 generic.go:334] "Generic (PLEG): container finished" podID="68f944b6-14dc-4ad6-968b-a29fee612e05" containerID="69f386188d97d9554be24d1e7f98363ccd3e60c6a40580eb4004918dfb705d54" exitCode=0 Jan 08 23:38:24 crc kubenswrapper[4945]: I0108 23:38:24.415313 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" event={"ID":"68f944b6-14dc-4ad6-968b-a29fee612e05","Type":"ContainerDied","Data":"69f386188d97d9554be24d1e7f98363ccd3e60c6a40580eb4004918dfb705d54"} Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.096456 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-75cbb987cb-dt6t6"] Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.099209 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.101639 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.106672 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75cbb987cb-dt6t6"] Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.112930 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.256390 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-combined-ca-bundle\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.256461 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-config-data\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.256846 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/227e0b3d-d5ba-4265-a7b9-0419deb61603-logs\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.256979 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-internal-tls-certs\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.257058 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-config-data-custom\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.257118 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqxsw\" (UniqueName: \"kubernetes.io/projected/227e0b3d-d5ba-4265-a7b9-0419deb61603-kube-api-access-nqxsw\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.257290 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-public-tls-certs\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.359926 4945 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-public-tls-certs\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.360023 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-combined-ca-bundle\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.360064 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-config-data\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.360168 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/227e0b3d-d5ba-4265-a7b9-0419deb61603-logs\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.360203 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-internal-tls-certs\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.360225 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-config-data-custom\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.360254 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqxsw\" (UniqueName: \"kubernetes.io/projected/227e0b3d-d5ba-4265-a7b9-0419deb61603-kube-api-access-nqxsw\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.361193 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/227e0b3d-d5ba-4265-a7b9-0419deb61603-logs\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.369468 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-config-data\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.375810 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-config-data-custom\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.376791 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-combined-ca-bundle\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.379111 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-internal-tls-certs\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.380251 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqxsw\" (UniqueName: \"kubernetes.io/projected/227e0b3d-d5ba-4265-a7b9-0419deb61603-kube-api-access-nqxsw\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.387186 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-public-tls-certs\") pod \"barbican-api-75cbb987cb-dt6t6\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:25 crc kubenswrapper[4945]: I0108 23:38:25.429768 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:27 crc kubenswrapper[4945]: W0108 23:38:27.666939 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b682d87_6d87_4d38_b1c5_a5e4c3664472.slice/crio-902468eada43a63143cf4f869dba61a92838abfee293b1f8c7ed3d63d557eb36 WatchSource:0}: Error finding container 902468eada43a63143cf4f869dba61a92838abfee293b1f8c7ed3d63d557eb36: Status 404 returned error can't find the container with id 902468eada43a63143cf4f869dba61a92838abfee293b1f8c7ed3d63d557eb36 Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.065895 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.133559 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-config\") pod \"68f944b6-14dc-4ad6-968b-a29fee612e05\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.133725 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv72n\" (UniqueName: \"kubernetes.io/projected/68f944b6-14dc-4ad6-968b-a29fee612e05-kube-api-access-sv72n\") pod \"68f944b6-14dc-4ad6-968b-a29fee612e05\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.133817 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-ovsdbserver-nb\") pod \"68f944b6-14dc-4ad6-968b-a29fee612e05\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.133848 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-dns-svc\") pod \"68f944b6-14dc-4ad6-968b-a29fee612e05\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.133899 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-ovsdbserver-sb\") pod \"68f944b6-14dc-4ad6-968b-a29fee612e05\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.133944 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-dns-swift-storage-0\") pod \"68f944b6-14dc-4ad6-968b-a29fee612e05\" (UID: \"68f944b6-14dc-4ad6-968b-a29fee612e05\") " Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.144466 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68f944b6-14dc-4ad6-968b-a29fee612e05-kube-api-access-sv72n" (OuterVolumeSpecName: "kube-api-access-sv72n") pod "68f944b6-14dc-4ad6-968b-a29fee612e05" (UID: "68f944b6-14dc-4ad6-968b-a29fee612e05"). InnerVolumeSpecName "kube-api-access-sv72n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.237440 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv72n\" (UniqueName: \"kubernetes.io/projected/68f944b6-14dc-4ad6-968b-a29fee612e05-kube-api-access-sv72n\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.244379 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "68f944b6-14dc-4ad6-968b-a29fee612e05" (UID: "68f944b6-14dc-4ad6-968b-a29fee612e05"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.256543 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-config" (OuterVolumeSpecName: "config") pod "68f944b6-14dc-4ad6-968b-a29fee612e05" (UID: "68f944b6-14dc-4ad6-968b-a29fee612e05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.276332 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "68f944b6-14dc-4ad6-968b-a29fee612e05" (UID: "68f944b6-14dc-4ad6-968b-a29fee612e05"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.297001 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "68f944b6-14dc-4ad6-968b-a29fee612e05" (UID: "68f944b6-14dc-4ad6-968b-a29fee612e05"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.303213 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "68f944b6-14dc-4ad6-968b-a29fee612e05" (UID: "68f944b6-14dc-4ad6-968b-a29fee612e05"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.338919 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.338962 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.338971 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.338981 4945 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.339003 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f944b6-14dc-4ad6-968b-a29fee612e05-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.355455 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75cbb987cb-dt6t6"] Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.473507 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d4586c964-cfb7b" 
event={"ID":"3c1913ce-ea65-4745-baf8-621191c50b55","Type":"ContainerStarted","Data":"0dd038e71808305ae5cd0a96a285dd6bff4b3665e314c0a22f1ac19bea822116"} Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.475558 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6794547bf7-wqlnm" event={"ID":"3b682d87-6d87-4d38-b1c5-a5e4c3664472","Type":"ContainerStarted","Data":"902468eada43a63143cf4f869dba61a92838abfee293b1f8c7ed3d63d557eb36"} Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.477387 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75cbb987cb-dt6t6" event={"ID":"227e0b3d-d5ba-4265-a7b9-0419deb61603","Type":"ContainerStarted","Data":"2da37058507d7a534cf16b3f5044db2537edd629add3c64225613b95da29412a"} Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.485157 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" event={"ID":"68f944b6-14dc-4ad6-968b-a29fee612e05","Type":"ContainerDied","Data":"a5e31376d8b6e6f870c71e92bafe3b457bee7c860b6649bac84bbfa9cf3f0a3a"} Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.485203 4945 scope.go:117] "RemoveContainer" containerID="69f386188d97d9554be24d1e7f98363ccd3e60c6a40580eb4004918dfb705d54" Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.485430 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-d2vhx" Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.488450 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-dgqbp"] Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.508190 4945 scope.go:117] "RemoveContainer" containerID="0f99a87bdea1eff208101c82f4779e143be91009a93ed470a007ba1963148367" Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.545931 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d2vhx"] Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.575535 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d2vhx"] Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.606562 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-654744c45f-2rmcg"] Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.749008 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-76b55d6f4b-r5hn5"] Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.788098 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-68f4b46db6-4tg9b"] Jan 08 23:38:28 crc kubenswrapper[4945]: I0108 23:38:28.804605 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-f5458d448-xj5lz"] Jan 08 23:38:28 crc kubenswrapper[4945]: W0108 23:38:28.811923 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod842a2e91_c7e4_4435_aa81_c1a888cf6a51.slice/crio-79aec7511bd1139b828df755afb955eddb98e3070b90c950e6a4a87e109570a3 WatchSource:0}: Error finding container 79aec7511bd1139b828df755afb955eddb98e3070b90c950e6a4a87e109570a3: Status 404 returned error can't find the container with id 79aec7511bd1139b828df755afb955eddb98e3070b90c950e6a4a87e109570a3 Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.520877 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6794547bf7-wqlnm" 
event={"ID":"3b682d87-6d87-4d38-b1c5-a5e4c3664472","Type":"ContainerStarted","Data":"40e878309fb2714dc92ffc1ca85d0a0b40ba57da80d5ce071bad31bc2db4462c"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.521613 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.521624 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6794547bf7-wqlnm" event={"ID":"3b682d87-6d87-4d38-b1c5-a5e4c3664472","Type":"ContainerStarted","Data":"d5e19d3d92fe8055cf6b5088d170ddda70b5ab7d24cd3f1d890303c9f017d30d"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.524478 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" event={"ID":"dbe6e840-6658-49dd-b547-c58c4bc1479a","Type":"ContainerStarted","Data":"d50dc51ad1c2502ea175e1218552265a69045edc7e39e9e62e65174f82d15ec9"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.533911 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75cbb987cb-dt6t6" event={"ID":"227e0b3d-d5ba-4265-a7b9-0419deb61603","Type":"ContainerStarted","Data":"79c3e5ad5b8d05cf65c473b2c9291f7836e5f788b3ab861c6aaa651a1b04f94d"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.533967 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75cbb987cb-dt6t6" event={"ID":"227e0b3d-d5ba-4265-a7b9-0419deb61603","Type":"ContainerStarted","Data":"08ca0607ce584cf8045417f996764ffc392d3261d41883a5078094c48ae1c950"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.534270 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.534533 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.551355 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6794547bf7-wqlnm" podStartSLOduration=9.551329394 podStartE2EDuration="9.551329394s" podCreationTimestamp="2026-01-08 23:38:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:38:29.537601243 +0000 UTC m=+1379.848760189" watchObservedRunningTime="2026-01-08 23:38:29.551329394 +0000 UTC m=+1379.862488340" Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.562587 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"88232648-bf7d-4f3d-83e6-2a5b25b7538c","Type":"ContainerStarted","Data":"976ce599d8bf2a0fae3d64f5de111ba1b82a7de4dc6c09144813ddd5624a9259"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.573613 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-75cbb987cb-dt6t6" podStartSLOduration=4.573591469 podStartE2EDuration="4.573591469s" podCreationTimestamp="2026-01-08 23:38:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:38:29.568702002 +0000 UTC m=+1379.879860968" watchObservedRunningTime="2026-01-08 23:38:29.573591469 +0000 UTC m=+1379.884750415" Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.597641 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6b76113a-10a6-4ff6-9ec0-a65a70f906af","Type":"ContainerStarted","Data":"7af001ad30926675a7bf294c9e500c1a3e24de400155fe9ff73a89d740db9837"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.602558 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=11.602525926 podStartE2EDuration="11.602525926s" podCreationTimestamp="2026-01-08 23:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:38:29.594984514 +0000 UTC m=+1379.906143490" watchObservedRunningTime="2026-01-08 23:38:29.602525926 +0000 UTC m=+1379.913684872" Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.619254 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76b55d6f4b-r5hn5" event={"ID":"842a2e91-c7e4-4435-aa81-c1a888cf6a51","Type":"ContainerStarted","Data":"1c96e5de79d52bdd02863443aa5dfe69974633d8662e47791a47e4f419e2a78c"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.619320 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76b55d6f4b-r5hn5" event={"ID":"842a2e91-c7e4-4435-aa81-c1a888cf6a51","Type":"ContainerStarted","Data":"79aec7511bd1139b828df755afb955eddb98e3070b90c950e6a4a87e109570a3"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.620652 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-76b55d6f4b-r5hn5" Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.641984 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68f4b46db6-4tg9b" event={"ID":"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7","Type":"ContainerStarted","Data":"78612bfad4ece0fb4a3a9659acbaf6e6379b58f80b5f8ccc1a0f0574671af085"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.642053 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68f4b46db6-4tg9b" event={"ID":"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7","Type":"ContainerStarted","Data":"7f05985840a6a4eb2c433f2c9e15631157f43722ad528cbef5036e64fa0f8ae3"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.642062 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68f4b46db6-4tg9b" event={"ID":"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7","Type":"ContainerStarted","Data":"73712fcb5f29989d546818b216204e9e85ba4c1b6caa14fbbeeb6d0728d3d2df"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.642632 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.642658 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.644104 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9vrvr" event={"ID":"050d08ce-2edb-4748-ad2d-de4183cd0188","Type":"ContainerStarted","Data":"0f812bd914d4fa33163b30da0f6e0fb39998a79862f6047a13bf3435d8c67e95"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.648061 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-76b55d6f4b-r5hn5" podStartSLOduration=8.648041371 podStartE2EDuration="8.648041371s" podCreationTimestamp="2026-01-08 23:38:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-08 23:38:29.63716735 +0000 UTC m=+1379.948326316" watchObservedRunningTime="2026-01-08 23:38:29.648041371 +0000 UTC m=+1379.959200317" Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.667394 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-654744c45f-2rmcg" event={"ID":"046bb87c-2b1c-46eb-9db3-78270701ec34","Type":"ContainerStarted","Data":"fc4a65f0c3dad5723294cedeb1f3ce72310aa1704e670c29260684deb0f7c9c3"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.668512 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-68f4b46db6-4tg9b" podStartSLOduration=8.668468583 podStartE2EDuration="8.668468583s" podCreationTimestamp="2026-01-08 23:38:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:38:29.663675738 +0000 UTC m=+1379.974834694" watchObservedRunningTime="2026-01-08 23:38:29.668468583 +0000 UTC m=+1379.979627529" Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.695788 4945 generic.go:334] "Generic (PLEG): container finished" podID="8ed56205-b4d2-496d-9a5f-12edf2136d61" containerID="828efeac25e6dde543afa201cafafd8648815431eee0ecd6b063e1f14638a457" exitCode=0 Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.696740 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" event={"ID":"8ed56205-b4d2-496d-9a5f-12edf2136d61","Type":"ContainerDied","Data":"828efeac25e6dde543afa201cafafd8648815431eee0ecd6b063e1f14638a457"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.696773 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" event={"ID":"8ed56205-b4d2-496d-9a5f-12edf2136d61","Type":"ContainerStarted","Data":"cb96bb66faeb1bdaf3430a19cbedb04e5ac880c3383bf5d05fe694cbd7ab80d1"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.704763 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-9vrvr" podStartSLOduration=3.739504015 podStartE2EDuration="44.704734616s" podCreationTimestamp="2026-01-08 23:37:45 +0000 UTC" firstStartedPulling="2026-01-08 23:37:47.294893127 +0000 UTC m=+1337.606052073" lastFinishedPulling="2026-01-08 23:38:28.260123728 +0000 UTC m=+1378.571282674" observedRunningTime="2026-01-08 23:38:29.70036099 +0000 UTC m=+1380.011519936" watchObservedRunningTime="2026-01-08 23:38:29.704734616 +0000 UTC m=+1380.015893562" Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.719302 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d4586c964-cfb7b" event={"ID":"3c1913ce-ea65-4745-baf8-621191c50b55","Type":"ContainerStarted","Data":"bb6df235b3481ee3764d466ecffc6041436a32a6b1b67f636a1ce3c33bbc51e5"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.719350 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d4586c964-cfb7b" event={"ID":"3c1913ce-ea65-4745-baf8-621191c50b55","Type":"ContainerStarted","Data":"b249e55df18b74f9795153d23e3a9288fae557e2d8c16aad179207bff463acb4"} Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.720623 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:29 crc kubenswrapper[4945]: I0108 23:38:29.720652 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:29 crc 
kubenswrapper[4945]: I0108 23:38:29.772663 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d4586c964-cfb7b" podStartSLOduration=9.77264287 podStartE2EDuration="9.77264287s" podCreationTimestamp="2026-01-08 23:38:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:38:29.754930704 +0000 UTC m=+1380.066089640" watchObservedRunningTime="2026-01-08 23:38:29.77264287 +0000 UTC m=+1380.083801806" Jan 08 23:38:30 crc kubenswrapper[4945]: I0108 23:38:30.092494 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68f944b6-14dc-4ad6-968b-a29fee612e05" path="/var/lib/kubelet/pods/68f944b6-14dc-4ad6-968b-a29fee612e05/volumes" Jan 08 23:38:30 crc kubenswrapper[4945]: I0108 23:38:30.729714 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" event={"ID":"8ed56205-b4d2-496d-9a5f-12edf2136d61","Type":"ContainerStarted","Data":"e2bfa427614e8285711cccaa25a6dfc1e0021a7ebbcde912c69458c4a1de3948"} Jan 08 23:38:30 crc kubenswrapper[4945]: I0108 23:38:30.730175 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:30 crc kubenswrapper[4945]: I0108 23:38:30.756652 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" podStartSLOduration=9.7566214 podStartE2EDuration="9.7566214s" podCreationTimestamp="2026-01-08 23:38:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:38:30.750025902 +0000 UTC m=+1381.061184848" watchObservedRunningTime="2026-01-08 23:38:30.7566214 +0000 UTC m=+1381.067780346" Jan 08 23:38:31 crc kubenswrapper[4945]: I0108 23:38:31.749890 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" event={"ID":"dbe6e840-6658-49dd-b547-c58c4bc1479a","Type":"ContainerStarted","Data":"cbeb77761327d37cd9d6f433a669d8d50075b45d4a2017862cf0fb4848999998"} Jan 08 23:38:31 crc kubenswrapper[4945]: I0108 23:38:31.757629 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-654744c45f-2rmcg" event={"ID":"046bb87c-2b1c-46eb-9db3-78270701ec34","Type":"ContainerStarted","Data":"bef7fb091982995ae682a74e3650d7a2edfd9a14a59f6cae0ab48178e3d0612d"} Jan 08 23:38:32 crc kubenswrapper[4945]: I0108 23:38:32.768638 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-654744c45f-2rmcg" event={"ID":"046bb87c-2b1c-46eb-9db3-78270701ec34","Type":"ContainerStarted","Data":"d25ef862eebe28203473e7e0b4d587e0913d21309620357f26a462668c27fa9d"} Jan 08 23:38:32 crc kubenswrapper[4945]: I0108 23:38:32.770389 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" event={"ID":"dbe6e840-6658-49dd-b547-c58c4bc1479a","Type":"ContainerStarted","Data":"f4d36b471e22f23698a2643337f5b4f19fbe9b1e28ec3042b9fac4a9d84d2ae4"} Jan 08 23:38:32 crc kubenswrapper[4945]: I0108 23:38:32.801708 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-654744c45f-2rmcg" podStartSLOduration=9.291558059 podStartE2EDuration="11.801683129s" podCreationTimestamp="2026-01-08 23:38:21 +0000 UTC" firstStartedPulling="2026-01-08 23:38:28.640729808 +0000 UTC m=+1378.951888744" 
lastFinishedPulling="2026-01-08 23:38:31.150854868 +0000 UTC m=+1381.462013814" observedRunningTime="2026-01-08 23:38:32.790073749 +0000 UTC m=+1383.101232705" watchObservedRunningTime="2026-01-08 23:38:32.801683129 +0000 UTC m=+1383.112842075" Jan 08 23:38:32 crc kubenswrapper[4945]: I0108 23:38:32.817264 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" podStartSLOduration=9.461077409 podStartE2EDuration="11.817241503s" podCreationTimestamp="2026-01-08 23:38:21 +0000 UTC" firstStartedPulling="2026-01-08 23:38:28.830424124 +0000 UTC m=+1379.141583060" lastFinishedPulling="2026-01-08 23:38:31.186588218 +0000 UTC m=+1381.497747154" observedRunningTime="2026-01-08 23:38:32.813091363 +0000 UTC m=+1383.124250309" watchObservedRunningTime="2026-01-08 23:38:32.817241503 +0000 UTC m=+1383.128400449" Jan 08 23:38:35 crc kubenswrapper[4945]: I0108 23:38:35.804671 4945 generic.go:334] "Generic (PLEG): container finished" podID="050d08ce-2edb-4748-ad2d-de4183cd0188" containerID="0f812bd914d4fa33163b30da0f6e0fb39998a79862f6047a13bf3435d8c67e95" exitCode=0 Jan 08 23:38:35 crc kubenswrapper[4945]: I0108 23:38:35.805299 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9vrvr" event={"ID":"050d08ce-2edb-4748-ad2d-de4183cd0188","Type":"ContainerDied","Data":"0f812bd914d4fa33163b30da0f6e0fb39998a79862f6047a13bf3435d8c67e95"} Jan 08 23:38:36 crc kubenswrapper[4945]: I0108 23:38:36.158312 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-68f4b46db6-4tg9b" podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.052659 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.406262 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.509706 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-kdtnn"] Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.510072 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" podUID="7de1ea8f-460e-4944-88d6-ebcccbea2119" containerName="dnsmasq-dns" containerID="cri-o://7f543aedadea990fccebaa450c3d82ded223678ae0a0882c86653be7724add5d" gracePeriod=10 Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.617240 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.701446 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-68f4b46db6-4tg9b"] Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.701688 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-68f4b46db6-4tg9b" podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerName="barbican-api-log" containerID="cri-o://7f05985840a6a4eb2c433f2c9e15631157f43722ad528cbef5036e64fa0f8ae3" gracePeriod=30 Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.702064 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-68f4b46db6-4tg9b" 
podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerName="barbican-api" containerID="cri-o://78612bfad4ece0fb4a3a9659acbaf6e6379b58f80b5f8ccc1a0f0574671af085" gracePeriod=30 Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.711260 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-68f4b46db6-4tg9b" podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": EOF" Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.711283 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-68f4b46db6-4tg9b" podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": EOF" Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.711689 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-68f4b46db6-4tg9b" podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": EOF" Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.711766 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-68f4b46db6-4tg9b" podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": EOF" Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.726199 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-68f4b46db6-4tg9b" podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": EOF" Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.726342 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-68f4b46db6-4tg9b" podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": EOF" Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.838035 4945 generic.go:334] "Generic (PLEG): container finished" podID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerID="7f05985840a6a4eb2c433f2c9e15631157f43722ad528cbef5036e64fa0f8ae3" exitCode=143 Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.838123 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68f4b46db6-4tg9b" event={"ID":"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7","Type":"ContainerDied","Data":"7f05985840a6a4eb2c433f2c9e15631157f43722ad528cbef5036e64fa0f8ae3"} Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.846957 4945 generic.go:334] "Generic (PLEG): container finished" podID="7de1ea8f-460e-4944-88d6-ebcccbea2119" containerID="7f543aedadea990fccebaa450c3d82ded223678ae0a0882c86653be7724add5d" exitCode=0 Jan 08 23:38:37 crc kubenswrapper[4945]: I0108 23:38:37.848229 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" event={"ID":"7de1ea8f-460e-4944-88d6-ebcccbea2119","Type":"ContainerDied","Data":"7f543aedadea990fccebaa450c3d82ded223678ae0a0882c86653be7724add5d"} Jan 08 23:38:38 crc kubenswrapper[4945]: I0108 23:38:38.916848 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 08 23:38:38 crc kubenswrapper[4945]: I0108 23:38:38.917244 4945 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 08 23:38:38 crc kubenswrapper[4945]: I0108 23:38:38.986077 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 08 23:38:38 crc kubenswrapper[4945]: I0108 23:38:38.987712 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 08 23:38:39 crc kubenswrapper[4945]: I0108 23:38:39.394509 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" podUID="7de1ea8f-460e-4944-88d6-ebcccbea2119" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Jan 08 23:38:39 crc kubenswrapper[4945]: I0108 23:38:39.801882 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:38:39 crc kubenswrapper[4945]: I0108 23:38:39.870821 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9vrvr" event={"ID":"050d08ce-2edb-4748-ad2d-de4183cd0188","Type":"ContainerDied","Data":"c03394bc51ab0587ca060a94bba18795c106321801347236fc9beb5b338cff1f"} Jan 08 23:38:39 crc kubenswrapper[4945]: I0108 23:38:39.870866 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9vrvr" Jan 08 23:38:39 crc kubenswrapper[4945]: I0108 23:38:39.870902 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c03394bc51ab0587ca060a94bba18795c106321801347236fc9beb5b338cff1f" Jan 08 23:38:39 crc kubenswrapper[4945]: I0108 23:38:39.872161 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 08 23:38:39 crc kubenswrapper[4945]: I0108 23:38:39.872223 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.293680 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-scripts\") pod \"050d08ce-2edb-4748-ad2d-de4183cd0188\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.294016 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84ch2\" (UniqueName: \"kubernetes.io/projected/050d08ce-2edb-4748-ad2d-de4183cd0188-kube-api-access-84ch2\") pod \"050d08ce-2edb-4748-ad2d-de4183cd0188\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.294096 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/050d08ce-2edb-4748-ad2d-de4183cd0188-etc-machine-id\") pod \"050d08ce-2edb-4748-ad2d-de4183cd0188\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.294221 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-combined-ca-bundle\") pod \"050d08ce-2edb-4748-ad2d-de4183cd0188\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.294291 4945 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-db-sync-config-data\") pod \"050d08ce-2edb-4748-ad2d-de4183cd0188\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.294329 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-config-data\") pod \"050d08ce-2edb-4748-ad2d-de4183cd0188\" (UID: \"050d08ce-2edb-4748-ad2d-de4183cd0188\") " Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.302148 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/050d08ce-2edb-4748-ad2d-de4183cd0188-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "050d08ce-2edb-4748-ad2d-de4183cd0188" (UID: "050d08ce-2edb-4748-ad2d-de4183cd0188"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.517059 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050d08ce-2edb-4748-ad2d-de4183cd0188-kube-api-access-84ch2" (OuterVolumeSpecName: "kube-api-access-84ch2") pod "050d08ce-2edb-4748-ad2d-de4183cd0188" (UID: "050d08ce-2edb-4748-ad2d-de4183cd0188"). InnerVolumeSpecName "kube-api-access-84ch2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.525249 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "050d08ce-2edb-4748-ad2d-de4183cd0188" (UID: "050d08ce-2edb-4748-ad2d-de4183cd0188"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.525298 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "050d08ce-2edb-4748-ad2d-de4183cd0188" (UID: "050d08ce-2edb-4748-ad2d-de4183cd0188"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.526153 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-scripts" (OuterVolumeSpecName: "scripts") pod "050d08ce-2edb-4748-ad2d-de4183cd0188" (UID: "050d08ce-2edb-4748-ad2d-de4183cd0188"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.530128 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.530295 4945 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.530370 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.530440 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84ch2\" (UniqueName: \"kubernetes.io/projected/050d08ce-2edb-4748-ad2d-de4183cd0188-kube-api-access-84ch2\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.530509 4945 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/050d08ce-2edb-4748-ad2d-de4183cd0188-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.631729 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-config-data" (OuterVolumeSpecName: "config-data") pod "050d08ce-2edb-4748-ad2d-de4183cd0188" (UID: "050d08ce-2edb-4748-ad2d-de4183cd0188"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.635893 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050d08ce-2edb-4748-ad2d-de4183cd0188-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.719404 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.839076 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-dns-svc\") pod \"7de1ea8f-460e-4944-88d6-ebcccbea2119\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.839168 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-config\") pod \"7de1ea8f-460e-4944-88d6-ebcccbea2119\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.839234 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-dns-swift-storage-0\") pod \"7de1ea8f-460e-4944-88d6-ebcccbea2119\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.839370 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-ovsdbserver-sb\") pod \"7de1ea8f-460e-4944-88d6-ebcccbea2119\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.841770 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49644\" (UniqueName: \"kubernetes.io/projected/7de1ea8f-460e-4944-88d6-ebcccbea2119-kube-api-access-49644\") pod \"7de1ea8f-460e-4944-88d6-ebcccbea2119\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.841882 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-ovsdbserver-nb\") pod \"7de1ea8f-460e-4944-88d6-ebcccbea2119\" (UID: \"7de1ea8f-460e-4944-88d6-ebcccbea2119\") " Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.850896 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7de1ea8f-460e-4944-88d6-ebcccbea2119-kube-api-access-49644" (OuterVolumeSpecName: "kube-api-access-49644") pod "7de1ea8f-460e-4944-88d6-ebcccbea2119" (UID: "7de1ea8f-460e-4944-88d6-ebcccbea2119"). InnerVolumeSpecName "kube-api-access-49644". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.881628 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b76113a-10a6-4ff6-9ec0-a65a70f906af","Type":"ContainerStarted","Data":"362b1e3116c91478f3e44b192b87447563e916a1732bbcf9925fc5d78580d634"} Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.881737 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerName="ceilometer-central-agent" containerID="cri-o://2d8c64675625b64f38ddb135175e04eceeb5c95dd30ca524d9fff77e79106f06" gracePeriod=30 Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.881831 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.882067 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerName="proxy-httpd" containerID="cri-o://362b1e3116c91478f3e44b192b87447563e916a1732bbcf9925fc5d78580d634" gracePeriod=30 Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.882091 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerName="ceilometer-notification-agent" containerID="cri-o://a99fce2db4d78245892dfd0050a3ce7d0cd826c916a3584cd13699b65a41a67c" gracePeriod=30 Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.882190 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerName="sg-core" containerID="cri-o://7af001ad30926675a7bf294c9e500c1a3e24de400155fe9ff73a89d740db9837" gracePeriod=30 Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.889261 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.889704 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-kdtnn" event={"ID":"7de1ea8f-460e-4944-88d6-ebcccbea2119","Type":"ContainerDied","Data":"83cf91cf1cc90b3339f2896cf238a3c31539a4ec3f01d638fb03adaf12548b5a"} Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.889744 4945 scope.go:117] "RemoveContainer" containerID="7f543aedadea990fccebaa450c3d82ded223678ae0a0882c86653be7724add5d" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.899702 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7de1ea8f-460e-4944-88d6-ebcccbea2119" (UID: "7de1ea8f-460e-4944-88d6-ebcccbea2119"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.899863 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7de1ea8f-460e-4944-88d6-ebcccbea2119" (UID: "7de1ea8f-460e-4944-88d6-ebcccbea2119"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.910266 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7de1ea8f-460e-4944-88d6-ebcccbea2119" (UID: "7de1ea8f-460e-4944-88d6-ebcccbea2119"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.914310 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.532652359 podStartE2EDuration="55.914295812s" podCreationTimestamp="2026-01-08 23:37:45 +0000 UTC" firstStartedPulling="2026-01-08 23:37:47.245298967 +0000 UTC m=+1337.556457913" lastFinishedPulling="2026-01-08 23:38:39.62694242 +0000 UTC m=+1389.938101366" observedRunningTime="2026-01-08 23:38:40.905980932 +0000 UTC m=+1391.217139868" watchObservedRunningTime="2026-01-08 23:38:40.914295812 +0000 UTC m=+1391.225454748" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.918611 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-config" (OuterVolumeSpecName: "config") pod "7de1ea8f-460e-4944-88d6-ebcccbea2119" (UID: "7de1ea8f-460e-4944-88d6-ebcccbea2119"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.927033 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7de1ea8f-460e-4944-88d6-ebcccbea2119" (UID: "7de1ea8f-460e-4944-88d6-ebcccbea2119"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.944463 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49644\" (UniqueName: \"kubernetes.io/projected/7de1ea8f-460e-4944-88d6-ebcccbea2119-kube-api-access-49644\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.944517 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.944527 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.944542 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.944553 4945 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.944563 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7de1ea8f-460e-4944-88d6-ebcccbea2119-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:40 crc kubenswrapper[4945]: I0108 23:38:40.976791 4945 scope.go:117] "RemoveContainer" containerID="a0b433e8a29176f3296a162478c6b6e9a042df1f8295699ccb71378ea9609f32" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.147052 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 08 23:38:41 crc kubenswrapper[4945]: E0108 23:38:41.147696 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f944b6-14dc-4ad6-968b-a29fee612e05" containerName="init" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.147721 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f944b6-14dc-4ad6-968b-a29fee612e05" containerName="init" Jan 08 23:38:41 crc kubenswrapper[4945]: E0108 23:38:41.147733 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f944b6-14dc-4ad6-968b-a29fee612e05" containerName="dnsmasq-dns" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.147740 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f944b6-14dc-4ad6-968b-a29fee612e05" containerName="dnsmasq-dns" Jan 08 23:38:41 crc kubenswrapper[4945]: E0108 23:38:41.147767 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7de1ea8f-460e-4944-88d6-ebcccbea2119" containerName="dnsmasq-dns" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.147775 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de1ea8f-460e-4944-88d6-ebcccbea2119" containerName="dnsmasq-dns" Jan 08 23:38:41 crc kubenswrapper[4945]: E0108 23:38:41.147787 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050d08ce-2edb-4748-ad2d-de4183cd0188" containerName="cinder-db-sync" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.147793 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="050d08ce-2edb-4748-ad2d-de4183cd0188" containerName="cinder-db-sync" Jan 08 23:38:41 
crc kubenswrapper[4945]: E0108 23:38:41.147809 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7de1ea8f-460e-4944-88d6-ebcccbea2119" containerName="init" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.147816 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de1ea8f-460e-4944-88d6-ebcccbea2119" containerName="init" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.148043 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="050d08ce-2edb-4748-ad2d-de4183cd0188" containerName="cinder-db-sync" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.148071 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="7de1ea8f-460e-4944-88d6-ebcccbea2119" containerName="dnsmasq-dns" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.148100 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="68f944b6-14dc-4ad6-968b-a29fee612e05" containerName="dnsmasq-dns" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.149392 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.154467 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.154771 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.156518 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5jxlg" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.161607 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.161859 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.187460 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-x8bw5"] Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.189978 4945 util.go:30] "No sandbox for pod can be found. 
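Editor's note: the cpu_manager.go:410 / state_mem.go:107 / memory_manager.go:354 lines above run when the new cinder-scheduler-0 pod is admitted: containers of pods that no longer exist (the old dnsmasq and cinder-db-sync UIDs) are purged from the CPU- and memory-manager checkpoints so their assignments can be reused. A map-based sketch of that bookkeeping (data structures invented for illustration, not kubelet's):

// stale_state.go - editor's sketch of the "RemoveStaleState" cleanup logged
// above: drop checkpoint entries whose pod UID is no longer live.
package main

import "fmt"

type key struct{ podUID, container string }

func removeStaleState(assignments map[key]string, livePods map[string]bool) {
	for k := range assignments {
		if !livePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", k.podUID, k.container)
			delete(assignments, k) // "Deleted CPUSet assignment"
		}
	}
}

func main() {
	assignments := map[key]string{
		{"7de1ea8f-460e-4944-88d6-ebcccbea2119", "dnsmasq-dns"}:    "0-3",
		{"050d08ce-2edb-4748-ad2d-de4183cd0188", "cinder-db-sync"}: "0-3",
	}
	removeStaleState(assignments, map[string]bool{}) // neither pod is live
	fmt.Println(len(assignments), "assignments left")
}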
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.217214 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-x8bw5"] Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.255332 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-kdtnn"] Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.265817 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-kdtnn"] Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.266776 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.266852 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.266977 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.267018 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.267173 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-config-data\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.267236 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-config\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.267362 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-scripts\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.267406 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.267488 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bskj8\" (UniqueName: \"kubernetes.io/projected/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-kube-api-access-bskj8\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.267750 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.267821 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-dns-svc\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.267878 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78vgt\" (UniqueName: \"kubernetes.io/projected/c65a6567-6928-4df5-8b0f-ed77fefddcd8-kube-api-access-78vgt\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.369071 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.369140 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.369161 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.369194 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-config-data\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.369215 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-config\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.369255 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-scripts\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.369277 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.369305 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bskj8\" (UniqueName: \"kubernetes.io/projected/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-kube-api-access-bskj8\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.369328 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.369356 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-dns-svc\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.369384 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78vgt\" (UniqueName: \"kubernetes.io/projected/c65a6567-6928-4df5-8b0f-ed77fefddcd8-kube-api-access-78vgt\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.369418 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.369724 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.370313 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: 
\"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.370440 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.370766 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.370977 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-dns-svc\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.371283 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-config\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.375650 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.376083 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.379231 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-config-data\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.379752 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-scripts\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.389351 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78vgt\" (UniqueName: \"kubernetes.io/projected/c65a6567-6928-4df5-8b0f-ed77fefddcd8-kube-api-access-78vgt\") pod \"dnsmasq-dns-6578955fd5-x8bw5\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.391254 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bskj8\" (UniqueName: \"kubernetes.io/projected/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-kube-api-access-bskj8\") pod \"cinder-scheduler-0\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.433859 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.435881 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.450561 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.453065 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.476011 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-config-data-custom\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.476139 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-config-data\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.476167 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-scripts\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.476188 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47wnn\" (UniqueName: \"kubernetes.io/projected/f2907a49-f904-4d35-b4a6-700d951a6c33-kube-api-access-47wnn\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.476222 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2907a49-f904-4d35-b4a6-700d951a6c33-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.476253 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.476284 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2907a49-f904-4d35-b4a6-700d951a6c33-logs\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.486810 4945 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.530499 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.578473 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.578949 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2907a49-f904-4d35-b4a6-700d951a6c33-logs\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.578983 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-config-data-custom\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.579108 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-config-data\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.579139 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-scripts\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.579163 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47wnn\" (UniqueName: \"kubernetes.io/projected/f2907a49-f904-4d35-b4a6-700d951a6c33-kube-api-access-47wnn\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.579197 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2907a49-f904-4d35-b4a6-700d951a6c33-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.579285 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2907a49-f904-4d35-b4a6-700d951a6c33-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.579552 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2907a49-f904-4d35-b4a6-700d951a6c33-logs\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.584617 4945 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-scripts\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.584904 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-config-data\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.585360 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.586887 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-config-data-custom\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.613398 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47wnn\" (UniqueName: \"kubernetes.io/projected/f2907a49-f904-4d35-b4a6-700d951a6c33-kube-api-access-47wnn\") pod \"cinder-api-0\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.754108 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.908909 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-x8bw5"] Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.918888 4945 generic.go:334] "Generic (PLEG): container finished" podID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerID="362b1e3116c91478f3e44b192b87447563e916a1732bbcf9925fc5d78580d634" exitCode=0 Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.918929 4945 generic.go:334] "Generic (PLEG): container finished" podID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerID="7af001ad30926675a7bf294c9e500c1a3e24de400155fe9ff73a89d740db9837" exitCode=2 Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.918940 4945 generic.go:334] "Generic (PLEG): container finished" podID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerID="2d8c64675625b64f38ddb135175e04eceeb5c95dd30ca524d9fff77e79106f06" exitCode=0 Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.919010 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b76113a-10a6-4ff6-9ec0-a65a70f906af","Type":"ContainerDied","Data":"362b1e3116c91478f3e44b192b87447563e916a1732bbcf9925fc5d78580d634"} Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.919044 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b76113a-10a6-4ff6-9ec0-a65a70f906af","Type":"ContainerDied","Data":"7af001ad30926675a7bf294c9e500c1a3e24de400155fe9ff73a89d740db9837"} Jan 08 23:38:41 crc kubenswrapper[4945]: I0108 23:38:41.919059 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6b76113a-10a6-4ff6-9ec0-a65a70f906af","Type":"ContainerDied","Data":"2d8c64675625b64f38ddb135175e04eceeb5c95dd30ca524d9fff77e79106f06"} Jan 08 23:38:42 crc kubenswrapper[4945]: I0108 23:38:42.068060 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7de1ea8f-460e-4944-88d6-ebcccbea2119" path="/var/lib/kubelet/pods/7de1ea8f-460e-4944-88d6-ebcccbea2119/volumes" Jan 08 23:38:42 crc kubenswrapper[4945]: W0108 23:38:42.206428 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7eaa6c5_77ea_4b60_8b87_3357b73e9c5d.slice/crio-72386bf11e18236c01c19e25d59d4e8e1a46942fcc0344b35dfb57a143799407 WatchSource:0}: Error finding container 72386bf11e18236c01c19e25d59d4e8e1a46942fcc0344b35dfb57a143799407: Status 404 returned error can't find the container with id 72386bf11e18236c01c19e25d59d4e8e1a46942fcc0344b35dfb57a143799407 Jan 08 23:38:42 crc kubenswrapper[4945]: I0108 23:38:42.207799 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 08 23:38:42 crc kubenswrapper[4945]: I0108 23:38:42.299929 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 08 23:38:42 crc kubenswrapper[4945]: W0108 23:38:42.307569 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2907a49_f904_4d35_b4a6_700d951a6c33.slice/crio-09c1dd6f5c021d8616dd80172363dee80a12f1e8b0dfb143c6045290b14f3813 WatchSource:0}: Error finding container 09c1dd6f5c021d8616dd80172363dee80a12f1e8b0dfb143c6045290b14f3813: Status 404 returned error can't find the container with id 09c1dd6f5c021d8616dd80172363dee80a12f1e8b0dfb143c6045290b14f3813 Jan 08 23:38:42 crc kubenswrapper[4945]: I0108 23:38:42.942843 4945 generic.go:334] "Generic (PLEG): container finished" podID="c65a6567-6928-4df5-8b0f-ed77fefddcd8" containerID="06f6880899812494c79a3bde8112d18f67d5ca8c6d4976c0678741e17fc576fe" exitCode=0 Jan 08 23:38:42 crc kubenswrapper[4945]: I0108 23:38:42.943189 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" event={"ID":"c65a6567-6928-4df5-8b0f-ed77fefddcd8","Type":"ContainerDied","Data":"06f6880899812494c79a3bde8112d18f67d5ca8c6d4976c0678741e17fc576fe"} Jan 08 23:38:42 crc kubenswrapper[4945]: I0108 23:38:42.943221 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" event={"ID":"c65a6567-6928-4df5-8b0f-ed77fefddcd8","Type":"ContainerStarted","Data":"7f7547189a01e1071ccf85f176cb5b98436dd397a2c80517803ff52c0d0c881a"} Jan 08 23:38:42 crc kubenswrapper[4945]: I0108 23:38:42.964028 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d","Type":"ContainerStarted","Data":"72386bf11e18236c01c19e25d59d4e8e1a46942fcc0344b35dfb57a143799407"} Jan 08 23:38:42 crc kubenswrapper[4945]: I0108 23:38:42.986442 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f2907a49-f904-4d35-b4a6-700d951a6c33","Type":"ContainerStarted","Data":"09c1dd6f5c021d8616dd80172363dee80a12f1e8b0dfb143c6045290b14f3813"} Jan 08 23:38:42 crc kubenswrapper[4945]: I0108 23:38:42.991574 4945 generic.go:334] "Generic (PLEG): container finished" podID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerID="a99fce2db4d78245892dfd0050a3ce7d0cd826c916a3584cd13699b65a41a67c" exitCode=0 Jan 08 23:38:42 crc 
Jan 08 23:38:42 crc kubenswrapper[4945]: I0108 23:38:42.991631 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b76113a-10a6-4ff6-9ec0-a65a70f906af","Type":"ContainerDied","Data":"a99fce2db4d78245892dfd0050a3ce7d0cd826c916a3584cd13699b65a41a67c"}
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.100443 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.232045 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-scripts\") pod \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") "
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.232178 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbncw\" (UniqueName: \"kubernetes.io/projected/6b76113a-10a6-4ff6-9ec0-a65a70f906af-kube-api-access-tbncw\") pod \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") "
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.232210 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-config-data\") pod \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") "
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.232280 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-combined-ca-bundle\") pod \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") "
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.232321 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-sg-core-conf-yaml\") pod \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") "
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.232362 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b76113a-10a6-4ff6-9ec0-a65a70f906af-run-httpd\") pod \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") "
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.232483 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b76113a-10a6-4ff6-9ec0-a65a70f906af-log-httpd\") pod \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\" (UID: \"6b76113a-10a6-4ff6-9ec0-a65a70f906af\") "
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.233272 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-68f4b46db6-4tg9b" podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:46124->10.217.0.156:9311: read: connection reset by peer"
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.234281 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-68f4b46db6-4tg9b" podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:46136->10.217.0.156:9311: read: connection reset by peer"
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.235327 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b76113a-10a6-4ff6-9ec0-a65a70f906af-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6b76113a-10a6-4ff6-9ec0-a65a70f906af" (UID: "6b76113a-10a6-4ff6-9ec0-a65a70f906af"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.235701 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b76113a-10a6-4ff6-9ec0-a65a70f906af-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6b76113a-10a6-4ff6-9ec0-a65a70f906af" (UID: "6b76113a-10a6-4ff6-9ec0-a65a70f906af"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.279480 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b76113a-10a6-4ff6-9ec0-a65a70f906af-kube-api-access-tbncw" (OuterVolumeSpecName: "kube-api-access-tbncw") pod "6b76113a-10a6-4ff6-9ec0-a65a70f906af" (UID: "6b76113a-10a6-4ff6-9ec0-a65a70f906af"). InnerVolumeSpecName "kube-api-access-tbncw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.302261 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-scripts" (OuterVolumeSpecName: "scripts") pod "6b76113a-10a6-4ff6-9ec0-a65a70f906af" (UID: "6b76113a-10a6-4ff6-9ec0-a65a70f906af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.335650 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-scripts\") on node \"crc\" DevicePath \"\""
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.335696 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbncw\" (UniqueName: \"kubernetes.io/projected/6b76113a-10a6-4ff6-9ec0-a65a70f906af-kube-api-access-tbncw\") on node \"crc\" DevicePath \"\""
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.335722 4945 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b76113a-10a6-4ff6-9ec0-a65a70f906af-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.335736 4945 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b76113a-10a6-4ff6-9ec0-a65a70f906af-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.341578 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6b76113a-10a6-4ff6-9ec0-a65a70f906af" (UID: "6b76113a-10a6-4ff6-9ec0-a65a70f906af"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.437677 4945 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.551638 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b76113a-10a6-4ff6-9ec0-a65a70f906af" (UID: "6b76113a-10a6-4ff6-9ec0-a65a70f906af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.628317 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-config-data" (OuterVolumeSpecName: "config-data") pod "6b76113a-10a6-4ff6-9ec0-a65a70f906af" (UID: "6b76113a-10a6-4ff6-9ec0-a65a70f906af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.646932 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.646972 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b76113a-10a6-4ff6-9ec0-a65a70f906af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.711452 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.711579 4945 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.713643 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 08 23:38:43 crc kubenswrapper[4945]: I0108 23:38:43.751786 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.027981 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" event={"ID":"c65a6567-6928-4df5-8b0f-ed77fefddcd8","Type":"ContainerStarted","Data":"3c0c07919ffc3a6d21d8c9d9708d89bd56bd1ac225a2cecff96ce0e36ff56531"} Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.028737 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.034158 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f2907a49-f904-4d35-b4a6-700d951a6c33","Type":"ContainerStarted","Data":"7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe"} Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.055984 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" podStartSLOduration=3.055965592 podStartE2EDuration="3.055965592s" podCreationTimestamp="2026-01-08 23:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:38:44.05008866 +0000 UTC m=+1394.361247626" watchObservedRunningTime="2026-01-08 23:38:44.055965592 +0000 UTC m=+1394.367124528" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.062605 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b76113a-10a6-4ff6-9ec0-a65a70f906af","Type":"ContainerDied","Data":"a4fb2cd5c2829c9b6f67610513d6e23b06c06b4c549f20387429e6d67e384c6e"} Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.062683 4945 scope.go:117] "RemoveContainer" containerID="362b1e3116c91478f3e44b192b87447563e916a1732bbcf9925fc5d78580d634" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.062940 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.077373 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68f4b46db6-4tg9b" event={"ID":"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7","Type":"ContainerDied","Data":"78612bfad4ece0fb4a3a9659acbaf6e6379b58f80b5f8ccc1a0f0574671af085"} Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.077272 4945 generic.go:334] "Generic (PLEG): container finished" podID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerID="78612bfad4ece0fb4a3a9659acbaf6e6379b58f80b5f8ccc1a0f0574671af085" exitCode=0 Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.077692 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-68f4b46db6-4tg9b" event={"ID":"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7","Type":"ContainerDied","Data":"73712fcb5f29989d546818b216204e9e85ba4c1b6caa14fbbeeb6d0728d3d2df"} Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.077727 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73712fcb5f29989d546818b216204e9e85ba4c1b6caa14fbbeeb6d0728d3d2df" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.080048 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-68f4b46db6-4tg9b" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.103233 4945 scope.go:117] "RemoveContainer" containerID="7af001ad30926675a7bf294c9e500c1a3e24de400155fe9ff73a89d740db9837" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.112237 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.119208 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.160402 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:38:44 crc kubenswrapper[4945]: E0108 23:38:44.161048 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerName="ceilometer-central-agent" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.161067 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerName="ceilometer-central-agent" Jan 08 23:38:44 crc kubenswrapper[4945]: E0108 23:38:44.161083 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerName="sg-core" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.161091 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerName="sg-core" Jan 08 23:38:44 crc kubenswrapper[4945]: E0108 23:38:44.161129 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerName="ceilometer-notification-agent" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.161139 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerName="ceilometer-notification-agent" Jan 08 23:38:44 crc kubenswrapper[4945]: E0108 23:38:44.161152 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerName="barbican-api-log" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.161160 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerName="barbican-api-log" Jan 08 23:38:44 crc kubenswrapper[4945]: E0108 23:38:44.161176 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerName="proxy-httpd" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.161188 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerName="proxy-httpd" Jan 08 23:38:44 crc kubenswrapper[4945]: E0108 23:38:44.161209 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerName="barbican-api" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.161215 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerName="barbican-api" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.161441 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerName="ceilometer-central-agent" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.161456 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerName="ceilometer-notification-agent" Jan 08 23:38:44 crc 
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.161478 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" containerName="sg-core"
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.161488 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerName="barbican-api"
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.161496 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" containerName="barbican-api-log"
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.163412 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.165673 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.166276 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.166495 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr67v\" (UniqueName: \"kubernetes.io/projected/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-kube-api-access-vr67v\") pod \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") "
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.166583 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-config-data-custom\") pod \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") "
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.166644 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-config-data\") pod \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") "
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.166717 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-logs\") pod \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") "
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.166797 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-combined-ca-bundle\") pod \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") "
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.167451 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-logs" (OuterVolumeSpecName: "logs") pod "7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" (UID: "7c454599-6a8d-4b1a-a99d-5ff454bbdfe7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.167984 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.175737 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.186128 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-kube-api-access-vr67v" (OuterVolumeSpecName: "kube-api-access-vr67v") pod "7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" (UID: "7c454599-6a8d-4b1a-a99d-5ff454bbdfe7"). InnerVolumeSpecName "kube-api-access-vr67v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.190445 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" (UID: "7c454599-6a8d-4b1a-a99d-5ff454bbdfe7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.197421 4945 scope.go:117] "RemoveContainer" containerID="a99fce2db4d78245892dfd0050a3ce7d0cd826c916a3584cd13699b65a41a67c" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.218808 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" (UID: "7c454599-6a8d-4b1a-a99d-5ff454bbdfe7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.262221 4945 scope.go:117] "RemoveContainer" containerID="2d8c64675625b64f38ddb135175e04eceeb5c95dd30ca524d9fff77e79106f06" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.269443 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-config-data" (OuterVolumeSpecName: "config-data") pod "7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" (UID: "7c454599-6a8d-4b1a-a99d-5ff454bbdfe7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.269861 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-config-data\") pod \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\" (UID: \"7c454599-6a8d-4b1a-a99d-5ff454bbdfe7\") " Jan 08 23:38:44 crc kubenswrapper[4945]: W0108 23:38:44.269969 4945 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7/volumes/kubernetes.io~secret/config-data Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.269992 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-config-data" (OuterVolumeSpecName: "config-data") pod "7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" (UID: "7c454599-6a8d-4b1a-a99d-5ff454bbdfe7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.270441 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-scripts\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.271863 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-config-data\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.271976 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a64d9874-292e-43c4-95f0-c18acbf5724f-run-httpd\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.272029 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a64d9874-292e-43c4-95f0-c18acbf5724f-log-httpd\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.272080 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.272108 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.272130 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbjgs\" (UniqueName: \"kubernetes.io/projected/a64d9874-292e-43c4-95f0-c18acbf5724f-kube-api-access-dbjgs\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.272873 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr67v\" (UniqueName: \"kubernetes.io/projected/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-kube-api-access-vr67v\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.273485 4945 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.273510 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.273550 4945 
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.375713 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a64d9874-292e-43c4-95f0-c18acbf5724f-run-httpd\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0"
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.375810 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a64d9874-292e-43c4-95f0-c18acbf5724f-log-httpd\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0"
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.375880 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0"
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.375911 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0"
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.375939 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbjgs\" (UniqueName: \"kubernetes.io/projected/a64d9874-292e-43c4-95f0-c18acbf5724f-kube-api-access-dbjgs\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0"
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.375983 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-scripts\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0"
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.376324 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a64d9874-292e-43c4-95f0-c18acbf5724f-run-httpd\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0"
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.376568 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-config-data\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0"
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.377078 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a64d9874-292e-43c4-95f0-c18acbf5724f-log-httpd\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0"
Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.380828 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0"
\"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.381131 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-scripts\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.382602 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.384040 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-config-data\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.398408 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbjgs\" (UniqueName: \"kubernetes.io/projected/a64d9874-292e-43c4-95f0-c18acbf5724f-kube-api-access-dbjgs\") pod \"ceilometer-0\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " pod="openstack/ceilometer-0" Jan 08 23:38:44 crc kubenswrapper[4945]: I0108 23:38:44.517314 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.069931 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.080299 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.100457 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d","Type":"ContainerStarted","Data":"52f5850225417193db43bf1411e744476617628edfc22220c4bfa22ab6bb428d"} Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.103340 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f2907a49-f904-4d35-b4a6-700d951a6c33","Type":"ContainerStarted","Data":"a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967"} Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.103464 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f2907a49-f904-4d35-b4a6-700d951a6c33" containerName="cinder-api-log" containerID="cri-o://7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe" gracePeriod=30 Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.103513 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f2907a49-f904-4d35-b4a6-700d951a6c33" containerName="cinder-api" containerID="cri-o://a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967" gracePeriod=30 Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.103478 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 
Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.118162 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a64d9874-292e-43c4-95f0-c18acbf5724f","Type":"ContainerStarted","Data":"f97cb1b77dacfcdff502846f5f308197ea01f3712683b74250a40d554bf16d46"}
Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.129437 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.129418396 podStartE2EDuration="4.129418396s" podCreationTimestamp="2026-01-08 23:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:38:45.124851266 +0000 UTC m=+1395.436010212" watchObservedRunningTime="2026-01-08 23:38:45.129418396 +0000 UTC m=+1395.440577342"
Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.160817 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-68f4b46db6-4tg9b"]
Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.169799 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-68f4b46db6-4tg9b"]
Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.842053 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.934071 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2907a49-f904-4d35-b4a6-700d951a6c33-logs\") pod \"f2907a49-f904-4d35-b4a6-700d951a6c33\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") "
Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.934204 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-scripts\") pod \"f2907a49-f904-4d35-b4a6-700d951a6c33\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") "
Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.934292 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2907a49-f904-4d35-b4a6-700d951a6c33-etc-machine-id\") pod \"f2907a49-f904-4d35-b4a6-700d951a6c33\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") "
Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.934330 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-config-data-custom\") pod \"f2907a49-f904-4d35-b4a6-700d951a6c33\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") "
Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.934412 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-config-data\") pod \"f2907a49-f904-4d35-b4a6-700d951a6c33\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") "
Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.934424 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2907a49-f904-4d35-b4a6-700d951a6c33-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f2907a49-f904-4d35-b4a6-700d951a6c33" (UID: "f2907a49-f904-4d35-b4a6-700d951a6c33"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
"f2907a49-f904-4d35-b4a6-700d951a6c33"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.934487 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-combined-ca-bundle\") pod \"f2907a49-f904-4d35-b4a6-700d951a6c33\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.934545 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47wnn\" (UniqueName: \"kubernetes.io/projected/f2907a49-f904-4d35-b4a6-700d951a6c33-kube-api-access-47wnn\") pod \"f2907a49-f904-4d35-b4a6-700d951a6c33\" (UID: \"f2907a49-f904-4d35-b4a6-700d951a6c33\") " Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.935069 4945 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2907a49-f904-4d35-b4a6-700d951a6c33-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.935891 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2907a49-f904-4d35-b4a6-700d951a6c33-logs" (OuterVolumeSpecName: "logs") pod "f2907a49-f904-4d35-b4a6-700d951a6c33" (UID: "f2907a49-f904-4d35-b4a6-700d951a6c33"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.939453 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-scripts" (OuterVolumeSpecName: "scripts") pod "f2907a49-f904-4d35-b4a6-700d951a6c33" (UID: "f2907a49-f904-4d35-b4a6-700d951a6c33"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.940226 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f2907a49-f904-4d35-b4a6-700d951a6c33" (UID: "f2907a49-f904-4d35-b4a6-700d951a6c33"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.940641 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2907a49-f904-4d35-b4a6-700d951a6c33-kube-api-access-47wnn" (OuterVolumeSpecName: "kube-api-access-47wnn") pod "f2907a49-f904-4d35-b4a6-700d951a6c33" (UID: "f2907a49-f904-4d35-b4a6-700d951a6c33"). InnerVolumeSpecName "kube-api-access-47wnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.973241 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2907a49-f904-4d35-b4a6-700d951a6c33" (UID: "f2907a49-f904-4d35-b4a6-700d951a6c33"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:45 crc kubenswrapper[4945]: I0108 23:38:45.992299 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-config-data" (OuterVolumeSpecName: "config-data") pod "f2907a49-f904-4d35-b4a6-700d951a6c33" (UID: "f2907a49-f904-4d35-b4a6-700d951a6c33"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.015198 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b76113a-10a6-4ff6-9ec0-a65a70f906af" path="/var/lib/kubelet/pods/6b76113a-10a6-4ff6-9ec0-a65a70f906af/volumes" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.016325 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c454599-6a8d-4b1a-a99d-5ff454bbdfe7" path="/var/lib/kubelet/pods/7c454599-6a8d-4b1a-a99d-5ff454bbdfe7/volumes" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.036817 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.036853 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.036866 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47wnn\" (UniqueName: \"kubernetes.io/projected/f2907a49-f904-4d35-b4a6-700d951a6c33-kube-api-access-47wnn\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.036878 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2907a49-f904-4d35-b4a6-700d951a6c33-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.036889 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.036901 4945 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2907a49-f904-4d35-b4a6-700d951a6c33-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.125714 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a64d9874-292e-43c4-95f0-c18acbf5724f","Type":"ContainerStarted","Data":"2abe68fafa18f1dafa12b62c457d54be030cde0f97763006e8ba5030555a4c3a"} Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.128928 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d","Type":"ContainerStarted","Data":"0632d027c9e123b054ac9032a12dc91fb221d1f4b77af5c725e21fcff1de4a66"} Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.132021 4945 generic.go:334] "Generic (PLEG): container finished" podID="f2907a49-f904-4d35-b4a6-700d951a6c33" containerID="a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967" exitCode=0 Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.132053 4945 generic.go:334] "Generic (PLEG): container finished" 
podID="f2907a49-f904-4d35-b4a6-700d951a6c33" containerID="7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe" exitCode=143 Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.132074 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f2907a49-f904-4d35-b4a6-700d951a6c33","Type":"ContainerDied","Data":"a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967"} Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.132102 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f2907a49-f904-4d35-b4a6-700d951a6c33","Type":"ContainerDied","Data":"7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe"} Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.132112 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f2907a49-f904-4d35-b4a6-700d951a6c33","Type":"ContainerDied","Data":"09c1dd6f5c021d8616dd80172363dee80a12f1e8b0dfb143c6045290b14f3813"} Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.132127 4945 scope.go:117] "RemoveContainer" containerID="a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.132262 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.177753 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.073819687 podStartE2EDuration="5.177727975s" podCreationTimestamp="2026-01-08 23:38:41 +0000 UTC" firstStartedPulling="2026-01-08 23:38:42.209044892 +0000 UTC m=+1392.520203828" lastFinishedPulling="2026-01-08 23:38:43.31295316 +0000 UTC m=+1393.624112116" observedRunningTime="2026-01-08 23:38:46.153666976 +0000 UTC m=+1396.464825972" watchObservedRunningTime="2026-01-08 23:38:46.177727975 +0000 UTC m=+1396.488886921" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.204603 4945 scope.go:117] "RemoveContainer" containerID="7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.204823 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.228506 4945 scope.go:117] "RemoveContainer" containerID="a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.228632 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 08 23:38:46 crc kubenswrapper[4945]: E0108 23:38:46.228933 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967\": container with ID starting with a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967 not found: ID does not exist" containerID="a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.228979 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967"} err="failed to get container status \"a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967\": rpc error: code = NotFound desc = could not find container 
\"a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967\": container with ID starting with a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967 not found: ID does not exist" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.229022 4945 scope.go:117] "RemoveContainer" containerID="7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe" Jan 08 23:38:46 crc kubenswrapper[4945]: E0108 23:38:46.229441 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe\": container with ID starting with 7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe not found: ID does not exist" containerID="7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.229463 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe"} err="failed to get container status \"7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe\": rpc error: code = NotFound desc = could not find container \"7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe\": container with ID starting with 7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe not found: ID does not exist" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.229477 4945 scope.go:117] "RemoveContainer" containerID="a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.229719 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967"} err="failed to get container status \"a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967\": rpc error: code = NotFound desc = could not find container \"a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967\": container with ID starting with a78cb1f385c06ec556604bc40cddee32aef329844e962de121cd1d570a121967 not found: ID does not exist" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.229797 4945 scope.go:117] "RemoveContainer" containerID="7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.229981 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe"} err="failed to get container status \"7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe\": rpc error: code = NotFound desc = could not find container \"7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe\": container with ID starting with 7d399042418d2142c618bd3c30f5fd08597d546552e37f7635e88950880babfe not found: ID does not exist" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.238693 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 08 23:38:46 crc kubenswrapper[4945]: E0108 23:38:46.239396 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2907a49-f904-4d35-b4a6-700d951a6c33" containerName="cinder-api-log" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.239427 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2907a49-f904-4d35-b4a6-700d951a6c33" containerName="cinder-api-log" Jan 08 23:38:46 crc 
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.239488 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2907a49-f904-4d35-b4a6-700d951a6c33" containerName="cinder-api"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.239807 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2907a49-f904-4d35-b4a6-700d951a6c33" containerName="cinder-api"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.239844 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2907a49-f904-4d35-b4a6-700d951a6c33" containerName="cinder-api-log"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.241239 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.244255 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.246300 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.246771 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.269115 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.351477 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-scripts\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.351558 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04a2b873-3034-4b9f-9daf-81db6749d45f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.351642 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.351687 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-config-data-custom\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.351898 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.352012 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04a2b873-3034-4b9f-9daf-81db6749d45f-logs\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04a2b873-3034-4b9f-9daf-81db6749d45f-logs\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.352226 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6sk7\" (UniqueName: \"kubernetes.io/projected/04a2b873-3034-4b9f-9daf-81db6749d45f-kube-api-access-n6sk7\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.352530 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-config-data\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.352593 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.455753 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-config-data\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.455804 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.455863 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-scripts\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.455899 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04a2b873-3034-4b9f-9daf-81db6749d45f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.455947 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0" Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.455978 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-config-data-custom\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0" Jan 08 23:38:46 crc 
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.456034 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04a2b873-3034-4b9f-9daf-81db6749d45f-logs\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.456081 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sk7\" (UniqueName: \"kubernetes.io/projected/04a2b873-3034-4b9f-9daf-81db6749d45f-kube-api-access-n6sk7\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.456808 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04a2b873-3034-4b9f-9daf-81db6749d45f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.457743 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04a2b873-3034-4b9f-9daf-81db6749d45f-logs\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.460644 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-scripts\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.461133 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.462104 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-config-data-custom\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.462178 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.464084 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.465460 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-config-data\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.480471 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6sk7\" (UniqueName: \"kubernetes.io/projected/04a2b873-3034-4b9f-9daf-81db6749d45f-kube-api-access-n6sk7\") pod \"cinder-api-0\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " pod="openstack/cinder-api-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.488068 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 08 23:38:46 crc kubenswrapper[4945]: I0108 23:38:46.585509 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 08 23:38:47 crc kubenswrapper[4945]: I0108 23:38:47.073068 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 08 23:38:47 crc kubenswrapper[4945]: W0108 23:38:47.081939 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04a2b873_3034_4b9f_9daf_81db6749d45f.slice/crio-987a8193c41368802ddc04d8a7e7ea647df691a013531235be3b2292760ae5a8 WatchSource:0}: Error finding container 987a8193c41368802ddc04d8a7e7ea647df691a013531235be3b2292760ae5a8: Status 404 returned error can't find the container with id 987a8193c41368802ddc04d8a7e7ea647df691a013531235be3b2292760ae5a8
Jan 08 23:38:47 crc kubenswrapper[4945]: I0108 23:38:47.179871 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a64d9874-292e-43c4-95f0-c18acbf5724f","Type":"ContainerStarted","Data":"52d9810fb0257e6109b545ac3c6580890742f8b0fcf44ab114a9e7dd2d81bd68"}
Jan 08 23:38:47 crc kubenswrapper[4945]: I0108 23:38:47.182665 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"04a2b873-3034-4b9f-9daf-81db6749d45f","Type":"ContainerStarted","Data":"987a8193c41368802ddc04d8a7e7ea647df691a013531235be3b2292760ae5a8"}
Jan 08 23:38:48 crc kubenswrapper[4945]: I0108 23:38:48.010133 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2907a49-f904-4d35-b4a6-700d951a6c33" path="/var/lib/kubelet/pods/f2907a49-f904-4d35-b4a6-700d951a6c33/volumes"
Jan 08 23:38:48 crc kubenswrapper[4945]: I0108 23:38:48.225297 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a64d9874-292e-43c4-95f0-c18acbf5724f","Type":"ContainerStarted","Data":"e6ae8d5ed2deb77e510b55f0b7ea2d69de9b458eb41b7c9ea301a635b9f7ba29"}
Jan 08 23:38:48 crc kubenswrapper[4945]: I0108 23:38:48.228125 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"04a2b873-3034-4b9f-9daf-81db6749d45f","Type":"ContainerStarted","Data":"617e103fd47ab70027896060185afd85b04295da494b0f4b35c58a7ba8a8d5e8"}
Jan 08 23:38:48 crc kubenswrapper[4945]: I0108 23:38:48.586105 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-56c778dc56-gkxp6"
Jan 08 23:38:49 crc kubenswrapper[4945]: I0108 23:38:49.241123 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"04a2b873-3034-4b9f-9daf-81db6749d45f","Type":"ContainerStarted","Data":"ebe090f7ada13f633224e0bbcee404b72c09adfb8c09163bb99a6a8d5ca17ea4"}
event={"ID":"04a2b873-3034-4b9f-9daf-81db6749d45f","Type":"ContainerStarted","Data":"ebe090f7ada13f633224e0bbcee404b72c09adfb8c09163bb99a6a8d5ca17ea4"} Jan 08 23:38:49 crc kubenswrapper[4945]: I0108 23:38:49.243041 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 08 23:38:49 crc kubenswrapper[4945]: I0108 23:38:49.246593 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a64d9874-292e-43c4-95f0-c18acbf5724f","Type":"ContainerStarted","Data":"c77bef1e0789d066f9d2e40bbdc28a824f0eb094e852da648cf9dbd37e2af729"} Jan 08 23:38:49 crc kubenswrapper[4945]: I0108 23:38:49.247465 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 08 23:38:49 crc kubenswrapper[4945]: I0108 23:38:49.265063 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.265041606 podStartE2EDuration="3.265041606s" podCreationTimestamp="2026-01-08 23:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:38:49.262513715 +0000 UTC m=+1399.573672671" watchObservedRunningTime="2026-01-08 23:38:49.265041606 +0000 UTC m=+1399.576200562" Jan 08 23:38:49 crc kubenswrapper[4945]: I0108 23:38:49.308371 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.923878435 podStartE2EDuration="5.308343018s" podCreationTimestamp="2026-01-08 23:38:44 +0000 UTC" firstStartedPulling="2026-01-08 23:38:45.069697948 +0000 UTC m=+1395.380856894" lastFinishedPulling="2026-01-08 23:38:48.454162531 +0000 UTC m=+1398.765321477" observedRunningTime="2026-01-08 23:38:49.299111736 +0000 UTC m=+1399.610270692" watchObservedRunningTime="2026-01-08 23:38:49.308343018 +0000 UTC m=+1399.619501964" Jan 08 23:38:51 crc kubenswrapper[4945]: I0108 23:38:51.196250 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:38:51 crc kubenswrapper[4945]: I0108 23:38:51.269626 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56c778dc56-gkxp6"] Jan 08 23:38:51 crc kubenswrapper[4945]: I0108 23:38:51.269885 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56c778dc56-gkxp6" podUID="5e4bc49b-f408-42e8-b805-6ba01f62c232" containerName="neutron-api" containerID="cri-o://4e3fde323e4c628301acc65502d08735f89c661dd437d36fdf2d37345e81ed6d" gracePeriod=30 Jan 08 23:38:51 crc kubenswrapper[4945]: I0108 23:38:51.270248 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56c778dc56-gkxp6" podUID="5e4bc49b-f408-42e8-b805-6ba01f62c232" containerName="neutron-httpd" containerID="cri-o://5f1fa5a966c12e0f0ce73ab8d05df2190bb0336be121f257d0db8fb19f3bdb50" gracePeriod=30 Jan 08 23:38:51 crc kubenswrapper[4945]: I0108 23:38:51.533253 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:38:51 crc kubenswrapper[4945]: I0108 23:38:51.596374 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-dgqbp"] Jan 08 23:38:51 crc kubenswrapper[4945]: I0108 23:38:51.596661 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" 
podUID="8ed56205-b4d2-496d-9a5f-12edf2136d61" containerName="dnsmasq-dns" containerID="cri-o://e2bfa427614e8285711cccaa25a6dfc1e0021a7ebbcde912c69458c4a1de3948" gracePeriod=10 Jan 08 23:38:51 crc kubenswrapper[4945]: I0108 23:38:51.822707 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 08 23:38:51 crc kubenswrapper[4945]: I0108 23:38:51.905275 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.147182 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.163230 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.280181 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-ovsdbserver-nb\") pod \"8ed56205-b4d2-496d-9a5f-12edf2136d61\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.280258 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-config\") pod \"8ed56205-b4d2-496d-9a5f-12edf2136d61\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.280288 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-dns-svc\") pod \"8ed56205-b4d2-496d-9a5f-12edf2136d61\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.280409 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44pgx\" (UniqueName: \"kubernetes.io/projected/8ed56205-b4d2-496d-9a5f-12edf2136d61-kube-api-access-44pgx\") pod \"8ed56205-b4d2-496d-9a5f-12edf2136d61\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.280451 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-ovsdbserver-sb\") pod \"8ed56205-b4d2-496d-9a5f-12edf2136d61\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.280560 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-dns-swift-storage-0\") pod \"8ed56205-b4d2-496d-9a5f-12edf2136d61\" (UID: \"8ed56205-b4d2-496d-9a5f-12edf2136d61\") " Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.294921 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ed56205-b4d2-496d-9a5f-12edf2136d61-kube-api-access-44pgx" (OuterVolumeSpecName: "kube-api-access-44pgx") pod "8ed56205-b4d2-496d-9a5f-12edf2136d61" (UID: "8ed56205-b4d2-496d-9a5f-12edf2136d61"). InnerVolumeSpecName "kube-api-access-44pgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.308341 4945 generic.go:334] "Generic (PLEG): container finished" podID="8ed56205-b4d2-496d-9a5f-12edf2136d61" containerID="e2bfa427614e8285711cccaa25a6dfc1e0021a7ebbcde912c69458c4a1de3948" exitCode=0 Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.308471 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.308516 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" event={"ID":"8ed56205-b4d2-496d-9a5f-12edf2136d61","Type":"ContainerDied","Data":"e2bfa427614e8285711cccaa25a6dfc1e0021a7ebbcde912c69458c4a1de3948"} Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.308593 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-dgqbp" event={"ID":"8ed56205-b4d2-496d-9a5f-12edf2136d61","Type":"ContainerDied","Data":"cb96bb66faeb1bdaf3430a19cbedb04e5ac880c3383bf5d05fe694cbd7ab80d1"} Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.308622 4945 scope.go:117] "RemoveContainer" containerID="e2bfa427614e8285711cccaa25a6dfc1e0021a7ebbcde912c69458c4a1de3948" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.312790 4945 generic.go:334] "Generic (PLEG): container finished" podID="5e4bc49b-f408-42e8-b805-6ba01f62c232" containerID="5f1fa5a966c12e0f0ce73ab8d05df2190bb0336be121f257d0db8fb19f3bdb50" exitCode=0 Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.313080 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d" containerName="cinder-scheduler" containerID="cri-o://52f5850225417193db43bf1411e744476617628edfc22220c4bfa22ab6bb428d" gracePeriod=30 Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.313338 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56c778dc56-gkxp6" event={"ID":"5e4bc49b-f408-42e8-b805-6ba01f62c232","Type":"ContainerDied","Data":"5f1fa5a966c12e0f0ce73ab8d05df2190bb0336be121f257d0db8fb19f3bdb50"} Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.313438 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d" containerName="probe" containerID="cri-o://0632d027c9e123b054ac9032a12dc91fb221d1f4b77af5c725e21fcff1de4a66" gracePeriod=30 Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.378715 4945 scope.go:117] "RemoveContainer" containerID="828efeac25e6dde543afa201cafafd8648815431eee0ecd6b063e1f14638a457" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.387323 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44pgx\" (UniqueName: \"kubernetes.io/projected/8ed56205-b4d2-496d-9a5f-12edf2136d61-kube-api-access-44pgx\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.388427 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ed56205-b4d2-496d-9a5f-12edf2136d61" (UID: "8ed56205-b4d2-496d-9a5f-12edf2136d61"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.391454 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8ed56205-b4d2-496d-9a5f-12edf2136d61" (UID: "8ed56205-b4d2-496d-9a5f-12edf2136d61"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.392382 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8ed56205-b4d2-496d-9a5f-12edf2136d61" (UID: "8ed56205-b4d2-496d-9a5f-12edf2136d61"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.408040 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-config" (OuterVolumeSpecName: "config") pod "8ed56205-b4d2-496d-9a5f-12edf2136d61" (UID: "8ed56205-b4d2-496d-9a5f-12edf2136d61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.410733 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8ed56205-b4d2-496d-9a5f-12edf2136d61" (UID: "8ed56205-b4d2-496d-9a5f-12edf2136d61"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.491210 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.491263 4945 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.491275 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.491285 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.491295 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ed56205-b4d2-496d-9a5f-12edf2136d61-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.497458 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.519761 4945 scope.go:117] "RemoveContainer" containerID="e2bfa427614e8285711cccaa25a6dfc1e0021a7ebbcde912c69458c4a1de3948" Jan 08 23:38:52 crc kubenswrapper[4945]: E0108 
23:38:52.521346 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2bfa427614e8285711cccaa25a6dfc1e0021a7ebbcde912c69458c4a1de3948\": container with ID starting with e2bfa427614e8285711cccaa25a6dfc1e0021a7ebbcde912c69458c4a1de3948 not found: ID does not exist" containerID="e2bfa427614e8285711cccaa25a6dfc1e0021a7ebbcde912c69458c4a1de3948" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.521398 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2bfa427614e8285711cccaa25a6dfc1e0021a7ebbcde912c69458c4a1de3948"} err="failed to get container status \"e2bfa427614e8285711cccaa25a6dfc1e0021a7ebbcde912c69458c4a1de3948\": rpc error: code = NotFound desc = could not find container \"e2bfa427614e8285711cccaa25a6dfc1e0021a7ebbcde912c69458c4a1de3948\": container with ID starting with e2bfa427614e8285711cccaa25a6dfc1e0021a7ebbcde912c69458c4a1de3948 not found: ID does not exist" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.521434 4945 scope.go:117] "RemoveContainer" containerID="828efeac25e6dde543afa201cafafd8648815431eee0ecd6b063e1f14638a457" Jan 08 23:38:52 crc kubenswrapper[4945]: E0108 23:38:52.525303 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"828efeac25e6dde543afa201cafafd8648815431eee0ecd6b063e1f14638a457\": container with ID starting with 828efeac25e6dde543afa201cafafd8648815431eee0ecd6b063e1f14638a457 not found: ID does not exist" containerID="828efeac25e6dde543afa201cafafd8648815431eee0ecd6b063e1f14638a457" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.525340 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828efeac25e6dde543afa201cafafd8648815431eee0ecd6b063e1f14638a457"} err="failed to get container status \"828efeac25e6dde543afa201cafafd8648815431eee0ecd6b063e1f14638a457\": rpc error: code = NotFound desc = could not find container \"828efeac25e6dde543afa201cafafd8648815431eee0ecd6b063e1f14638a457\": container with ID starting with 828efeac25e6dde543afa201cafafd8648815431eee0ecd6b063e1f14638a457 not found: ID does not exist" Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.656279 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-dgqbp"] Jan 08 23:38:52 crc kubenswrapper[4945]: I0108 23:38:52.715005 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-dgqbp"] Jan 08 23:38:53 crc kubenswrapper[4945]: I0108 23:38:53.328603 4945 generic.go:334] "Generic (PLEG): container finished" podID="e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d" containerID="0632d027c9e123b054ac9032a12dc91fb221d1f4b77af5c725e21fcff1de4a66" exitCode=0 Jan 08 23:38:53 crc kubenswrapper[4945]: I0108 23:38:53.328680 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d","Type":"ContainerDied","Data":"0632d027c9e123b054ac9032a12dc91fb221d1f4b77af5c725e21fcff1de4a66"} Jan 08 23:38:53 crc kubenswrapper[4945]: I0108 23:38:53.825459 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-76b55d6f4b-r5hn5" Jan 08 23:38:54 crc kubenswrapper[4945]: I0108 23:38:54.014929 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ed56205-b4d2-496d-9a5f-12edf2136d61" 
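The paired E/I records above (log.go:32 and pod_container_deletor.go:53) look alarming but describe an idempotent cleanup: by the time the second RemoveContainer pass asked CRI-O for the container's status, the container had already been deleted, so the NotFound error just means there is nothing left to do. A minimal Go sketch of that pattern, assuming the standard gRPC status package; the helper name is illustrative, not kubelet source:

package sketch

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// containerAlreadyGone reports whether a CRI status/delete error simply means
// the container no longer exists. A garbage-collection path can treat that as
// success, which is why the "ContainerStatus from runtime service failed"
// lines above are benign. (Hypothetical helper, not kubelet code.)
func containerAlreadyGone(err error) bool {
	s, ok := status.FromError(err)
	return ok && s.Code() == codes.NotFound
}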
path="/var/lib/kubelet/pods/8ed56205-b4d2-496d-9a5f-12edf2136d61/volumes" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.159337 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.263453 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bskj8\" (UniqueName: \"kubernetes.io/projected/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-kube-api-access-bskj8\") pod \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.264195 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-combined-ca-bundle\") pod \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.264653 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-config-data-custom\") pod \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.264739 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-scripts\") pod \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.264794 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-config-data\") pod \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.264824 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-etc-machine-id\") pod \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\" (UID: \"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d\") " Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.265258 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d" (UID: "e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.265763 4945 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.273155 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-scripts" (OuterVolumeSpecName: "scripts") pod "e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d" (UID: "e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.273341 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-kube-api-access-bskj8" (OuterVolumeSpecName: "kube-api-access-bskj8") pod "e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d" (UID: "e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d"). InnerVolumeSpecName "kube-api-access-bskj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.278117 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d" (UID: "e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.356893 4945 generic.go:334] "Generic (PLEG): container finished" podID="e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d" containerID="52f5850225417193db43bf1411e744476617628edfc22220c4bfa22ab6bb428d" exitCode=0 Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.356939 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d","Type":"ContainerDied","Data":"52f5850225417193db43bf1411e744476617628edfc22220c4bfa22ab6bb428d"} Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.356959 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.356973 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d","Type":"ContainerDied","Data":"72386bf11e18236c01c19e25d59d4e8e1a46942fcc0344b35dfb57a143799407"} Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.357010 4945 scope.go:117] "RemoveContainer" containerID="0632d027c9e123b054ac9032a12dc91fb221d1f4b77af5c725e21fcff1de4a66" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.361131 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-config-data" (OuterVolumeSpecName: "config-data") pod "e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d" (UID: "e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.367886 4945 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.367918 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.367945 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.367958 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bskj8\" (UniqueName: \"kubernetes.io/projected/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-kube-api-access-bskj8\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.380368 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d" (UID: "e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.401664 4945 scope.go:117] "RemoveContainer" containerID="52f5850225417193db43bf1411e744476617628edfc22220c4bfa22ab6bb428d" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.426222 4945 scope.go:117] "RemoveContainer" containerID="0632d027c9e123b054ac9032a12dc91fb221d1f4b77af5c725e21fcff1de4a66" Jan 08 23:38:56 crc kubenswrapper[4945]: E0108 23:38:56.426902 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0632d027c9e123b054ac9032a12dc91fb221d1f4b77af5c725e21fcff1de4a66\": container with ID starting with 0632d027c9e123b054ac9032a12dc91fb221d1f4b77af5c725e21fcff1de4a66 not found: ID does not exist" containerID="0632d027c9e123b054ac9032a12dc91fb221d1f4b77af5c725e21fcff1de4a66" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.426984 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0632d027c9e123b054ac9032a12dc91fb221d1f4b77af5c725e21fcff1de4a66"} err="failed to get container status \"0632d027c9e123b054ac9032a12dc91fb221d1f4b77af5c725e21fcff1de4a66\": rpc error: code = NotFound desc = could not find container \"0632d027c9e123b054ac9032a12dc91fb221d1f4b77af5c725e21fcff1de4a66\": container with ID starting with 0632d027c9e123b054ac9032a12dc91fb221d1f4b77af5c725e21fcff1de4a66 not found: ID does not exist" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.427083 4945 scope.go:117] "RemoveContainer" containerID="52f5850225417193db43bf1411e744476617628edfc22220c4bfa22ab6bb428d" Jan 08 23:38:56 crc kubenswrapper[4945]: E0108 23:38:56.427513 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52f5850225417193db43bf1411e744476617628edfc22220c4bfa22ab6bb428d\": container with ID starting with 52f5850225417193db43bf1411e744476617628edfc22220c4bfa22ab6bb428d not found: ID does not exist" 
containerID="52f5850225417193db43bf1411e744476617628edfc22220c4bfa22ab6bb428d" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.427595 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52f5850225417193db43bf1411e744476617628edfc22220c4bfa22ab6bb428d"} err="failed to get container status \"52f5850225417193db43bf1411e744476617628edfc22220c4bfa22ab6bb428d\": rpc error: code = NotFound desc = could not find container \"52f5850225417193db43bf1411e744476617628edfc22220c4bfa22ab6bb428d\": container with ID starting with 52f5850225417193db43bf1411e744476617628edfc22220c4bfa22ab6bb428d not found: ID does not exist" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.470826 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.728531 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.758441 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.774560 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 08 23:38:56 crc kubenswrapper[4945]: E0108 23:38:56.775286 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d" containerName="cinder-scheduler" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.775422 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d" containerName="cinder-scheduler" Jan 08 23:38:56 crc kubenswrapper[4945]: E0108 23:38:56.775484 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed56205-b4d2-496d-9a5f-12edf2136d61" containerName="init" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.775555 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed56205-b4d2-496d-9a5f-12edf2136d61" containerName="init" Jan 08 23:38:56 crc kubenswrapper[4945]: E0108 23:38:56.775638 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed56205-b4d2-496d-9a5f-12edf2136d61" containerName="dnsmasq-dns" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.775712 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed56205-b4d2-496d-9a5f-12edf2136d61" containerName="dnsmasq-dns" Jan 08 23:38:56 crc kubenswrapper[4945]: E0108 23:38:56.775798 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d" containerName="probe" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.776373 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d" containerName="probe" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.776647 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d" containerName="cinder-scheduler" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.776752 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ed56205-b4d2-496d-9a5f-12edf2136d61" containerName="dnsmasq-dns" Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.776824 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d" containerName="probe" Jan 08 
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.778097 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.791483 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.793635 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.882757 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-scripts\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.882835 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.882875 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5skz\" (UniqueName: \"kubernetes.io/projected/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-kube-api-access-r5skz\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.882923 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.883004 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.883042 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-config-data\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.987301 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.987389 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.987423 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-config-data\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.987535 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-scripts\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.987562 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.987592 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5skz\" (UniqueName: \"kubernetes.io/projected/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-kube-api-access-r5skz\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.989288 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.995149 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-config-data\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:56 crc kubenswrapper[4945]: I0108 23:38:56.996577 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.006028 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-scripts\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.010591 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.013103 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5skz\" (UniqueName: \"kubernetes.io/projected/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-kube-api-access-r5skz\") pod \"cinder-scheduler-0\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " pod="openstack/cinder-scheduler-0"
Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.164702 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.406048 4945 generic.go:334] "Generic (PLEG): container finished" podID="5e4bc49b-f408-42e8-b805-6ba01f62c232" containerID="4e3fde323e4c628301acc65502d08735f89c661dd437d36fdf2d37345e81ed6d" exitCode=0
Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.406530 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56c778dc56-gkxp6" event={"ID":"5e4bc49b-f408-42e8-b805-6ba01f62c232","Type":"ContainerDied","Data":"4e3fde323e4c628301acc65502d08735f89c661dd437d36fdf2d37345e81ed6d"}
Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.430400 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56c778dc56-gkxp6"
Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.601660 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-httpd-config\") pod \"5e4bc49b-f408-42e8-b805-6ba01f62c232\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") "
Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.601729 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-ovndb-tls-certs\") pod \"5e4bc49b-f408-42e8-b805-6ba01f62c232\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") "
Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.601783 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmbl7\" (UniqueName: \"kubernetes.io/projected/5e4bc49b-f408-42e8-b805-6ba01f62c232-kube-api-access-kmbl7\") pod \"5e4bc49b-f408-42e8-b805-6ba01f62c232\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") "
Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.601856 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-config\") pod \"5e4bc49b-f408-42e8-b805-6ba01f62c232\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") "
Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.601891 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-combined-ca-bundle\") pod \"5e4bc49b-f408-42e8-b805-6ba01f62c232\" (UID: \"5e4bc49b-f408-42e8-b805-6ba01f62c232\") "
Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.610180 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5e4bc49b-f408-42e8-b805-6ba01f62c232" (UID: "5e4bc49b-f408-42e8-b805-6ba01f62c232"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.610231 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e4bc49b-f408-42e8-b805-6ba01f62c232-kube-api-access-kmbl7" (OuterVolumeSpecName: "kube-api-access-kmbl7") pod "5e4bc49b-f408-42e8-b805-6ba01f62c232" (UID: "5e4bc49b-f408-42e8-b805-6ba01f62c232"). InnerVolumeSpecName "kube-api-access-kmbl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.662754 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-config" (OuterVolumeSpecName: "config") pod "5e4bc49b-f408-42e8-b805-6ba01f62c232" (UID: "5e4bc49b-f408-42e8-b805-6ba01f62c232"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.670254 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e4bc49b-f408-42e8-b805-6ba01f62c232" (UID: "5e4bc49b-f408-42e8-b805-6ba01f62c232"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.700611 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5e4bc49b-f408-42e8-b805-6ba01f62c232" (UID: "5e4bc49b-f408-42e8-b805-6ba01f62c232"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.705520 4945 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.705706 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmbl7\" (UniqueName: \"kubernetes.io/projected/5e4bc49b-f408-42e8-b805-6ba01f62c232-kube-api-access-kmbl7\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.705802 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.705880 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.705968 4945 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5e4bc49b-f408-42e8-b805-6ba01f62c232-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.717986 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.955278 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 08 23:38:57 crc kubenswrapper[4945]: E0108 23:38:57.960403 4945 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5e4bc49b-f408-42e8-b805-6ba01f62c232" containerName="neutron-httpd" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.960449 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4bc49b-f408-42e8-b805-6ba01f62c232" containerName="neutron-httpd" Jan 08 23:38:57 crc kubenswrapper[4945]: E0108 23:38:57.960493 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4bc49b-f408-42e8-b805-6ba01f62c232" containerName="neutron-api" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.960501 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4bc49b-f408-42e8-b805-6ba01f62c232" containerName="neutron-api" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.960903 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4bc49b-f408-42e8-b805-6ba01f62c232" containerName="neutron-api" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.960924 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4bc49b-f408-42e8-b805-6ba01f62c232" containerName="neutron-httpd" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.962250 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.969741 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.970230 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-qg57w" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.970433 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 08 23:38:57 crc kubenswrapper[4945]: I0108 23:38:57.981455 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.013292 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d" path="/var/lib/kubelet/pods/e7eaa6c5-77ea-4b60-8b87-3357b73e9c5d/volumes" Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.113893 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60fef7df-b0da-45e7-8dfe-434dacea4715-openstack-config-secret\") pod \"openstackclient\" (UID: \"60fef7df-b0da-45e7-8dfe-434dacea4715\") " pod="openstack/openstackclient" Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.113968 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60fef7df-b0da-45e7-8dfe-434dacea4715-openstack-config\") pod \"openstackclient\" (UID: \"60fef7df-b0da-45e7-8dfe-434dacea4715\") " pod="openstack/openstackclient" Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.114117 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fef7df-b0da-45e7-8dfe-434dacea4715-combined-ca-bundle\") pod \"openstackclient\" (UID: \"60fef7df-b0da-45e7-8dfe-434dacea4715\") " pod="openstack/openstackclient" Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.114149 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7mxd\" 
(UniqueName: \"kubernetes.io/projected/60fef7df-b0da-45e7-8dfe-434dacea4715-kube-api-access-w7mxd\") pod \"openstackclient\" (UID: \"60fef7df-b0da-45e7-8dfe-434dacea4715\") " pod="openstack/openstackclient" Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.216876 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60fef7df-b0da-45e7-8dfe-434dacea4715-openstack-config-secret\") pod \"openstackclient\" (UID: \"60fef7df-b0da-45e7-8dfe-434dacea4715\") " pod="openstack/openstackclient" Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.216935 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60fef7df-b0da-45e7-8dfe-434dacea4715-openstack-config\") pod \"openstackclient\" (UID: \"60fef7df-b0da-45e7-8dfe-434dacea4715\") " pod="openstack/openstackclient" Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.217012 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fef7df-b0da-45e7-8dfe-434dacea4715-combined-ca-bundle\") pod \"openstackclient\" (UID: \"60fef7df-b0da-45e7-8dfe-434dacea4715\") " pod="openstack/openstackclient" Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.217037 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7mxd\" (UniqueName: \"kubernetes.io/projected/60fef7df-b0da-45e7-8dfe-434dacea4715-kube-api-access-w7mxd\") pod \"openstackclient\" (UID: \"60fef7df-b0da-45e7-8dfe-434dacea4715\") " pod="openstack/openstackclient" Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.220836 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60fef7df-b0da-45e7-8dfe-434dacea4715-openstack-config\") pod \"openstackclient\" (UID: \"60fef7df-b0da-45e7-8dfe-434dacea4715\") " pod="openstack/openstackclient" Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.225156 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60fef7df-b0da-45e7-8dfe-434dacea4715-openstack-config-secret\") pod \"openstackclient\" (UID: \"60fef7df-b0da-45e7-8dfe-434dacea4715\") " pod="openstack/openstackclient" Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.241786 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7mxd\" (UniqueName: \"kubernetes.io/projected/60fef7df-b0da-45e7-8dfe-434dacea4715-kube-api-access-w7mxd\") pod \"openstackclient\" (UID: \"60fef7df-b0da-45e7-8dfe-434dacea4715\") " pod="openstack/openstackclient" Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.242090 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fef7df-b0da-45e7-8dfe-434dacea4715-combined-ca-bundle\") pod \"openstackclient\" (UID: \"60fef7df-b0da-45e7-8dfe-434dacea4715\") " pod="openstack/openstackclient" Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.293439 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.420223 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a","Type":"ContainerStarted","Data":"0dea239b5d2847c6d6e07195e320989a3480cc645a97a80bb53980a6369e073a"} Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.423502 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56c778dc56-gkxp6" event={"ID":"5e4bc49b-f408-42e8-b805-6ba01f62c232","Type":"ContainerDied","Data":"8fe430110bcf3e72050967787b51f45e7d445656d98f8cce71f1ce0e2f0162f4"} Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.423553 4945 scope.go:117] "RemoveContainer" containerID="5f1fa5a966c12e0f0ce73ab8d05df2190bb0336be121f257d0db8fb19f3bdb50" Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.423703 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56c778dc56-gkxp6" Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.450503 4945 scope.go:117] "RemoveContainer" containerID="4e3fde323e4c628301acc65502d08735f89c661dd437d36fdf2d37345e81ed6d" Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.493152 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56c778dc56-gkxp6"] Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.505084 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-56c778dc56-gkxp6"] Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.646356 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 08 23:38:58 crc kubenswrapper[4945]: W0108 23:38:58.662044 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60fef7df_b0da_45e7_8dfe_434dacea4715.slice/crio-2918f6dd2977077b5c5151a32a5363cd6e0c802136b3c8d9f9340d868bd65a25 WatchSource:0}: Error finding container 2918f6dd2977077b5c5151a32a5363cd6e0c802136b3c8d9f9340d868bd65a25: Status 404 returned error can't find the container with id 2918f6dd2977077b5c5151a32a5363cd6e0c802136b3c8d9f9340d868bd65a25 Jan 08 23:38:58 crc kubenswrapper[4945]: I0108 23:38:58.805152 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 08 23:38:59 crc kubenswrapper[4945]: I0108 23:38:59.442515 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a","Type":"ContainerStarted","Data":"a1ea98a5191a9c7ea8ad3313abd10e08aea6b734652be5308b7a4b1889a1edf1"} Jan 08 23:38:59 crc kubenswrapper[4945]: I0108 23:38:59.443020 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a","Type":"ContainerStarted","Data":"cbf93118fa8eb1c82045354d9a9fe146f41f2d5df8fc45d489954e2e3b4a6725"} Jan 08 23:38:59 crc kubenswrapper[4945]: I0108 23:38:59.445258 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"60fef7df-b0da-45e7-8dfe-434dacea4715","Type":"ContainerStarted","Data":"2918f6dd2977077b5c5151a32a5363cd6e0c802136b3c8d9f9340d868bd65a25"} Jan 08 23:38:59 crc kubenswrapper[4945]: I0108 23:38:59.475720 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.475692991 podStartE2EDuration="3.475692991s" 
podCreationTimestamp="2026-01-08 23:38:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:38:59.474423021 +0000 UTC m=+1409.785581997" watchObservedRunningTime="2026-01-08 23:38:59.475692991 +0000 UTC m=+1409.786851947" Jan 08 23:39:00 crc kubenswrapper[4945]: I0108 23:39:00.036037 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e4bc49b-f408-42e8-b805-6ba01f62c232" path="/var/lib/kubelet/pods/5e4bc49b-f408-42e8-b805-6ba01f62c232/volumes" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.165186 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.397343 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-96f5cc787-k4zdr"] Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.398854 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.410466 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.410826 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.411784 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.470798 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-96f5cc787-k4zdr"] Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.518040 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vhwj\" (UniqueName: \"kubernetes.io/projected/6dc39aab-86d6-45f6-b565-3da5375a1983-kube-api-access-9vhwj\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.518091 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dc39aab-86d6-45f6-b565-3da5375a1983-log-httpd\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.518129 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-public-tls-certs\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.518191 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-combined-ca-bundle\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.518208 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dc39aab-86d6-45f6-b565-3da5375a1983-run-httpd\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.518229 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6dc39aab-86d6-45f6-b565-3da5375a1983-etc-swift\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.518248 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-internal-tls-certs\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.518271 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-config-data\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.619583 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-public-tls-certs\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.619694 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-combined-ca-bundle\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.619715 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dc39aab-86d6-45f6-b565-3da5375a1983-run-httpd\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.619735 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6dc39aab-86d6-45f6-b565-3da5375a1983-etc-swift\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.619762 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-internal-tls-certs\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.619789 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-config-data\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.619852 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vhwj\" (UniqueName: \"kubernetes.io/projected/6dc39aab-86d6-45f6-b565-3da5375a1983-kube-api-access-9vhwj\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.619872 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dc39aab-86d6-45f6-b565-3da5375a1983-log-httpd\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.620237 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dc39aab-86d6-45f6-b565-3da5375a1983-run-httpd\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.620313 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dc39aab-86d6-45f6-b565-3da5375a1983-log-httpd\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.630045 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-config-data\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.634847 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6dc39aab-86d6-45f6-b565-3da5375a1983-etc-swift\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.648528 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-internal-tls-certs\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.650965 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-public-tls-certs\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.653025 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-combined-ca-bundle\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: 
\"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.677074 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vhwj\" (UniqueName: \"kubernetes.io/projected/6dc39aab-86d6-45f6-b565-3da5375a1983-kube-api-access-9vhwj\") pod \"swift-proxy-96f5cc787-k4zdr\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:02 crc kubenswrapper[4945]: I0108 23:39:02.728062 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:03 crc kubenswrapper[4945]: I0108 23:39:03.347019 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-96f5cc787-k4zdr"] Jan 08 23:39:04 crc kubenswrapper[4945]: I0108 23:39:04.143131 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:39:04 crc kubenswrapper[4945]: I0108 23:39:04.143541 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerName="ceilometer-central-agent" containerID="cri-o://2abe68fafa18f1dafa12b62c457d54be030cde0f97763006e8ba5030555a4c3a" gracePeriod=30 Jan 08 23:39:04 crc kubenswrapper[4945]: I0108 23:39:04.145430 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerName="proxy-httpd" containerID="cri-o://c77bef1e0789d066f9d2e40bbdc28a824f0eb094e852da648cf9dbd37e2af729" gracePeriod=30 Jan 08 23:39:04 crc kubenswrapper[4945]: I0108 23:39:04.145504 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerName="sg-core" containerID="cri-o://e6ae8d5ed2deb77e510b55f0b7ea2d69de9b458eb41b7c9ea301a635b9f7ba29" gracePeriod=30 Jan 08 23:39:04 crc kubenswrapper[4945]: I0108 23:39:04.145547 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerName="ceilometer-notification-agent" containerID="cri-o://52d9810fb0257e6109b545ac3c6580890742f8b0fcf44ab114a9e7dd2d81bd68" gracePeriod=30 Jan 08 23:39:04 crc kubenswrapper[4945]: I0108 23:39:04.160911 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.162:3000/\": EOF" Jan 08 23:39:04 crc kubenswrapper[4945]: I0108 23:39:04.504055 4945 generic.go:334] "Generic (PLEG): container finished" podID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerID="c77bef1e0789d066f9d2e40bbdc28a824f0eb094e852da648cf9dbd37e2af729" exitCode=0 Jan 08 23:39:04 crc kubenswrapper[4945]: I0108 23:39:04.504086 4945 generic.go:334] "Generic (PLEG): container finished" podID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerID="e6ae8d5ed2deb77e510b55f0b7ea2d69de9b458eb41b7c9ea301a635b9f7ba29" exitCode=2 Jan 08 23:39:04 crc kubenswrapper[4945]: I0108 23:39:04.504107 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a64d9874-292e-43c4-95f0-c18acbf5724f","Type":"ContainerDied","Data":"c77bef1e0789d066f9d2e40bbdc28a824f0eb094e852da648cf9dbd37e2af729"} Jan 08 23:39:04 crc kubenswrapper[4945]: I0108 23:39:04.504138 4945 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a64d9874-292e-43c4-95f0-c18acbf5724f","Type":"ContainerDied","Data":"e6ae8d5ed2deb77e510b55f0b7ea2d69de9b458eb41b7c9ea301a635b9f7ba29"} Jan 08 23:39:05 crc kubenswrapper[4945]: I0108 23:39:05.522431 4945 generic.go:334] "Generic (PLEG): container finished" podID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerID="2abe68fafa18f1dafa12b62c457d54be030cde0f97763006e8ba5030555a4c3a" exitCode=0 Jan 08 23:39:05 crc kubenswrapper[4945]: I0108 23:39:05.522489 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a64d9874-292e-43c4-95f0-c18acbf5724f","Type":"ContainerDied","Data":"2abe68fafa18f1dafa12b62c457d54be030cde0f97763006e8ba5030555a4c3a"} Jan 08 23:39:07 crc kubenswrapper[4945]: I0108 23:39:07.437382 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.412865 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.541677 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-config-data\") pod \"a64d9874-292e-43c4-95f0-c18acbf5724f\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.541734 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-scripts\") pod \"a64d9874-292e-43c4-95f0-c18acbf5724f\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.541775 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbjgs\" (UniqueName: \"kubernetes.io/projected/a64d9874-292e-43c4-95f0-c18acbf5724f-kube-api-access-dbjgs\") pod \"a64d9874-292e-43c4-95f0-c18acbf5724f\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.541874 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-combined-ca-bundle\") pod \"a64d9874-292e-43c4-95f0-c18acbf5724f\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.541908 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a64d9874-292e-43c4-95f0-c18acbf5724f-log-httpd\") pod \"a64d9874-292e-43c4-95f0-c18acbf5724f\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.541952 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a64d9874-292e-43c4-95f0-c18acbf5724f-run-httpd\") pod \"a64d9874-292e-43c4-95f0-c18acbf5724f\" (UID: \"a64d9874-292e-43c4-95f0-c18acbf5724f\") " Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.542019 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-sg-core-conf-yaml\") pod \"a64d9874-292e-43c4-95f0-c18acbf5724f\" (UID: 
\"a64d9874-292e-43c4-95f0-c18acbf5724f\") " Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.542627 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64d9874-292e-43c4-95f0-c18acbf5724f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a64d9874-292e-43c4-95f0-c18acbf5724f" (UID: "a64d9874-292e-43c4-95f0-c18acbf5724f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.542689 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64d9874-292e-43c4-95f0-c18acbf5724f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a64d9874-292e-43c4-95f0-c18acbf5724f" (UID: "a64d9874-292e-43c4-95f0-c18acbf5724f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.543141 4945 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a64d9874-292e-43c4-95f0-c18acbf5724f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.543159 4945 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a64d9874-292e-43c4-95f0-c18acbf5724f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.546958 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a64d9874-292e-43c4-95f0-c18acbf5724f-kube-api-access-dbjgs" (OuterVolumeSpecName: "kube-api-access-dbjgs") pod "a64d9874-292e-43c4-95f0-c18acbf5724f" (UID: "a64d9874-292e-43c4-95f0-c18acbf5724f"). InnerVolumeSpecName "kube-api-access-dbjgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.549288 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-scripts" (OuterVolumeSpecName: "scripts") pod "a64d9874-292e-43c4-95f0-c18acbf5724f" (UID: "a64d9874-292e-43c4-95f0-c18acbf5724f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.581249 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a64d9874-292e-43c4-95f0-c18acbf5724f" (UID: "a64d9874-292e-43c4-95f0-c18acbf5724f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.583213 4945 generic.go:334] "Generic (PLEG): container finished" podID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerID="52d9810fb0257e6109b545ac3c6580890742f8b0fcf44ab114a9e7dd2d81bd68" exitCode=0 Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.583318 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a64d9874-292e-43c4-95f0-c18acbf5724f","Type":"ContainerDied","Data":"52d9810fb0257e6109b545ac3c6580890742f8b0fcf44ab114a9e7dd2d81bd68"} Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.583721 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a64d9874-292e-43c4-95f0-c18acbf5724f","Type":"ContainerDied","Data":"f97cb1b77dacfcdff502846f5f308197ea01f3712683b74250a40d554bf16d46"} Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.583751 4945 scope.go:117] "RemoveContainer" containerID="c77bef1e0789d066f9d2e40bbdc28a824f0eb094e852da648cf9dbd37e2af729" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.583340 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.585842 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"60fef7df-b0da-45e7-8dfe-434dacea4715","Type":"ContainerStarted","Data":"e62730e74d5750aea233a6665a98c38e3fab63a273b9b58230cb2e7f26de724f"} Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.593268 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-96f5cc787-k4zdr" event={"ID":"6dc39aab-86d6-45f6-b565-3da5375a1983","Type":"ContainerStarted","Data":"5f876eb0a75e5d6e0f574148126dfbb45b4fa70619ddd67c99cc1342b57a8d1f"} Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.593313 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-96f5cc787-k4zdr" event={"ID":"6dc39aab-86d6-45f6-b565-3da5375a1983","Type":"ContainerStarted","Data":"f3190a7601d598152a37af396c7904b88f6f42e507bb39ea42ef20cf86513a75"} Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.622563 4945 scope.go:117] "RemoveContainer" containerID="e6ae8d5ed2deb77e510b55f0b7ea2d69de9b458eb41b7c9ea301a635b9f7ba29" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.642854 4945 scope.go:117] "RemoveContainer" containerID="52d9810fb0257e6109b545ac3c6580890742f8b0fcf44ab114a9e7dd2d81bd68" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.644668 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.644753 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbjgs\" (UniqueName: \"kubernetes.io/projected/a64d9874-292e-43c4-95f0-c18acbf5724f-kube-api-access-dbjgs\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.644821 4945 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.695622 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a64d9874-292e-43c4-95f0-c18acbf5724f" (UID: "a64d9874-292e-43c4-95f0-c18acbf5724f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.714674 4945 scope.go:117] "RemoveContainer" containerID="2abe68fafa18f1dafa12b62c457d54be030cde0f97763006e8ba5030555a4c3a" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.723742 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-config-data" (OuterVolumeSpecName: "config-data") pod "a64d9874-292e-43c4-95f0-c18acbf5724f" (UID: "a64d9874-292e-43c4-95f0-c18acbf5724f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.737326 4945 scope.go:117] "RemoveContainer" containerID="c77bef1e0789d066f9d2e40bbdc28a824f0eb094e852da648cf9dbd37e2af729" Jan 08 23:39:08 crc kubenswrapper[4945]: E0108 23:39:08.737966 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c77bef1e0789d066f9d2e40bbdc28a824f0eb094e852da648cf9dbd37e2af729\": container with ID starting with c77bef1e0789d066f9d2e40bbdc28a824f0eb094e852da648cf9dbd37e2af729 not found: ID does not exist" containerID="c77bef1e0789d066f9d2e40bbdc28a824f0eb094e852da648cf9dbd37e2af729" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.738147 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c77bef1e0789d066f9d2e40bbdc28a824f0eb094e852da648cf9dbd37e2af729"} err="failed to get container status \"c77bef1e0789d066f9d2e40bbdc28a824f0eb094e852da648cf9dbd37e2af729\": rpc error: code = NotFound desc = could not find container \"c77bef1e0789d066f9d2e40bbdc28a824f0eb094e852da648cf9dbd37e2af729\": container with ID starting with c77bef1e0789d066f9d2e40bbdc28a824f0eb094e852da648cf9dbd37e2af729 not found: ID does not exist" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.738268 4945 scope.go:117] "RemoveContainer" containerID="e6ae8d5ed2deb77e510b55f0b7ea2d69de9b458eb41b7c9ea301a635b9f7ba29" Jan 08 23:39:08 crc kubenswrapper[4945]: E0108 23:39:08.738911 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6ae8d5ed2deb77e510b55f0b7ea2d69de9b458eb41b7c9ea301a635b9f7ba29\": container with ID starting with e6ae8d5ed2deb77e510b55f0b7ea2d69de9b458eb41b7c9ea301a635b9f7ba29 not found: ID does not exist" containerID="e6ae8d5ed2deb77e510b55f0b7ea2d69de9b458eb41b7c9ea301a635b9f7ba29" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.738966 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ae8d5ed2deb77e510b55f0b7ea2d69de9b458eb41b7c9ea301a635b9f7ba29"} err="failed to get container status \"e6ae8d5ed2deb77e510b55f0b7ea2d69de9b458eb41b7c9ea301a635b9f7ba29\": rpc error: code = NotFound desc = could not find container \"e6ae8d5ed2deb77e510b55f0b7ea2d69de9b458eb41b7c9ea301a635b9f7ba29\": container with ID starting with e6ae8d5ed2deb77e510b55f0b7ea2d69de9b458eb41b7c9ea301a635b9f7ba29 not found: ID does not exist" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.739017 4945 scope.go:117] "RemoveContainer" 
containerID="52d9810fb0257e6109b545ac3c6580890742f8b0fcf44ab114a9e7dd2d81bd68" Jan 08 23:39:08 crc kubenswrapper[4945]: E0108 23:39:08.739473 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52d9810fb0257e6109b545ac3c6580890742f8b0fcf44ab114a9e7dd2d81bd68\": container with ID starting with 52d9810fb0257e6109b545ac3c6580890742f8b0fcf44ab114a9e7dd2d81bd68 not found: ID does not exist" containerID="52d9810fb0257e6109b545ac3c6580890742f8b0fcf44ab114a9e7dd2d81bd68" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.739575 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52d9810fb0257e6109b545ac3c6580890742f8b0fcf44ab114a9e7dd2d81bd68"} err="failed to get container status \"52d9810fb0257e6109b545ac3c6580890742f8b0fcf44ab114a9e7dd2d81bd68\": rpc error: code = NotFound desc = could not find container \"52d9810fb0257e6109b545ac3c6580890742f8b0fcf44ab114a9e7dd2d81bd68\": container with ID starting with 52d9810fb0257e6109b545ac3c6580890742f8b0fcf44ab114a9e7dd2d81bd68 not found: ID does not exist" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.739601 4945 scope.go:117] "RemoveContainer" containerID="2abe68fafa18f1dafa12b62c457d54be030cde0f97763006e8ba5030555a4c3a" Jan 08 23:39:08 crc kubenswrapper[4945]: E0108 23:39:08.739923 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2abe68fafa18f1dafa12b62c457d54be030cde0f97763006e8ba5030555a4c3a\": container with ID starting with 2abe68fafa18f1dafa12b62c457d54be030cde0f97763006e8ba5030555a4c3a not found: ID does not exist" containerID="2abe68fafa18f1dafa12b62c457d54be030cde0f97763006e8ba5030555a4c3a" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.739951 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2abe68fafa18f1dafa12b62c457d54be030cde0f97763006e8ba5030555a4c3a"} err="failed to get container status \"2abe68fafa18f1dafa12b62c457d54be030cde0f97763006e8ba5030555a4c3a\": rpc error: code = NotFound desc = could not find container \"2abe68fafa18f1dafa12b62c457d54be030cde0f97763006e8ba5030555a4c3a\": container with ID starting with 2abe68fafa18f1dafa12b62c457d54be030cde0f97763006e8ba5030555a4c3a not found: ID does not exist" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.746376 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.746404 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a64d9874-292e-43c4-95f0-c18acbf5724f-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.879650 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.484567576 podStartE2EDuration="11.879633073s" podCreationTimestamp="2026-01-08 23:38:57 +0000 UTC" firstStartedPulling="2026-01-08 23:38:58.667915282 +0000 UTC m=+1408.979074228" lastFinishedPulling="2026-01-08 23:39:08.062980779 +0000 UTC m=+1418.374139725" observedRunningTime="2026-01-08 23:39:08.61733131 +0000 UTC m=+1418.928490256" watchObservedRunningTime="2026-01-08 23:39:08.879633073 +0000 UTC m=+1419.190792019" Jan 08 23:39:08 crc 
kubenswrapper[4945]: I0108 23:39:08.887212 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.887451 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0fe7dee5-62fc-46a9-8247-7d675b504104" containerName="glance-log" containerID="cri-o://6ccd0f46740d1dc58c2c59df8d1c0082ab5fc03664969f5fe1f78361cade28a1" gracePeriod=30 Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.887850 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0fe7dee5-62fc-46a9-8247-7d675b504104" containerName="glance-httpd" containerID="cri-o://303744c892b0ab55d1b5a28912ee8e296417aa177711168b89cf63bd774d9701" gracePeriod=30 Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.932851 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.951476 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.959664 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:39:08 crc kubenswrapper[4945]: E0108 23:39:08.960044 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerName="ceilometer-central-agent" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.960060 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerName="ceilometer-central-agent" Jan 08 23:39:08 crc kubenswrapper[4945]: E0108 23:39:08.960073 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerName="sg-core" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.960079 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerName="sg-core" Jan 08 23:39:08 crc kubenswrapper[4945]: E0108 23:39:08.960088 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerName="proxy-httpd" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.960095 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerName="proxy-httpd" Jan 08 23:39:08 crc kubenswrapper[4945]: E0108 23:39:08.960103 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerName="ceilometer-notification-agent" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.960109 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerName="ceilometer-notification-agent" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.960289 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerName="ceilometer-notification-agent" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.960303 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerName="proxy-httpd" Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.960310 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64d9874-292e-43c4-95f0-c18acbf5724f" containerName="ceilometer-central-agent" Jan 08 
Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.962028 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.965513 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.965901 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 08 23:39:08 crc kubenswrapper[4945]: I0108 23:39:08.977570 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.051318 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-config-data\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.051751 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/119ddb53-bb06-4c23-87a1-f51e76baffb6-run-httpd\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.051904 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-scripts\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.051954 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/119ddb53-bb06-4c23-87a1-f51e76baffb6-log-httpd\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.052016 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.052182 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq4l7\" (UniqueName: \"kubernetes.io/projected/119ddb53-bb06-4c23-87a1-f51e76baffb6-kube-api-access-rq4l7\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.052295 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.154018 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/119ddb53-bb06-4c23-87a1-f51e76baffb6-log-httpd\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.154085 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.154134 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq4l7\" (UniqueName: \"kubernetes.io/projected/119ddb53-bb06-4c23-87a1-f51e76baffb6-kube-api-access-rq4l7\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.154181 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.154289 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-config-data\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.154339 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/119ddb53-bb06-4c23-87a1-f51e76baffb6-run-httpd\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.154392 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-scripts\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.154562 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/119ddb53-bb06-4c23-87a1-f51e76baffb6-log-httpd\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.155085 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/119ddb53-bb06-4c23-87a1-f51e76baffb6-run-httpd\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.160527 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.161620 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-config-data\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0"
\"config-data\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-config-data\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.170570 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-scripts\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.175825 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.183158 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq4l7\" (UniqueName: \"kubernetes.io/projected/119ddb53-bb06-4c23-87a1-f51e76baffb6-kube-api-access-rq4l7\") pod \"ceilometer-0\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " pod="openstack/ceilometer-0" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.365711 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.376290 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d2jqk"] Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.378319 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2jqk" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.392870 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2jqk"] Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.459236 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f689d02f-2f97-45fb-9ead-02449cf8f47a-catalog-content\") pod \"certified-operators-d2jqk\" (UID: \"f689d02f-2f97-45fb-9ead-02449cf8f47a\") " pod="openshift-marketplace/certified-operators-d2jqk" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.459304 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f689d02f-2f97-45fb-9ead-02449cf8f47a-utilities\") pod \"certified-operators-d2jqk\" (UID: \"f689d02f-2f97-45fb-9ead-02449cf8f47a\") " pod="openshift-marketplace/certified-operators-d2jqk" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.459349 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7scj9\" (UniqueName: \"kubernetes.io/projected/f689d02f-2f97-45fb-9ead-02449cf8f47a-kube-api-access-7scj9\") pod \"certified-operators-d2jqk\" (UID: \"f689d02f-2f97-45fb-9ead-02449cf8f47a\") " pod="openshift-marketplace/certified-operators-d2jqk" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.560499 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f689d02f-2f97-45fb-9ead-02449cf8f47a-catalog-content\") pod \"certified-operators-d2jqk\" (UID: \"f689d02f-2f97-45fb-9ead-02449cf8f47a\") " 
pod="openshift-marketplace/certified-operators-d2jqk" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.560572 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f689d02f-2f97-45fb-9ead-02449cf8f47a-utilities\") pod \"certified-operators-d2jqk\" (UID: \"f689d02f-2f97-45fb-9ead-02449cf8f47a\") " pod="openshift-marketplace/certified-operators-d2jqk" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.560615 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7scj9\" (UniqueName: \"kubernetes.io/projected/f689d02f-2f97-45fb-9ead-02449cf8f47a-kube-api-access-7scj9\") pod \"certified-operators-d2jqk\" (UID: \"f689d02f-2f97-45fb-9ead-02449cf8f47a\") " pod="openshift-marketplace/certified-operators-d2jqk" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.561411 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f689d02f-2f97-45fb-9ead-02449cf8f47a-catalog-content\") pod \"certified-operators-d2jqk\" (UID: \"f689d02f-2f97-45fb-9ead-02449cf8f47a\") " pod="openshift-marketplace/certified-operators-d2jqk" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.561622 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f689d02f-2f97-45fb-9ead-02449cf8f47a-utilities\") pod \"certified-operators-d2jqk\" (UID: \"f689d02f-2f97-45fb-9ead-02449cf8f47a\") " pod="openshift-marketplace/certified-operators-d2jqk" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.593181 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7scj9\" (UniqueName: \"kubernetes.io/projected/f689d02f-2f97-45fb-9ead-02449cf8f47a-kube-api-access-7scj9\") pod \"certified-operators-d2jqk\" (UID: \"f689d02f-2f97-45fb-9ead-02449cf8f47a\") " pod="openshift-marketplace/certified-operators-d2jqk" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.618860 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-96f5cc787-k4zdr" event={"ID":"6dc39aab-86d6-45f6-b565-3da5375a1983","Type":"ContainerStarted","Data":"d385c8c468d076b34c28f8eb6f3dfb6aeddb5e1ac200b92a6cb0997c736e762e"} Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.620597 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.620645 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.633646 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0fe7dee5-62fc-46a9-8247-7d675b504104","Type":"ContainerDied","Data":"6ccd0f46740d1dc58c2c59df8d1c0082ab5fc03664969f5fe1f78361cade28a1"} Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.633166 4945 generic.go:334] "Generic (PLEG): container finished" podID="0fe7dee5-62fc-46a9-8247-7d675b504104" containerID="6ccd0f46740d1dc58c2c59df8d1c0082ab5fc03664969f5fe1f78361cade28a1" exitCode=143 Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.658661 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-96f5cc787-k4zdr" podStartSLOduration=7.6586415 podStartE2EDuration="7.6586415s" podCreationTimestamp="2026-01-08 23:39:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:39:09.655383792 +0000 UTC m=+1419.966542738" watchObservedRunningTime="2026-01-08 23:39:09.6586415 +0000 UTC m=+1419.969800446" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.759764 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d2jqk" Jan 08 23:39:09 crc kubenswrapper[4945]: I0108 23:39:09.942704 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:39:10 crc kubenswrapper[4945]: I0108 23:39:10.113304 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a64d9874-292e-43c4-95f0-c18acbf5724f" path="/var/lib/kubelet/pods/a64d9874-292e-43c4-95f0-c18acbf5724f/volumes" Jan 08 23:39:10 crc kubenswrapper[4945]: I0108 23:39:10.345487 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d2jqk"] Jan 08 23:39:10 crc kubenswrapper[4945]: I0108 23:39:10.391740 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:39:10 crc kubenswrapper[4945]: I0108 23:39:10.392037 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="88232648-bf7d-4f3d-83e6-2a5b25b7538c" containerName="glance-log" containerID="cri-o://4ebbc537247322886a422cbaf27560ae38dad9be6af2a24850f1b6a10f6ffef6" gracePeriod=30 Jan 08 23:39:10 crc kubenswrapper[4945]: I0108 23:39:10.392592 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="88232648-bf7d-4f3d-83e6-2a5b25b7538c" containerName="glance-httpd" containerID="cri-o://976ce599d8bf2a0fae3d64f5de111ba1b82a7de4dc6c09144813ddd5624a9259" gracePeriod=30 Jan 08 23:39:10 crc kubenswrapper[4945]: W0108 23:39:10.398504 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf689d02f_2f97_45fb_9ead_02449cf8f47a.slice/crio-87a380d09a1d93d248460ec773b18378f87765898e1e8f75235fe180f54a806d WatchSource:0}: Error finding container 87a380d09a1d93d248460ec773b18378f87765898e1e8f75235fe180f54a806d: Status 404 returned error can't find the container with id 87a380d09a1d93d248460ec773b18378f87765898e1e8f75235fe180f54a806d Jan 08 23:39:10 crc kubenswrapper[4945]: I0108 23:39:10.645310 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"119ddb53-bb06-4c23-87a1-f51e76baffb6","Type":"ContainerStarted","Data":"5127c1e603433f815fb4c5abda9ba0b4607dba05557f0878616281e2d2b4cb05"} Jan 08 23:39:10 crc kubenswrapper[4945]: I0108 23:39:10.652593 4945 generic.go:334] "Generic (PLEG): container finished" podID="88232648-bf7d-4f3d-83e6-2a5b25b7538c" containerID="4ebbc537247322886a422cbaf27560ae38dad9be6af2a24850f1b6a10f6ffef6" exitCode=143 Jan 08 23:39:10 crc kubenswrapper[4945]: I0108 23:39:10.652767 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"88232648-bf7d-4f3d-83e6-2a5b25b7538c","Type":"ContainerDied","Data":"4ebbc537247322886a422cbaf27560ae38dad9be6af2a24850f1b6a10f6ffef6"} Jan 08 23:39:10 crc kubenswrapper[4945]: I0108 23:39:10.655805 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2jqk" 
event={"ID":"f689d02f-2f97-45fb-9ead-02449cf8f47a","Type":"ContainerStarted","Data":"87a380d09a1d93d248460ec773b18378f87765898e1e8f75235fe180f54a806d"} Jan 08 23:39:10 crc kubenswrapper[4945]: I0108 23:39:10.971404 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:39:11 crc kubenswrapper[4945]: I0108 23:39:11.665486 4945 generic.go:334] "Generic (PLEG): container finished" podID="f689d02f-2f97-45fb-9ead-02449cf8f47a" containerID="b2678fdb5bd39861d799574b06fcdfe9c033cd18c075d0639a88cd70ea8d590c" exitCode=0 Jan 08 23:39:11 crc kubenswrapper[4945]: I0108 23:39:11.665946 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2jqk" event={"ID":"f689d02f-2f97-45fb-9ead-02449cf8f47a","Type":"ContainerDied","Data":"b2678fdb5bd39861d799574b06fcdfe9c033cd18c075d0639a88cd70ea8d590c"} Jan 08 23:39:11 crc kubenswrapper[4945]: I0108 23:39:11.670140 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"119ddb53-bb06-4c23-87a1-f51e76baffb6","Type":"ContainerStarted","Data":"472cd317f7d4c3d773d0261eb944ee181281a6c12aac5530ce6a9012343066b5"} Jan 08 23:39:11 crc kubenswrapper[4945]: I0108 23:39:11.670186 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"119ddb53-bb06-4c23-87a1-f51e76baffb6","Type":"ContainerStarted","Data":"2004a20795b7d6d73ed9549bc603b7926e14b612b34b182583abac4e4645fece"} Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.053174 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="0fe7dee5-62fc-46a9-8247-7d675b504104" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.146:9292/healthcheck\": read tcp 10.217.0.2:36422->10.217.0.146:9292: read: connection reset by peer" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.053244 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="0fe7dee5-62fc-46a9-8247-7d675b504104" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.146:9292/healthcheck\": read tcp 10.217.0.2:36418->10.217.0.146:9292: read: connection reset by peer" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.651480 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.680971 4945 generic.go:334] "Generic (PLEG): container finished" podID="0fe7dee5-62fc-46a9-8247-7d675b504104" containerID="303744c892b0ab55d1b5a28912ee8e296417aa177711168b89cf63bd774d9701" exitCode=0 Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.681042 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0fe7dee5-62fc-46a9-8247-7d675b504104","Type":"ContainerDied","Data":"303744c892b0ab55d1b5a28912ee8e296417aa177711168b89cf63bd774d9701"} Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.681638 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0fe7dee5-62fc-46a9-8247-7d675b504104","Type":"ContainerDied","Data":"19fca36f42bd4e932b4204e2cc1804ed0ade5a9f5f05f7298a4603d615cbdf60"} Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.681681 4945 scope.go:117] "RemoveContainer" containerID="303744c892b0ab55d1b5a28912ee8e296417aa177711168b89cf63bd774d9701" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.681093 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.689184 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2jqk" event={"ID":"f689d02f-2f97-45fb-9ead-02449cf8f47a","Type":"ContainerStarted","Data":"1546181cff5ef8c2c253b256fd9550061dcc9d1219b867fa915e7126bbd200d5"} Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.695599 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"119ddb53-bb06-4c23-87a1-f51e76baffb6","Type":"ContainerStarted","Data":"538e281c2fddca7f6feb32f5bac407a5e230fe0e3683dd12fc4d02ccd6c5c29f"} Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.742635 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe7dee5-62fc-46a9-8247-7d675b504104-logs\") pod \"0fe7dee5-62fc-46a9-8247-7d675b504104\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.743058 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"0fe7dee5-62fc-46a9-8247-7d675b504104\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.743226 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-config-data\") pod \"0fe7dee5-62fc-46a9-8247-7d675b504104\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.743324 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-scripts\") pod \"0fe7dee5-62fc-46a9-8247-7d675b504104\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.743426 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-internal-tls-certs\") pod 
\"0fe7dee5-62fc-46a9-8247-7d675b504104\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.743519 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0fe7dee5-62fc-46a9-8247-7d675b504104-httpd-run\") pod \"0fe7dee5-62fc-46a9-8247-7d675b504104\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.743603 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvwhw\" (UniqueName: \"kubernetes.io/projected/0fe7dee5-62fc-46a9-8247-7d675b504104-kube-api-access-wvwhw\") pod \"0fe7dee5-62fc-46a9-8247-7d675b504104\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.743729 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-combined-ca-bundle\") pod \"0fe7dee5-62fc-46a9-8247-7d675b504104\" (UID: \"0fe7dee5-62fc-46a9-8247-7d675b504104\") " Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.743973 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fe7dee5-62fc-46a9-8247-7d675b504104-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0fe7dee5-62fc-46a9-8247-7d675b504104" (UID: "0fe7dee5-62fc-46a9-8247-7d675b504104"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.744960 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fe7dee5-62fc-46a9-8247-7d675b504104-logs" (OuterVolumeSpecName: "logs") pod "0fe7dee5-62fc-46a9-8247-7d675b504104" (UID: "0fe7dee5-62fc-46a9-8247-7d675b504104"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.746018 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe7dee5-62fc-46a9-8247-7d675b504104-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.746051 4945 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0fe7dee5-62fc-46a9-8247-7d675b504104-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.749325 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-scripts" (OuterVolumeSpecName: "scripts") pod "0fe7dee5-62fc-46a9-8247-7d675b504104" (UID: "0fe7dee5-62fc-46a9-8247-7d675b504104"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.751161 4945 scope.go:117] "RemoveContainer" containerID="6ccd0f46740d1dc58c2c59df8d1c0082ab5fc03664969f5fe1f78361cade28a1" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.760138 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fe7dee5-62fc-46a9-8247-7d675b504104-kube-api-access-wvwhw" (OuterVolumeSpecName: "kube-api-access-wvwhw") pod "0fe7dee5-62fc-46a9-8247-7d675b504104" (UID: "0fe7dee5-62fc-46a9-8247-7d675b504104"). InnerVolumeSpecName "kube-api-access-wvwhw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.778586 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "0fe7dee5-62fc-46a9-8247-7d675b504104" (UID: "0fe7dee5-62fc-46a9-8247-7d675b504104"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.786428 4945 scope.go:117] "RemoveContainer" containerID="303744c892b0ab55d1b5a28912ee8e296417aa177711168b89cf63bd774d9701" Jan 08 23:39:12 crc kubenswrapper[4945]: E0108 23:39:12.788558 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"303744c892b0ab55d1b5a28912ee8e296417aa177711168b89cf63bd774d9701\": container with ID starting with 303744c892b0ab55d1b5a28912ee8e296417aa177711168b89cf63bd774d9701 not found: ID does not exist" containerID="303744c892b0ab55d1b5a28912ee8e296417aa177711168b89cf63bd774d9701" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.788709 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"303744c892b0ab55d1b5a28912ee8e296417aa177711168b89cf63bd774d9701"} err="failed to get container status \"303744c892b0ab55d1b5a28912ee8e296417aa177711168b89cf63bd774d9701\": rpc error: code = NotFound desc = could not find container \"303744c892b0ab55d1b5a28912ee8e296417aa177711168b89cf63bd774d9701\": container with ID starting with 303744c892b0ab55d1b5a28912ee8e296417aa177711168b89cf63bd774d9701 not found: ID does not exist" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.788797 4945 scope.go:117] "RemoveContainer" containerID="6ccd0f46740d1dc58c2c59df8d1c0082ab5fc03664969f5fe1f78361cade28a1" Jan 08 23:39:12 crc kubenswrapper[4945]: E0108 23:39:12.792133 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ccd0f46740d1dc58c2c59df8d1c0082ab5fc03664969f5fe1f78361cade28a1\": container with ID starting with 6ccd0f46740d1dc58c2c59df8d1c0082ab5fc03664969f5fe1f78361cade28a1 not found: ID does not exist" containerID="6ccd0f46740d1dc58c2c59df8d1c0082ab5fc03664969f5fe1f78361cade28a1" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.792227 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ccd0f46740d1dc58c2c59df8d1c0082ab5fc03664969f5fe1f78361cade28a1"} err="failed to get container status \"6ccd0f46740d1dc58c2c59df8d1c0082ab5fc03664969f5fe1f78361cade28a1\": rpc error: code = NotFound desc = could not find container \"6ccd0f46740d1dc58c2c59df8d1c0082ab5fc03664969f5fe1f78361cade28a1\": container with ID starting with 6ccd0f46740d1dc58c2c59df8d1c0082ab5fc03664969f5fe1f78361cade28a1 not found: ID does not exist" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.815873 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fe7dee5-62fc-46a9-8247-7d675b504104" (UID: "0fe7dee5-62fc-46a9-8247-7d675b504104"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.831349 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-config-data" (OuterVolumeSpecName: "config-data") pod "0fe7dee5-62fc-46a9-8247-7d675b504104" (UID: "0fe7dee5-62fc-46a9-8247-7d675b504104"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.844277 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0fe7dee5-62fc-46a9-8247-7d675b504104" (UID: "0fe7dee5-62fc-46a9-8247-7d675b504104"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.847400 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.847431 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.847441 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.847451 4945 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.847463 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvwhw\" (UniqueName: \"kubernetes.io/projected/0fe7dee5-62fc-46a9-8247-7d675b504104-kube-api-access-wvwhw\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.847474 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe7dee5-62fc-46a9-8247-7d675b504104-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.874459 4945 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 08 23:39:12 crc kubenswrapper[4945]: I0108 23:39:12.948671 4945 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.026111 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.065681 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.086424 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 08 23:39:13 crc kubenswrapper[4945]: E0108 23:39:13.087083 4945 
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.087108 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fe7dee5-62fc-46a9-8247-7d675b504104" containerName="glance-log"
Jan 08 23:39:13 crc kubenswrapper[4945]: E0108 23:39:13.087123 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fe7dee5-62fc-46a9-8247-7d675b504104" containerName="glance-httpd"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.087133 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fe7dee5-62fc-46a9-8247-7d675b504104" containerName="glance-httpd"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.087376 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fe7dee5-62fc-46a9-8247-7d675b504104" containerName="glance-log"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.087400 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fe7dee5-62fc-46a9-8247-7d675b504104" containerName="glance-httpd"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.088928 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.093792 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.094900 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.110713 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.154105 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eb23b1e-c7b1-465a-a91c-6042942e604a-logs\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.154173 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2eb23b1e-c7b1-465a-a91c-6042942e604a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.154301 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.154448 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpwxs\" (UniqueName: \"kubernetes.io/projected/2eb23b1e-c7b1-465a-a91c-6042942e604a-kube-api-access-zpwxs\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.154681 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.154732 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.154756 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.154776 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.257087 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.257152 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.257195 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.257218 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.257267 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eb23b1e-c7b1-465a-a91c-6042942e604a-logs\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0"
Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.257312 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2eb23b1e-c7b1-465a-a91c-6042942e604a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0"
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2eb23b1e-c7b1-465a-a91c-6042942e604a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.257346 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.257398 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpwxs\" (UniqueName: \"kubernetes.io/projected/2eb23b1e-c7b1-465a-a91c-6042942e604a-kube-api-access-zpwxs\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.257588 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.257811 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eb23b1e-c7b1-465a-a91c-6042942e604a-logs\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.257826 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2eb23b1e-c7b1-465a-a91c-6042942e604a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.265622 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.266024 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.267573 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.281639 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.284362 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpwxs\" (UniqueName: \"kubernetes.io/projected/2eb23b1e-c7b1-465a-a91c-6042942e604a-kube-api-access-zpwxs\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.288059 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " pod="openstack/glance-default-internal-api-0" Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.421885 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.725567 4945 generic.go:334] "Generic (PLEG): container finished" podID="f689d02f-2f97-45fb-9ead-02449cf8f47a" containerID="1546181cff5ef8c2c253b256fd9550061dcc9d1219b867fa915e7126bbd200d5" exitCode=0 Jan 08 23:39:13 crc kubenswrapper[4945]: I0108 23:39:13.725781 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2jqk" event={"ID":"f689d02f-2f97-45fb-9ead-02449cf8f47a","Type":"ContainerDied","Data":"1546181cff5ef8c2c253b256fd9550061dcc9d1219b867fa915e7126bbd200d5"} Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.012755 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fe7dee5-62fc-46a9-8247-7d675b504104" path="/var/lib/kubelet/pods/0fe7dee5-62fc-46a9-8247-7d675b504104/volumes" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.093210 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 08 23:39:14 crc kubenswrapper[4945]: W0108 23:39:14.132089 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2eb23b1e_c7b1_465a_a91c_6042942e604a.slice/crio-f21a8fdb04987cf84a6d8456e54051fb39ff64db73069ab1690dc9716b7d0579 WatchSource:0}: Error finding container f21a8fdb04987cf84a6d8456e54051fb39ff64db73069ab1690dc9716b7d0579: Status 404 returned error can't find the container with id f21a8fdb04987cf84a6d8456e54051fb39ff64db73069ab1690dc9716b7d0579 Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.628935 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.690237 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88232648-bf7d-4f3d-83e6-2a5b25b7538c-logs\") pod \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.690284 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-scripts\") pod \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.690355 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-combined-ca-bundle\") pod \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.690398 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-config-data\") pod \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.690438 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-public-tls-certs\") pod \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.690474 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.690539 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/88232648-bf7d-4f3d-83e6-2a5b25b7538c-httpd-run\") pod \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.690620 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zztxc\" (UniqueName: \"kubernetes.io/projected/88232648-bf7d-4f3d-83e6-2a5b25b7538c-kube-api-access-zztxc\") pod \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\" (UID: \"88232648-bf7d-4f3d-83e6-2a5b25b7538c\") " Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.693255 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88232648-bf7d-4f3d-83e6-2a5b25b7538c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "88232648-bf7d-4f3d-83e6-2a5b25b7538c" (UID: "88232648-bf7d-4f3d-83e6-2a5b25b7538c"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.693513 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88232648-bf7d-4f3d-83e6-2a5b25b7538c-logs" (OuterVolumeSpecName: "logs") pod "88232648-bf7d-4f3d-83e6-2a5b25b7538c" (UID: "88232648-bf7d-4f3d-83e6-2a5b25b7538c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.697441 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-scripts" (OuterVolumeSpecName: "scripts") pod "88232648-bf7d-4f3d-83e6-2a5b25b7538c" (UID: "88232648-bf7d-4f3d-83e6-2a5b25b7538c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.700569 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88232648-bf7d-4f3d-83e6-2a5b25b7538c-kube-api-access-zztxc" (OuterVolumeSpecName: "kube-api-access-zztxc") pod "88232648-bf7d-4f3d-83e6-2a5b25b7538c" (UID: "88232648-bf7d-4f3d-83e6-2a5b25b7538c"). InnerVolumeSpecName "kube-api-access-zztxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.708953 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "88232648-bf7d-4f3d-83e6-2a5b25b7538c" (UID: "88232648-bf7d-4f3d-83e6-2a5b25b7538c"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.749251 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88232648-bf7d-4f3d-83e6-2a5b25b7538c" (UID: "88232648-bf7d-4f3d-83e6-2a5b25b7538c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.766049 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-config-data" (OuterVolumeSpecName: "config-data") pod "88232648-bf7d-4f3d-83e6-2a5b25b7538c" (UID: "88232648-bf7d-4f3d-83e6-2a5b25b7538c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.766099 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2jqk" event={"ID":"f689d02f-2f97-45fb-9ead-02449cf8f47a","Type":"ContainerStarted","Data":"a77dd5b5043aa934f0579c40900107909ad6f5b23dbecbfaefa6a660f81827eb"} Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.768371 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2eb23b1e-c7b1-465a-a91c-6042942e604a","Type":"ContainerStarted","Data":"f21a8fdb04987cf84a6d8456e54051fb39ff64db73069ab1690dc9716b7d0579"} Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.773556 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "88232648-bf7d-4f3d-83e6-2a5b25b7538c" (UID: "88232648-bf7d-4f3d-83e6-2a5b25b7538c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.776837 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"119ddb53-bb06-4c23-87a1-f51e76baffb6","Type":"ContainerStarted","Data":"5b99b3ff551ff02060e4006ca1523f1873fd0027b217705befaf242d9e975aaa"} Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.777014 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerName="ceilometer-central-agent" containerID="cri-o://2004a20795b7d6d73ed9549bc603b7926e14b612b34b182583abac4e4645fece" gracePeriod=30 Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.777096 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.777144 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerName="proxy-httpd" containerID="cri-o://5b99b3ff551ff02060e4006ca1523f1873fd0027b217705befaf242d9e975aaa" gracePeriod=30 Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.777182 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerName="sg-core" containerID="cri-o://538e281c2fddca7f6feb32f5bac407a5e230fe0e3683dd12fc4d02ccd6c5c29f" gracePeriod=30 Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.777212 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerName="ceilometer-notification-agent" containerID="cri-o://472cd317f7d4c3d773d0261eb944ee181281a6c12aac5530ce6a9012343066b5" gracePeriod=30 Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.781496 4945 generic.go:334] "Generic (PLEG): container finished" podID="88232648-bf7d-4f3d-83e6-2a5b25b7538c" containerID="976ce599d8bf2a0fae3d64f5de111ba1b82a7de4dc6c09144813ddd5624a9259" exitCode=0 Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.781534 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"88232648-bf7d-4f3d-83e6-2a5b25b7538c","Type":"ContainerDied","Data":"976ce599d8bf2a0fae3d64f5de111ba1b82a7de4dc6c09144813ddd5624a9259"} Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.781558 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"88232648-bf7d-4f3d-83e6-2a5b25b7538c","Type":"ContainerDied","Data":"0b8d75aa88340d67f5224d36fad7ebf4c7bcd460519fbbcf51a7cc386f5f511a"} Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.781578 4945 scope.go:117] "RemoveContainer" containerID="976ce599d8bf2a0fae3d64f5de111ba1b82a7de4dc6c09144813ddd5624a9259" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.781668 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.793688 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d2jqk" podStartSLOduration=3.236477021 podStartE2EDuration="5.793671594s" podCreationTimestamp="2026-01-08 23:39:09 +0000 UTC" firstStartedPulling="2026-01-08 23:39:11.667865396 +0000 UTC m=+1421.979024342" lastFinishedPulling="2026-01-08 23:39:14.225059969 +0000 UTC m=+1424.536218915" observedRunningTime="2026-01-08 23:39:14.790301203 +0000 UTC m=+1425.101460149" watchObservedRunningTime="2026-01-08 23:39:14.793671594 +0000 UTC m=+1425.104830540" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.795144 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zztxc\" (UniqueName: \"kubernetes.io/projected/88232648-bf7d-4f3d-83e6-2a5b25b7538c-kube-api-access-zztxc\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.795181 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88232648-bf7d-4f3d-83e6-2a5b25b7538c-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.795190 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.795198 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.795207 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.795218 4945 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88232648-bf7d-4f3d-83e6-2a5b25b7538c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.795246 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.795257 4945 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/88232648-bf7d-4f3d-83e6-2a5b25b7538c-httpd-run\") on node \"crc\" DevicePath 
\"\"" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.826404 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.4017123209999998 podStartE2EDuration="6.826381681s" podCreationTimestamp="2026-01-08 23:39:08 +0000 UTC" firstStartedPulling="2026-01-08 23:39:10.049651441 +0000 UTC m=+1420.360810387" lastFinishedPulling="2026-01-08 23:39:13.474320801 +0000 UTC m=+1423.785479747" observedRunningTime="2026-01-08 23:39:14.823448121 +0000 UTC m=+1425.134607067" watchObservedRunningTime="2026-01-08 23:39:14.826381681 +0000 UTC m=+1425.137540627" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.828539 4945 scope.go:117] "RemoveContainer" containerID="4ebbc537247322886a422cbaf27560ae38dad9be6af2a24850f1b6a10f6ffef6" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.837271 4945 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.865099 4945 scope.go:117] "RemoveContainer" containerID="976ce599d8bf2a0fae3d64f5de111ba1b82a7de4dc6c09144813ddd5624a9259" Jan 08 23:39:14 crc kubenswrapper[4945]: E0108 23:39:14.866599 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"976ce599d8bf2a0fae3d64f5de111ba1b82a7de4dc6c09144813ddd5624a9259\": container with ID starting with 976ce599d8bf2a0fae3d64f5de111ba1b82a7de4dc6c09144813ddd5624a9259 not found: ID does not exist" containerID="976ce599d8bf2a0fae3d64f5de111ba1b82a7de4dc6c09144813ddd5624a9259" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.866651 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"976ce599d8bf2a0fae3d64f5de111ba1b82a7de4dc6c09144813ddd5624a9259"} err="failed to get container status \"976ce599d8bf2a0fae3d64f5de111ba1b82a7de4dc6c09144813ddd5624a9259\": rpc error: code = NotFound desc = could not find container \"976ce599d8bf2a0fae3d64f5de111ba1b82a7de4dc6c09144813ddd5624a9259\": container with ID starting with 976ce599d8bf2a0fae3d64f5de111ba1b82a7de4dc6c09144813ddd5624a9259 not found: ID does not exist" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.866682 4945 scope.go:117] "RemoveContainer" containerID="4ebbc537247322886a422cbaf27560ae38dad9be6af2a24850f1b6a10f6ffef6" Jan 08 23:39:14 crc kubenswrapper[4945]: E0108 23:39:14.867513 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ebbc537247322886a422cbaf27560ae38dad9be6af2a24850f1b6a10f6ffef6\": container with ID starting with 4ebbc537247322886a422cbaf27560ae38dad9be6af2a24850f1b6a10f6ffef6 not found: ID does not exist" containerID="4ebbc537247322886a422cbaf27560ae38dad9be6af2a24850f1b6a10f6ffef6" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.867545 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ebbc537247322886a422cbaf27560ae38dad9be6af2a24850f1b6a10f6ffef6"} err="failed to get container status \"4ebbc537247322886a422cbaf27560ae38dad9be6af2a24850f1b6a10f6ffef6\": rpc error: code = NotFound desc = could not find container \"4ebbc537247322886a422cbaf27560ae38dad9be6af2a24850f1b6a10f6ffef6\": container with ID starting with 4ebbc537247322886a422cbaf27560ae38dad9be6af2a24850f1b6a10f6ffef6 not found: ID does not exist" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 
23:39:14.914318 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.923643 4945 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.954112 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.969321 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:39:14 crc kubenswrapper[4945]: E0108 23:39:14.969931 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88232648-bf7d-4f3d-83e6-2a5b25b7538c" containerName="glance-log" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.969953 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="88232648-bf7d-4f3d-83e6-2a5b25b7538c" containerName="glance-log" Jan 08 23:39:14 crc kubenswrapper[4945]: E0108 23:39:14.969975 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88232648-bf7d-4f3d-83e6-2a5b25b7538c" containerName="glance-httpd" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.969983 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="88232648-bf7d-4f3d-83e6-2a5b25b7538c" containerName="glance-httpd" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.970171 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="88232648-bf7d-4f3d-83e6-2a5b25b7538c" containerName="glance-httpd" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.970188 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="88232648-bf7d-4f3d-83e6-2a5b25b7538c" containerName="glance-log" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.971147 4945 util.go:30] "No sandbox for pod can be found. 
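
Note: the paired cpu_manager.go:410 / state_mem.go:107 entries above fire while the replacement pod is admitted: the CPU and memory managers still hold per-container assignments keyed by the deleted pod UID 88232648-bf7d-4f3d-83e6-2a5b25b7538c, and RemoveStaleState sweeps them out. An illustrative sketch of that sweep, with deliberately simplified state types that stand in for the kubelet's in-memory manager state:

// Illustrative sketch of the RemoveStaleState sweep logged above: assignments
// belonging to pod UIDs that are no longer active get deleted. The state map
// is a simplification, not the kubelet's real CPU/memory manager state.
package main

import "fmt"

type containerKey struct{ podUID, containerName string }

func removeStaleState(state map[containerKey]string, activePods map[string]bool) {
	for k := range state {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.containerName)
			delete(state, k) // the "Deleted CPUSet assignment" step
		}
	}
}

func main() {
	state := map[containerKey]string{
		{"88232648-bf7d-4f3d-83e6-2a5b25b7538c", "glance-log"}:   "0-3",
		{"88232648-bf7d-4f3d-83e6-2a5b25b7538c", "glance-httpd"}: "0-3",
	}
	// Only the replacement pod's UID is active now.
	removeStaleState(state, map[string]bool{"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59": true})
}
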
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.975844 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.976096 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 08 23:39:14 crc kubenswrapper[4945]: I0108 23:39:14.984786 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.032967 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-scripts\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.033055 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.033108 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-logs\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.033148 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.033208 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksm5b\" (UniqueName: \"kubernetes.io/projected/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-kube-api-access-ksm5b\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.033244 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.033301 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-config-data\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.033334 4945 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.135309 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-config-data\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.135392 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.135519 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-scripts\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.135777 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.136619 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.136692 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-logs\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.136751 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.136841 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksm5b\" (UniqueName: \"kubernetes.io/projected/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-kube-api-access-ksm5b\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.136883 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.140058 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.140188 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-logs\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.140666 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-scripts\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.143575 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.152736 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-config-data\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.160830 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.174449 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksm5b\" (UniqueName: \"kubernetes.io/projected/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-kube-api-access-ksm5b\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.181289 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.292454 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.793091 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2eb23b1e-c7b1-465a-a91c-6042942e604a","Type":"ContainerStarted","Data":"f67a08265ea88bae6d39299224d2a2604f867f86d99af833fa2c5deefc166ff7"} Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.793812 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2eb23b1e-c7b1-465a-a91c-6042942e604a","Type":"ContainerStarted","Data":"654dfd0dae13b6eca5059e86d0a2d97564f19b2b01579e9aca119bd08b290b5c"} Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.796269 4945 generic.go:334] "Generic (PLEG): container finished" podID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerID="5b99b3ff551ff02060e4006ca1523f1873fd0027b217705befaf242d9e975aaa" exitCode=0 Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.796306 4945 generic.go:334] "Generic (PLEG): container finished" podID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerID="538e281c2fddca7f6feb32f5bac407a5e230fe0e3683dd12fc4d02ccd6c5c29f" exitCode=2 Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.796319 4945 generic.go:334] "Generic (PLEG): container finished" podID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerID="472cd317f7d4c3d773d0261eb944ee181281a6c12aac5530ce6a9012343066b5" exitCode=0 Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.796360 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"119ddb53-bb06-4c23-87a1-f51e76baffb6","Type":"ContainerDied","Data":"5b99b3ff551ff02060e4006ca1523f1873fd0027b217705befaf242d9e975aaa"} Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.796409 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"119ddb53-bb06-4c23-87a1-f51e76baffb6","Type":"ContainerDied","Data":"538e281c2fddca7f6feb32f5bac407a5e230fe0e3683dd12fc4d02ccd6c5c29f"} Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.796422 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"119ddb53-bb06-4c23-87a1-f51e76baffb6","Type":"ContainerDied","Data":"472cd317f7d4c3d773d0261eb944ee181281a6c12aac5530ce6a9012343066b5"} Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.827911 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.827893074 podStartE2EDuration="2.827893074s" podCreationTimestamp="2026-01-08 23:39:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:39:15.825169729 +0000 UTC m=+1426.136328675" watchObservedRunningTime="2026-01-08 23:39:15.827893074 +0000 UTC m=+1426.139052020" Jan 08 23:39:15 crc kubenswrapper[4945]: W0108 23:39:15.988858 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb7afdb8_52e2_4078_8a6e_5f1fea2acd59.slice/crio-c4b86bccde7fdb0dfc9ae6da6e63a7341e8e8c06f867a7589561600f2c94a74b WatchSource:0}: Error finding container c4b86bccde7fdb0dfc9ae6da6e63a7341e8e8c06f867a7589561600f2c94a74b: Status 404 returned error can't find the container with id c4b86bccde7fdb0dfc9ae6da6e63a7341e8e8c06f867a7589561600f2c94a74b Jan 08 23:39:15 crc kubenswrapper[4945]: I0108 23:39:15.996967 4945 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:39:16 crc kubenswrapper[4945]: I0108 23:39:16.041630 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88232648-bf7d-4f3d-83e6-2a5b25b7538c" path="/var/lib/kubelet/pods/88232648-bf7d-4f3d-83e6-2a5b25b7538c/volumes" Jan 08 23:39:16 crc kubenswrapper[4945]: I0108 23:39:16.815367 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59","Type":"ContainerStarted","Data":"c4b86bccde7fdb0dfc9ae6da6e63a7341e8e8c06f867a7589561600f2c94a74b"} Jan 08 23:39:17 crc kubenswrapper[4945]: I0108 23:39:17.737193 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:17 crc kubenswrapper[4945]: I0108 23:39:17.737757 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:39:18 crc kubenswrapper[4945]: I0108 23:39:18.843845 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59","Type":"ContainerStarted","Data":"15641b43f05f79e74699bfe52baf19315b239ba529af80999ae5807b2745e479"} Jan 08 23:39:19 crc kubenswrapper[4945]: I0108 23:39:19.760288 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d2jqk" Jan 08 23:39:19 crc kubenswrapper[4945]: I0108 23:39:19.760689 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d2jqk" Jan 08 23:39:19 crc kubenswrapper[4945]: I0108 23:39:19.813384 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d2jqk" Jan 08 23:39:19 crc kubenswrapper[4945]: I0108 23:39:19.855875 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59","Type":"ContainerStarted","Data":"a3b7d465ce7932bc7a2c58b3a8d58a6d40a84dc47fe41a15fffd4d2e75e42570"} Jan 08 23:39:19 crc kubenswrapper[4945]: I0108 23:39:19.883101 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.883078769 podStartE2EDuration="5.883078769s" podCreationTimestamp="2026-01-08 23:39:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:39:19.876928211 +0000 UTC m=+1430.188087157" watchObservedRunningTime="2026-01-08 23:39:19.883078769 +0000 UTC m=+1430.194237715" Jan 08 23:39:19 crc kubenswrapper[4945]: I0108 23:39:19.911149 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d2jqk" Jan 08 23:39:20 crc kubenswrapper[4945]: I0108 23:39:20.057361 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2jqk"] Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.758690 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.765839 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-scripts\") pod \"119ddb53-bb06-4c23-87a1-f51e76baffb6\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.765920 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq4l7\" (UniqueName: \"kubernetes.io/projected/119ddb53-bb06-4c23-87a1-f51e76baffb6-kube-api-access-rq4l7\") pod \"119ddb53-bb06-4c23-87a1-f51e76baffb6\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.765950 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/119ddb53-bb06-4c23-87a1-f51e76baffb6-log-httpd\") pod \"119ddb53-bb06-4c23-87a1-f51e76baffb6\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.766023 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/119ddb53-bb06-4c23-87a1-f51e76baffb6-run-httpd\") pod \"119ddb53-bb06-4c23-87a1-f51e76baffb6\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.766107 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-sg-core-conf-yaml\") pod \"119ddb53-bb06-4c23-87a1-f51e76baffb6\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.766264 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-combined-ca-bundle\") pod \"119ddb53-bb06-4c23-87a1-f51e76baffb6\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.766311 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-config-data\") pod \"119ddb53-bb06-4c23-87a1-f51e76baffb6\" (UID: \"119ddb53-bb06-4c23-87a1-f51e76baffb6\") " Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.767831 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/119ddb53-bb06-4c23-87a1-f51e76baffb6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "119ddb53-bb06-4c23-87a1-f51e76baffb6" (UID: "119ddb53-bb06-4c23-87a1-f51e76baffb6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.768199 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/119ddb53-bb06-4c23-87a1-f51e76baffb6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "119ddb53-bb06-4c23-87a1-f51e76baffb6" (UID: "119ddb53-bb06-4c23-87a1-f51e76baffb6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.793410 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-scripts" (OuterVolumeSpecName: "scripts") pod "119ddb53-bb06-4c23-87a1-f51e76baffb6" (UID: "119ddb53-bb06-4c23-87a1-f51e76baffb6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.804861 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/119ddb53-bb06-4c23-87a1-f51e76baffb6-kube-api-access-rq4l7" (OuterVolumeSpecName: "kube-api-access-rq4l7") pod "119ddb53-bb06-4c23-87a1-f51e76baffb6" (UID: "119ddb53-bb06-4c23-87a1-f51e76baffb6"). InnerVolumeSpecName "kube-api-access-rq4l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.813451 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "119ddb53-bb06-4c23-87a1-f51e76baffb6" (UID: "119ddb53-bb06-4c23-87a1-f51e76baffb6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.869064 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.869107 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq4l7\" (UniqueName: \"kubernetes.io/projected/119ddb53-bb06-4c23-87a1-f51e76baffb6-kube-api-access-rq4l7\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.869124 4945 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/119ddb53-bb06-4c23-87a1-f51e76baffb6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.869137 4945 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/119ddb53-bb06-4c23-87a1-f51e76baffb6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.869147 4945 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.874057 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "119ddb53-bb06-4c23-87a1-f51e76baffb6" (UID: "119ddb53-bb06-4c23-87a1-f51e76baffb6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.876616 4945 generic.go:334] "Generic (PLEG): container finished" podID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerID="2004a20795b7d6d73ed9549bc603b7926e14b612b34b182583abac4e4645fece" exitCode=0 Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.876825 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d2jqk" podUID="f689d02f-2f97-45fb-9ead-02449cf8f47a" containerName="registry-server" containerID="cri-o://a77dd5b5043aa934f0579c40900107909ad6f5b23dbecbfaefa6a660f81827eb" gracePeriod=2 Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.876923 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.877373 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"119ddb53-bb06-4c23-87a1-f51e76baffb6","Type":"ContainerDied","Data":"2004a20795b7d6d73ed9549bc603b7926e14b612b34b182583abac4e4645fece"} Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.877404 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"119ddb53-bb06-4c23-87a1-f51e76baffb6","Type":"ContainerDied","Data":"5127c1e603433f815fb4c5abda9ba0b4607dba05557f0878616281e2d2b4cb05"} Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.877420 4945 scope.go:117] "RemoveContainer" containerID="5b99b3ff551ff02060e4006ca1523f1873fd0027b217705befaf242d9e975aaa" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.886156 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-config-data" (OuterVolumeSpecName: "config-data") pod "119ddb53-bb06-4c23-87a1-f51e76baffb6" (UID: "119ddb53-bb06-4c23-87a1-f51e76baffb6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.903265 4945 scope.go:117] "RemoveContainer" containerID="538e281c2fddca7f6feb32f5bac407a5e230fe0e3683dd12fc4d02ccd6c5c29f" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.922822 4945 scope.go:117] "RemoveContainer" containerID="472cd317f7d4c3d773d0261eb944ee181281a6c12aac5530ce6a9012343066b5" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.970726 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.970760 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/119ddb53-bb06-4c23-87a1-f51e76baffb6-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:21 crc kubenswrapper[4945]: I0108 23:39:21.994686 4945 scope.go:117] "RemoveContainer" containerID="2004a20795b7d6d73ed9549bc603b7926e14b612b34b182583abac4e4645fece" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.021917 4945 scope.go:117] "RemoveContainer" containerID="5b99b3ff551ff02060e4006ca1523f1873fd0027b217705befaf242d9e975aaa" Jan 08 23:39:22 crc kubenswrapper[4945]: E0108 23:39:22.022564 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b99b3ff551ff02060e4006ca1523f1873fd0027b217705befaf242d9e975aaa\": container with ID starting with 5b99b3ff551ff02060e4006ca1523f1873fd0027b217705befaf242d9e975aaa not found: ID does not exist" containerID="5b99b3ff551ff02060e4006ca1523f1873fd0027b217705befaf242d9e975aaa" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.022605 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b99b3ff551ff02060e4006ca1523f1873fd0027b217705befaf242d9e975aaa"} err="failed to get container status \"5b99b3ff551ff02060e4006ca1523f1873fd0027b217705befaf242d9e975aaa\": rpc error: code = NotFound desc = could not find container \"5b99b3ff551ff02060e4006ca1523f1873fd0027b217705befaf242d9e975aaa\": container with ID starting with 5b99b3ff551ff02060e4006ca1523f1873fd0027b217705befaf242d9e975aaa not found: ID does not exist" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.022631 4945 scope.go:117] "RemoveContainer" containerID="538e281c2fddca7f6feb32f5bac407a5e230fe0e3683dd12fc4d02ccd6c5c29f" Jan 08 23:39:22 crc kubenswrapper[4945]: E0108 23:39:22.027180 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"538e281c2fddca7f6feb32f5bac407a5e230fe0e3683dd12fc4d02ccd6c5c29f\": container with ID starting with 538e281c2fddca7f6feb32f5bac407a5e230fe0e3683dd12fc4d02ccd6c5c29f not found: ID does not exist" containerID="538e281c2fddca7f6feb32f5bac407a5e230fe0e3683dd12fc4d02ccd6c5c29f" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.027235 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"538e281c2fddca7f6feb32f5bac407a5e230fe0e3683dd12fc4d02ccd6c5c29f"} err="failed to get container status \"538e281c2fddca7f6feb32f5bac407a5e230fe0e3683dd12fc4d02ccd6c5c29f\": rpc error: code = NotFound desc = could not find container \"538e281c2fddca7f6feb32f5bac407a5e230fe0e3683dd12fc4d02ccd6c5c29f\": container with ID starting with 
Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.027271 4945 scope.go:117] "RemoveContainer" containerID="472cd317f7d4c3d773d0261eb944ee181281a6c12aac5530ce6a9012343066b5"
Jan 08 23:39:22 crc kubenswrapper[4945]: E0108 23:39:22.028721 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"472cd317f7d4c3d773d0261eb944ee181281a6c12aac5530ce6a9012343066b5\": container with ID starting with 472cd317f7d4c3d773d0261eb944ee181281a6c12aac5530ce6a9012343066b5 not found: ID does not exist" containerID="472cd317f7d4c3d773d0261eb944ee181281a6c12aac5530ce6a9012343066b5"
Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.028759 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"472cd317f7d4c3d773d0261eb944ee181281a6c12aac5530ce6a9012343066b5"} err="failed to get container status \"472cd317f7d4c3d773d0261eb944ee181281a6c12aac5530ce6a9012343066b5\": rpc error: code = NotFound desc = could not find container \"472cd317f7d4c3d773d0261eb944ee181281a6c12aac5530ce6a9012343066b5\": container with ID starting with 472cd317f7d4c3d773d0261eb944ee181281a6c12aac5530ce6a9012343066b5 not found: ID does not exist"
Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.028786 4945 scope.go:117] "RemoveContainer" containerID="2004a20795b7d6d73ed9549bc603b7926e14b612b34b182583abac4e4645fece"
Jan 08 23:39:22 crc kubenswrapper[4945]: E0108 23:39:22.029088 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2004a20795b7d6d73ed9549bc603b7926e14b612b34b182583abac4e4645fece\": container with ID starting with 2004a20795b7d6d73ed9549bc603b7926e14b612b34b182583abac4e4645fece not found: ID does not exist" containerID="2004a20795b7d6d73ed9549bc603b7926e14b612b34b182583abac4e4645fece"
Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.029113 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2004a20795b7d6d73ed9549bc603b7926e14b612b34b182583abac4e4645fece"} err="failed to get container status \"2004a20795b7d6d73ed9549bc603b7926e14b612b34b182583abac4e4645fece\": rpc error: code = NotFound desc = could not find container \"2004a20795b7d6d73ed9549bc603b7926e14b612b34b182583abac4e4645fece\": container with ID starting with 2004a20795b7d6d73ed9549bc603b7926e14b612b34b182583abac4e4645fece not found: ID does not exist"
Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.203582 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.209245 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.229605 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 08 23:39:22 crc kubenswrapper[4945]: E0108 23:39:22.232172 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerName="ceilometer-central-agent"
Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.232349 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerName="ceilometer-central-agent"
Jan 08 23:39:22 crc kubenswrapper[4945]: E0108 23:39:22.232442 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerName="proxy-httpd"
podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerName="proxy-httpd" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.232517 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerName="proxy-httpd" Jan 08 23:39:22 crc kubenswrapper[4945]: E0108 23:39:22.232593 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerName="ceilometer-notification-agent" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.232671 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerName="ceilometer-notification-agent" Jan 08 23:39:22 crc kubenswrapper[4945]: E0108 23:39:22.232756 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerName="sg-core" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.232930 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerName="sg-core" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.233325 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerName="ceilometer-central-agent" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.233433 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerName="ceilometer-notification-agent" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.235532 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerName="proxy-httpd" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.235698 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" containerName="sg-core" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.237887 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.240426 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.240598 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.260222 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.276412 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.276474 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a42cef1-a63c-49f8-840f-d27eb7b6756e-run-httpd\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.276500 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-scripts\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.276528 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a42cef1-a63c-49f8-840f-d27eb7b6756e-log-httpd\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.276574 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-config-data\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.276596 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9z5h\" (UniqueName: \"kubernetes.io/projected/8a42cef1-a63c-49f8-840f-d27eb7b6756e-kube-api-access-v9z5h\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.276634 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.295325 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2jqk" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.377487 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7scj9\" (UniqueName: \"kubernetes.io/projected/f689d02f-2f97-45fb-9ead-02449cf8f47a-kube-api-access-7scj9\") pod \"f689d02f-2f97-45fb-9ead-02449cf8f47a\" (UID: \"f689d02f-2f97-45fb-9ead-02449cf8f47a\") " Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.378125 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f689d02f-2f97-45fb-9ead-02449cf8f47a-utilities\") pod \"f689d02f-2f97-45fb-9ead-02449cf8f47a\" (UID: \"f689d02f-2f97-45fb-9ead-02449cf8f47a\") " Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.378217 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f689d02f-2f97-45fb-9ead-02449cf8f47a-catalog-content\") pod \"f689d02f-2f97-45fb-9ead-02449cf8f47a\" (UID: \"f689d02f-2f97-45fb-9ead-02449cf8f47a\") " Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.378369 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.378460 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.378490 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a42cef1-a63c-49f8-840f-d27eb7b6756e-run-httpd\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.378513 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-scripts\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.378538 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a42cef1-a63c-49f8-840f-d27eb7b6756e-log-httpd\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.378582 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-config-data\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.378599 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9z5h\" (UniqueName: \"kubernetes.io/projected/8a42cef1-a63c-49f8-840f-d27eb7b6756e-kube-api-access-v9z5h\") pod \"ceilometer-0\" (UID: 
\"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.378869 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f689d02f-2f97-45fb-9ead-02449cf8f47a-utilities" (OuterVolumeSpecName: "utilities") pod "f689d02f-2f97-45fb-9ead-02449cf8f47a" (UID: "f689d02f-2f97-45fb-9ead-02449cf8f47a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.379381 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a42cef1-a63c-49f8-840f-d27eb7b6756e-log-httpd\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.379471 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a42cef1-a63c-49f8-840f-d27eb7b6756e-run-httpd\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.381480 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f689d02f-2f97-45fb-9ead-02449cf8f47a-kube-api-access-7scj9" (OuterVolumeSpecName: "kube-api-access-7scj9") pod "f689d02f-2f97-45fb-9ead-02449cf8f47a" (UID: "f689d02f-2f97-45fb-9ead-02449cf8f47a"). InnerVolumeSpecName "kube-api-access-7scj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.383399 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-config-data\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.383897 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.384468 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-scripts\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.385854 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.395482 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9z5h\" (UniqueName: \"kubernetes.io/projected/8a42cef1-a63c-49f8-840f-d27eb7b6756e-kube-api-access-v9z5h\") pod \"ceilometer-0\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.442214 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f689d02f-2f97-45fb-9ead-02449cf8f47a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f689d02f-2f97-45fb-9ead-02449cf8f47a" (UID: "f689d02f-2f97-45fb-9ead-02449cf8f47a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.480494 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7scj9\" (UniqueName: \"kubernetes.io/projected/f689d02f-2f97-45fb-9ead-02449cf8f47a-kube-api-access-7scj9\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.480535 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f689d02f-2f97-45fb-9ead-02449cf8f47a-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.480547 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f689d02f-2f97-45fb-9ead-02449cf8f47a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.540970 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.541701 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.890099 4945 generic.go:334] "Generic (PLEG): container finished" podID="f689d02f-2f97-45fb-9ead-02449cf8f47a" containerID="a77dd5b5043aa934f0579c40900107909ad6f5b23dbecbfaefa6a660f81827eb" exitCode=0 Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.890448 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2jqk" event={"ID":"f689d02f-2f97-45fb-9ead-02449cf8f47a","Type":"ContainerDied","Data":"a77dd5b5043aa934f0579c40900107909ad6f5b23dbecbfaefa6a660f81827eb"} Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.890494 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d2jqk" event={"ID":"f689d02f-2f97-45fb-9ead-02449cf8f47a","Type":"ContainerDied","Data":"87a380d09a1d93d248460ec773b18378f87765898e1e8f75235fe180f54a806d"} Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.890514 4945 scope.go:117] "RemoveContainer" containerID="a77dd5b5043aa934f0579c40900107909ad6f5b23dbecbfaefa6a660f81827eb" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.890681 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d2jqk" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.932227 4945 scope.go:117] "RemoveContainer" containerID="1546181cff5ef8c2c253b256fd9550061dcc9d1219b867fa915e7126bbd200d5" Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.941564 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d2jqk"] Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.951440 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d2jqk"] Jan 08 23:39:22 crc kubenswrapper[4945]: I0108 23:39:22.964035 4945 scope.go:117] "RemoveContainer" containerID="b2678fdb5bd39861d799574b06fcdfe9c033cd18c075d0639a88cd70ea8d590c" Jan 08 23:39:23 crc kubenswrapper[4945]: I0108 23:39:23.006770 4945 scope.go:117] "RemoveContainer" containerID="a77dd5b5043aa934f0579c40900107909ad6f5b23dbecbfaefa6a660f81827eb" Jan 08 23:39:23 crc kubenswrapper[4945]: E0108 23:39:23.007252 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a77dd5b5043aa934f0579c40900107909ad6f5b23dbecbfaefa6a660f81827eb\": container with ID starting with a77dd5b5043aa934f0579c40900107909ad6f5b23dbecbfaefa6a660f81827eb not found: ID does not exist" containerID="a77dd5b5043aa934f0579c40900107909ad6f5b23dbecbfaefa6a660f81827eb" Jan 08 23:39:23 crc kubenswrapper[4945]: I0108 23:39:23.007288 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77dd5b5043aa934f0579c40900107909ad6f5b23dbecbfaefa6a660f81827eb"} err="failed to get container status \"a77dd5b5043aa934f0579c40900107909ad6f5b23dbecbfaefa6a660f81827eb\": rpc error: code = NotFound desc = could not find container \"a77dd5b5043aa934f0579c40900107909ad6f5b23dbecbfaefa6a660f81827eb\": container with ID starting with a77dd5b5043aa934f0579c40900107909ad6f5b23dbecbfaefa6a660f81827eb not found: ID does not exist" Jan 08 23:39:23 crc kubenswrapper[4945]: I0108 23:39:23.007314 4945 scope.go:117] "RemoveContainer" containerID="1546181cff5ef8c2c253b256fd9550061dcc9d1219b867fa915e7126bbd200d5" Jan 08 23:39:23 crc kubenswrapper[4945]: E0108 23:39:23.008849 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1546181cff5ef8c2c253b256fd9550061dcc9d1219b867fa915e7126bbd200d5\": container with ID starting with 1546181cff5ef8c2c253b256fd9550061dcc9d1219b867fa915e7126bbd200d5 not found: ID does not exist" containerID="1546181cff5ef8c2c253b256fd9550061dcc9d1219b867fa915e7126bbd200d5" Jan 08 23:39:23 crc kubenswrapper[4945]: I0108 23:39:23.008886 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1546181cff5ef8c2c253b256fd9550061dcc9d1219b867fa915e7126bbd200d5"} err="failed to get container status \"1546181cff5ef8c2c253b256fd9550061dcc9d1219b867fa915e7126bbd200d5\": rpc error: code = NotFound desc = could not find container \"1546181cff5ef8c2c253b256fd9550061dcc9d1219b867fa915e7126bbd200d5\": container with ID starting with 1546181cff5ef8c2c253b256fd9550061dcc9d1219b867fa915e7126bbd200d5 not found: ID does not exist" Jan 08 23:39:23 crc kubenswrapper[4945]: I0108 23:39:23.008907 4945 scope.go:117] "RemoveContainer" containerID="b2678fdb5bd39861d799574b06fcdfe9c033cd18c075d0639a88cd70ea8d590c" Jan 08 23:39:23 crc kubenswrapper[4945]: E0108 23:39:23.009188 4945 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b2678fdb5bd39861d799574b06fcdfe9c033cd18c075d0639a88cd70ea8d590c\": container with ID starting with b2678fdb5bd39861d799574b06fcdfe9c033cd18c075d0639a88cd70ea8d590c not found: ID does not exist" containerID="b2678fdb5bd39861d799574b06fcdfe9c033cd18c075d0639a88cd70ea8d590c" Jan 08 23:39:23 crc kubenswrapper[4945]: I0108 23:39:23.009210 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2678fdb5bd39861d799574b06fcdfe9c033cd18c075d0639a88cd70ea8d590c"} err="failed to get container status \"b2678fdb5bd39861d799574b06fcdfe9c033cd18c075d0639a88cd70ea8d590c\": rpc error: code = NotFound desc = could not find container \"b2678fdb5bd39861d799574b06fcdfe9c033cd18c075d0639a88cd70ea8d590c\": container with ID starting with b2678fdb5bd39861d799574b06fcdfe9c033cd18c075d0639a88cd70ea8d590c not found: ID does not exist" Jan 08 23:39:23 crc kubenswrapper[4945]: I0108 23:39:23.174555 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:39:23 crc kubenswrapper[4945]: W0108 23:39:23.184563 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a42cef1_a63c_49f8_840f_d27eb7b6756e.slice/crio-24658ffd99ba6884e3fd888a64a78388a829e9a140ff2804313dbe160671bcbf WatchSource:0}: Error finding container 24658ffd99ba6884e3fd888a64a78388a829e9a140ff2804313dbe160671bcbf: Status 404 returned error can't find the container with id 24658ffd99ba6884e3fd888a64a78388a829e9a140ff2804313dbe160671bcbf Jan 08 23:39:23 crc kubenswrapper[4945]: I0108 23:39:23.422613 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 08 23:39:23 crc kubenswrapper[4945]: I0108 23:39:23.422933 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 08 23:39:23 crc kubenswrapper[4945]: I0108 23:39:23.453845 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 08 23:39:23 crc kubenswrapper[4945]: I0108 23:39:23.461972 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 08 23:39:23 crc kubenswrapper[4945]: I0108 23:39:23.914979 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a42cef1-a63c-49f8-840f-d27eb7b6756e","Type":"ContainerStarted","Data":"a331e99d9f4b1d9369b6f4d373033babf4c0eba723c97b10344aec02bfebdf45"} Jan 08 23:39:23 crc kubenswrapper[4945]: I0108 23:39:23.915058 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a42cef1-a63c-49f8-840f-d27eb7b6756e","Type":"ContainerStarted","Data":"24658ffd99ba6884e3fd888a64a78388a829e9a140ff2804313dbe160671bcbf"} Jan 08 23:39:23 crc kubenswrapper[4945]: I0108 23:39:23.917877 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 08 23:39:23 crc kubenswrapper[4945]: I0108 23:39:23.917917 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 08 23:39:24 crc kubenswrapper[4945]: I0108 23:39:24.014016 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="119ddb53-bb06-4c23-87a1-f51e76baffb6" 
path="/var/lib/kubelet/pods/119ddb53-bb06-4c23-87a1-f51e76baffb6/volumes" Jan 08 23:39:24 crc kubenswrapper[4945]: I0108 23:39:24.014935 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f689d02f-2f97-45fb-9ead-02449cf8f47a" path="/var/lib/kubelet/pods/f689d02f-2f97-45fb-9ead-02449cf8f47a/volumes" Jan 08 23:39:24 crc kubenswrapper[4945]: I0108 23:39:24.934475 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a42cef1-a63c-49f8-840f-d27eb7b6756e","Type":"ContainerStarted","Data":"dade63dfaac1d628bd94d9a9591991960cbf6b620c6590650bc8f4ecfb5d93b4"} Jan 08 23:39:25 crc kubenswrapper[4945]: I0108 23:39:25.293062 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 08 23:39:25 crc kubenswrapper[4945]: I0108 23:39:25.293117 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 08 23:39:25 crc kubenswrapper[4945]: I0108 23:39:25.329117 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 08 23:39:25 crc kubenswrapper[4945]: I0108 23:39:25.343487 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 08 23:39:25 crc kubenswrapper[4945]: I0108 23:39:25.945602 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a42cef1-a63c-49f8-840f-d27eb7b6756e","Type":"ContainerStarted","Data":"db79605c2ed80bd2d548bf277631f03577b8cbe9e011f1cd84ad907c9a6a3fc6"} Jan 08 23:39:25 crc kubenswrapper[4945]: I0108 23:39:25.945981 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 08 23:39:25 crc kubenswrapper[4945]: I0108 23:39:25.947148 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 08 23:39:26 crc kubenswrapper[4945]: I0108 23:39:26.207305 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 08 23:39:26 crc kubenswrapper[4945]: I0108 23:39:26.207823 4945 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 08 23:39:26 crc kubenswrapper[4945]: I0108 23:39:26.210825 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 08 23:39:26 crc kubenswrapper[4945]: I0108 23:39:26.956004 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a42cef1-a63c-49f8-840f-d27eb7b6756e","Type":"ContainerStarted","Data":"39fbd88b80e8855d93242d4301e95fcb2b0abc99139c17af176009427e790331"} Jan 08 23:39:26 crc kubenswrapper[4945]: I0108 23:39:26.956227 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerName="ceilometer-central-agent" containerID="cri-o://a331e99d9f4b1d9369b6f4d373033babf4c0eba723c97b10344aec02bfebdf45" gracePeriod=30 Jan 08 23:39:26 crc kubenswrapper[4945]: I0108 23:39:26.956362 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerName="ceilometer-notification-agent" containerID="cri-o://dade63dfaac1d628bd94d9a9591991960cbf6b620c6590650bc8f4ecfb5d93b4" gracePeriod=30 Jan 08 23:39:26 
crc kubenswrapper[4945]: I0108 23:39:26.956294 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerName="proxy-httpd" containerID="cri-o://39fbd88b80e8855d93242d4301e95fcb2b0abc99139c17af176009427e790331" gracePeriod=30
Jan 08 23:39:26 crc kubenswrapper[4945]: I0108 23:39:26.956462 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerName="sg-core" containerID="cri-o://db79605c2ed80bd2d548bf277631f03577b8cbe9e011f1cd84ad907c9a6a3fc6" gracePeriod=30
Jan 08 23:39:26 crc kubenswrapper[4945]: I0108 23:39:26.987596 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.624280877 podStartE2EDuration="4.98757838s" podCreationTimestamp="2026-01-08 23:39:22 +0000 UTC" firstStartedPulling="2026-01-08 23:39:23.187078285 +0000 UTC m=+1433.498237231" lastFinishedPulling="2026-01-08 23:39:26.550375788 +0000 UTC m=+1436.861534734" observedRunningTime="2026-01-08 23:39:26.979704291 +0000 UTC m=+1437.290863237" watchObservedRunningTime="2026-01-08 23:39:26.98757838 +0000 UTC m=+1437.298737326"
Jan 08 23:39:27 crc kubenswrapper[4945]: I0108 23:39:27.974640 4945 generic.go:334] "Generic (PLEG): container finished" podID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerID="39fbd88b80e8855d93242d4301e95fcb2b0abc99139c17af176009427e790331" exitCode=0
Jan 08 23:39:27 crc kubenswrapper[4945]: I0108 23:39:27.975004 4945 generic.go:334] "Generic (PLEG): container finished" podID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerID="db79605c2ed80bd2d548bf277631f03577b8cbe9e011f1cd84ad907c9a6a3fc6" exitCode=2
Jan 08 23:39:27 crc kubenswrapper[4945]: I0108 23:39:27.975019 4945 generic.go:334] "Generic (PLEG): container finished" podID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerID="dade63dfaac1d628bd94d9a9591991960cbf6b620c6590650bc8f4ecfb5d93b4" exitCode=0
Jan 08 23:39:27 crc kubenswrapper[4945]: I0108 23:39:27.974718 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a42cef1-a63c-49f8-840f-d27eb7b6756e","Type":"ContainerDied","Data":"39fbd88b80e8855d93242d4301e95fcb2b0abc99139c17af176009427e790331"}
Jan 08 23:39:27 crc kubenswrapper[4945]: I0108 23:39:27.975125 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a42cef1-a63c-49f8-840f-d27eb7b6756e","Type":"ContainerDied","Data":"db79605c2ed80bd2d548bf277631f03577b8cbe9e011f1cd84ad907c9a6a3fc6"}
Jan 08 23:39:27 crc kubenswrapper[4945]: I0108 23:39:27.975136 4945 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 08 23:39:27 crc kubenswrapper[4945]: I0108 23:39:27.975160 4945 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 08 23:39:27 crc kubenswrapper[4945]: I0108 23:39:27.975141 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a42cef1-a63c-49f8-840f-d27eb7b6756e","Type":"ContainerDied","Data":"dade63dfaac1d628bd94d9a9591991960cbf6b620c6590650bc8f4ecfb5d93b4"}
Jan 08 23:39:27 crc kubenswrapper[4945]: I0108 23:39:27.992672 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 08 23:39:28 crc kubenswrapper[4945]: I0108 23:39:28.042614 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
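The pod_startup_latency_tracker line above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (23:39:26.98757838 - 23:39:22 = 4.98757838s), and podStartSLOduration additionally excludes the image-pull window (23:39:26.550375788 - 23:39:23.187078285 = 3.363297503s), leaving 4.98757838 - 3.363297503 = 1.624280877s. A small check of that arithmetic with the timestamps copied from the log line:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Timestamps copied from the "Observed pod startup duration" record.
    created := time.Date(2026, 1, 8, 23, 39, 22, 0, time.UTC)
    watchObservedRunning := time.Date(2026, 1, 8, 23, 39, 26, 987578380, time.UTC)
    firstStartedPulling := time.Date(2026, 1, 8, 23, 39, 23, 187078285, time.UTC)
    lastFinishedPulling := time.Date(2026, 1, 8, 23, 39, 26, 550375788, time.UTC)

    e2e := watchObservedRunning.Sub(created)             // 4.98757838s == podStartE2EDuration
    pull := lastFinishedPulling.Sub(firstStartedPulling) // 3.363297503s of image pulling
    fmt.Println(e2e, pull, e2e-pull)                     // e2e-pull: 1.624280877s == podStartSLOduration
}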
pod="openstack/glance-default-external-api-0" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.452346 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-rzphm"] Jan 08 23:39:31 crc kubenswrapper[4945]: E0108 23:39:31.453380 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f689d02f-2f97-45fb-9ead-02449cf8f47a" containerName="extract-content" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.453400 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f689d02f-2f97-45fb-9ead-02449cf8f47a" containerName="extract-content" Jan 08 23:39:31 crc kubenswrapper[4945]: E0108 23:39:31.453422 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f689d02f-2f97-45fb-9ead-02449cf8f47a" containerName="registry-server" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.453432 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f689d02f-2f97-45fb-9ead-02449cf8f47a" containerName="registry-server" Jan 08 23:39:31 crc kubenswrapper[4945]: E0108 23:39:31.453452 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f689d02f-2f97-45fb-9ead-02449cf8f47a" containerName="extract-utilities" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.453461 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f689d02f-2f97-45fb-9ead-02449cf8f47a" containerName="extract-utilities" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.453708 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f689d02f-2f97-45fb-9ead-02449cf8f47a" containerName="registry-server" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.454502 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-rzphm" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.466463 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-rzphm"] Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.556699 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/739bfff7-d0fc-41c9-a590-ae8dae65a02c-operator-scripts\") pod \"nova-api-db-create-rzphm\" (UID: \"739bfff7-d0fc-41c9-a590-ae8dae65a02c\") " pod="openstack/nova-api-db-create-rzphm" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.556790 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqqrp\" (UniqueName: \"kubernetes.io/projected/739bfff7-d0fc-41c9-a590-ae8dae65a02c-kube-api-access-vqqrp\") pod \"nova-api-db-create-rzphm\" (UID: \"739bfff7-d0fc-41c9-a590-ae8dae65a02c\") " pod="openstack/nova-api-db-create-rzphm" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.655711 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-qxmf7"] Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.660565 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqqrp\" (UniqueName: \"kubernetes.io/projected/739bfff7-d0fc-41c9-a590-ae8dae65a02c-kube-api-access-vqqrp\") pod \"nova-api-db-create-rzphm\" (UID: \"739bfff7-d0fc-41c9-a590-ae8dae65a02c\") " pod="openstack/nova-api-db-create-rzphm" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.660743 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/739bfff7-d0fc-41c9-a590-ae8dae65a02c-operator-scripts\") pod \"nova-api-db-create-rzphm\" (UID: \"739bfff7-d0fc-41c9-a590-ae8dae65a02c\") " pod="openstack/nova-api-db-create-rzphm" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.661693 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/739bfff7-d0fc-41c9-a590-ae8dae65a02c-operator-scripts\") pod \"nova-api-db-create-rzphm\" (UID: \"739bfff7-d0fc-41c9-a590-ae8dae65a02c\") " pod="openstack/nova-api-db-create-rzphm" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.666020 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qxmf7" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.669766 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2784-account-create-update-nfhmc"] Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.695929 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqqrp\" (UniqueName: \"kubernetes.io/projected/739bfff7-d0fc-41c9-a590-ae8dae65a02c-kube-api-access-vqqrp\") pod \"nova-api-db-create-rzphm\" (UID: \"739bfff7-d0fc-41c9-a590-ae8dae65a02c\") " pod="openstack/nova-api-db-create-rzphm" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.705345 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qxmf7"] Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.705503 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2784-account-create-update-nfhmc" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.710882 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.744180 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2784-account-create-update-nfhmc"] Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.764537 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx2hc\" (UniqueName: \"kubernetes.io/projected/c4205b5b-eedb-4e63-9535-452815f376f6-kube-api-access-bx2hc\") pod \"nova-cell0-db-create-qxmf7\" (UID: \"c4205b5b-eedb-4e63-9535-452815f376f6\") " pod="openstack/nova-cell0-db-create-qxmf7" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.764663 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzkkv\" (UniqueName: \"kubernetes.io/projected/0d8ce681-eb6b-419d-ba7e-ab78f58c08a8-kube-api-access-vzkkv\") pod \"nova-api-2784-account-create-update-nfhmc\" (UID: \"0d8ce681-eb6b-419d-ba7e-ab78f58c08a8\") " pod="openstack/nova-api-2784-account-create-update-nfhmc" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.764749 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d8ce681-eb6b-419d-ba7e-ab78f58c08a8-operator-scripts\") pod \"nova-api-2784-account-create-update-nfhmc\" (UID: \"0d8ce681-eb6b-419d-ba7e-ab78f58c08a8\") " pod="openstack/nova-api-2784-account-create-update-nfhmc" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.764860 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c4205b5b-eedb-4e63-9535-452815f376f6-operator-scripts\") pod \"nova-cell0-db-create-qxmf7\" (UID: \"c4205b5b-eedb-4e63-9535-452815f376f6\") " pod="openstack/nova-cell0-db-create-qxmf7" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.771193 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-rzphm" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.778807 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7n26t"] Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.779967 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7n26t" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.795789 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7n26t"] Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.866613 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4205b5b-eedb-4e63-9535-452815f376f6-operator-scripts\") pod \"nova-cell0-db-create-qxmf7\" (UID: \"c4205b5b-eedb-4e63-9535-452815f376f6\") " pod="openstack/nova-cell0-db-create-qxmf7" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.866671 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e9f018-13fb-40ef-bc38-0453684d5e6c-operator-scripts\") pod \"nova-cell1-db-create-7n26t\" (UID: \"58e9f018-13fb-40ef-bc38-0453684d5e6c\") " pod="openstack/nova-cell1-db-create-7n26t" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.866726 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx2hc\" (UniqueName: \"kubernetes.io/projected/c4205b5b-eedb-4e63-9535-452815f376f6-kube-api-access-bx2hc\") pod \"nova-cell0-db-create-qxmf7\" (UID: \"c4205b5b-eedb-4e63-9535-452815f376f6\") " pod="openstack/nova-cell0-db-create-qxmf7" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.866745 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44pvw\" (UniqueName: \"kubernetes.io/projected/58e9f018-13fb-40ef-bc38-0453684d5e6c-kube-api-access-44pvw\") pod \"nova-cell1-db-create-7n26t\" (UID: \"58e9f018-13fb-40ef-bc38-0453684d5e6c\") " pod="openstack/nova-cell1-db-create-7n26t" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.866789 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzkkv\" (UniqueName: \"kubernetes.io/projected/0d8ce681-eb6b-419d-ba7e-ab78f58c08a8-kube-api-access-vzkkv\") pod \"nova-api-2784-account-create-update-nfhmc\" (UID: \"0d8ce681-eb6b-419d-ba7e-ab78f58c08a8\") " pod="openstack/nova-api-2784-account-create-update-nfhmc" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.866829 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d8ce681-eb6b-419d-ba7e-ab78f58c08a8-operator-scripts\") pod \"nova-api-2784-account-create-update-nfhmc\" (UID: \"0d8ce681-eb6b-419d-ba7e-ab78f58c08a8\") " pod="openstack/nova-api-2784-account-create-update-nfhmc" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.868786 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c4205b5b-eedb-4e63-9535-452815f376f6-operator-scripts\") pod \"nova-cell0-db-create-qxmf7\" (UID: \"c4205b5b-eedb-4e63-9535-452815f376f6\") " pod="openstack/nova-cell0-db-create-qxmf7" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.873096 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9ea6-account-create-update-w2dkh"] Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.874335 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9ea6-account-create-update-w2dkh" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.881349 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.882876 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9ea6-account-create-update-w2dkh"] Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.886418 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzkkv\" (UniqueName: \"kubernetes.io/projected/0d8ce681-eb6b-419d-ba7e-ab78f58c08a8-kube-api-access-vzkkv\") pod \"nova-api-2784-account-create-update-nfhmc\" (UID: \"0d8ce681-eb6b-419d-ba7e-ab78f58c08a8\") " pod="openstack/nova-api-2784-account-create-update-nfhmc" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.886645 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx2hc\" (UniqueName: \"kubernetes.io/projected/c4205b5b-eedb-4e63-9535-452815f376f6-kube-api-access-bx2hc\") pod \"nova-cell0-db-create-qxmf7\" (UID: \"c4205b5b-eedb-4e63-9535-452815f376f6\") " pod="openstack/nova-cell0-db-create-qxmf7" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.900648 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d8ce681-eb6b-419d-ba7e-ab78f58c08a8-operator-scripts\") pod \"nova-api-2784-account-create-update-nfhmc\" (UID: \"0d8ce681-eb6b-419d-ba7e-ab78f58c08a8\") " pod="openstack/nova-api-2784-account-create-update-nfhmc" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.968473 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e9f018-13fb-40ef-bc38-0453684d5e6c-operator-scripts\") pod \"nova-cell1-db-create-7n26t\" (UID: \"58e9f018-13fb-40ef-bc38-0453684d5e6c\") " pod="openstack/nova-cell1-db-create-7n26t" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.968517 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64fc29fe-804c-4553-811c-014595972fbd-operator-scripts\") pod \"nova-cell0-9ea6-account-create-update-w2dkh\" (UID: \"64fc29fe-804c-4553-811c-014595972fbd\") " pod="openstack/nova-cell0-9ea6-account-create-update-w2dkh" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.968581 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44pvw\" (UniqueName: \"kubernetes.io/projected/58e9f018-13fb-40ef-bc38-0453684d5e6c-kube-api-access-44pvw\") pod \"nova-cell1-db-create-7n26t\" (UID: \"58e9f018-13fb-40ef-bc38-0453684d5e6c\") " pod="openstack/nova-cell1-db-create-7n26t" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.968619 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ddbcl\" (UniqueName: \"kubernetes.io/projected/64fc29fe-804c-4553-811c-014595972fbd-kube-api-access-ddbcl\") pod \"nova-cell0-9ea6-account-create-update-w2dkh\" (UID: \"64fc29fe-804c-4553-811c-014595972fbd\") " pod="openstack/nova-cell0-9ea6-account-create-update-w2dkh" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.969657 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e9f018-13fb-40ef-bc38-0453684d5e6c-operator-scripts\") pod \"nova-cell1-db-create-7n26t\" (UID: \"58e9f018-13fb-40ef-bc38-0453684d5e6c\") " pod="openstack/nova-cell1-db-create-7n26t" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.989386 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44pvw\" (UniqueName: \"kubernetes.io/projected/58e9f018-13fb-40ef-bc38-0453684d5e6c-kube-api-access-44pvw\") pod \"nova-cell1-db-create-7n26t\" (UID: \"58e9f018-13fb-40ef-bc38-0453684d5e6c\") " pod="openstack/nova-cell1-db-create-7n26t" Jan 08 23:39:31 crc kubenswrapper[4945]: I0108 23:39:31.995080 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qxmf7" Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.069576 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2784-account-create-update-nfhmc" Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.071578 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ae6f-account-create-update-d4jd4"] Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.073740 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64fc29fe-804c-4553-811c-014595972fbd-operator-scripts\") pod \"nova-cell0-9ea6-account-create-update-w2dkh\" (UID: \"64fc29fe-804c-4553-811c-014595972fbd\") " pod="openstack/nova-cell0-9ea6-account-create-update-w2dkh" Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.073824 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddbcl\" (UniqueName: \"kubernetes.io/projected/64fc29fe-804c-4553-811c-014595972fbd-kube-api-access-ddbcl\") pod \"nova-cell0-9ea6-account-create-update-w2dkh\" (UID: \"64fc29fe-804c-4553-811c-014595972fbd\") " pod="openstack/nova-cell0-9ea6-account-create-update-w2dkh" Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.075290 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ae6f-account-create-update-d4jd4" Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.075531 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64fc29fe-804c-4553-811c-014595972fbd-operator-scripts\") pod \"nova-cell0-9ea6-account-create-update-w2dkh\" (UID: \"64fc29fe-804c-4553-811c-014595972fbd\") " pod="openstack/nova-cell0-9ea6-account-create-update-w2dkh" Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.081217 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.084786 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ae6f-account-create-update-d4jd4"] Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.098338 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddbcl\" (UniqueName: \"kubernetes.io/projected/64fc29fe-804c-4553-811c-014595972fbd-kube-api-access-ddbcl\") pod \"nova-cell0-9ea6-account-create-update-w2dkh\" (UID: \"64fc29fe-804c-4553-811c-014595972fbd\") " pod="openstack/nova-cell0-9ea6-account-create-update-w2dkh" Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.175697 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30241cbb-d52a-4c7f-9d0c-2d44522952f7-operator-scripts\") pod \"nova-cell1-ae6f-account-create-update-d4jd4\" (UID: \"30241cbb-d52a-4c7f-9d0c-2d44522952f7\") " pod="openstack/nova-cell1-ae6f-account-create-update-d4jd4" Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.175772 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c79mv\" (UniqueName: \"kubernetes.io/projected/30241cbb-d52a-4c7f-9d0c-2d44522952f7-kube-api-access-c79mv\") pod \"nova-cell1-ae6f-account-create-update-d4jd4\" (UID: \"30241cbb-d52a-4c7f-9d0c-2d44522952f7\") " pod="openstack/nova-cell1-ae6f-account-create-update-d4jd4" Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.268370 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7n26t" Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.278031 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30241cbb-d52a-4c7f-9d0c-2d44522952f7-operator-scripts\") pod \"nova-cell1-ae6f-account-create-update-d4jd4\" (UID: \"30241cbb-d52a-4c7f-9d0c-2d44522952f7\") " pod="openstack/nova-cell1-ae6f-account-create-update-d4jd4" Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.278104 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c79mv\" (UniqueName: \"kubernetes.io/projected/30241cbb-d52a-4c7f-9d0c-2d44522952f7-kube-api-access-c79mv\") pod \"nova-cell1-ae6f-account-create-update-d4jd4\" (UID: \"30241cbb-d52a-4c7f-9d0c-2d44522952f7\") " pod="openstack/nova-cell1-ae6f-account-create-update-d4jd4" Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.278763 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30241cbb-d52a-4c7f-9d0c-2d44522952f7-operator-scripts\") pod \"nova-cell1-ae6f-account-create-update-d4jd4\" (UID: \"30241cbb-d52a-4c7f-9d0c-2d44522952f7\") " pod="openstack/nova-cell1-ae6f-account-create-update-d4jd4" Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.280330 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9ea6-account-create-update-w2dkh" Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.293595 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c79mv\" (UniqueName: \"kubernetes.io/projected/30241cbb-d52a-4c7f-9d0c-2d44522952f7-kube-api-access-c79mv\") pod \"nova-cell1-ae6f-account-create-update-d4jd4\" (UID: \"30241cbb-d52a-4c7f-9d0c-2d44522952f7\") " pod="openstack/nova-cell1-ae6f-account-create-update-d4jd4" Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.398276 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ae6f-account-create-update-d4jd4" Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.412046 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-rzphm"] Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.530843 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-qxmf7"] Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.555796 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2784-account-create-update-nfhmc"] Jan 08 23:39:32 crc kubenswrapper[4945]: I0108 23:39:32.905659 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7n26t"] Jan 08 23:39:33 crc kubenswrapper[4945]: I0108 23:39:33.015435 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9ea6-account-create-update-w2dkh"] Jan 08 23:39:33 crc kubenswrapper[4945]: I0108 23:39:33.024005 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-rzphm" event={"ID":"739bfff7-d0fc-41c9-a590-ae8dae65a02c","Type":"ContainerStarted","Data":"6f17b0b722fa6f30d2fca7b5200e82aaf9a3bc45f98bb1e38f697f910cb272f7"} Jan 08 23:39:33 crc kubenswrapper[4945]: I0108 23:39:33.024085 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-rzphm" event={"ID":"739bfff7-d0fc-41c9-a590-ae8dae65a02c","Type":"ContainerStarted","Data":"f9f3f23f6b1fe94a6976169b6b400648f2e993b22318b37ce29fe14ece3e5d85"} Jan 08 23:39:33 crc kubenswrapper[4945]: I0108 23:39:33.031407 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2784-account-create-update-nfhmc" event={"ID":"0d8ce681-eb6b-419d-ba7e-ab78f58c08a8","Type":"ContainerStarted","Data":"c295b5dc0e462b17060d5d6d27d7bd1a35a24144841937e619dc1ab66cd8fd48"} Jan 08 23:39:33 crc kubenswrapper[4945]: I0108 23:39:33.033573 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7n26t" event={"ID":"58e9f018-13fb-40ef-bc38-0453684d5e6c","Type":"ContainerStarted","Data":"b3becf2fb6a4dc50328a861001bef458c41f7c23fe1ff915aef15aa4922c3556"} Jan 08 23:39:33 crc kubenswrapper[4945]: I0108 23:39:33.043804 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qxmf7" event={"ID":"c4205b5b-eedb-4e63-9535-452815f376f6","Type":"ContainerStarted","Data":"30c511260ed86f2ae0035a3b1b0f8391850c39679881c6648ca7e4cd3152fef5"} Jan 08 23:39:33 crc kubenswrapper[4945]: I0108 23:39:33.043853 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qxmf7" event={"ID":"c4205b5b-eedb-4e63-9535-452815f376f6","Type":"ContainerStarted","Data":"e22b5422cb18454b42557435398769e86efcfb58878771cfeacbb3bf5465866c"} Jan 08 23:39:33 crc kubenswrapper[4945]: I0108 23:39:33.055715 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-rzphm" podStartSLOduration=2.055686859 podStartE2EDuration="2.055686859s" podCreationTimestamp="2026-01-08 23:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:39:33.044435438 +0000 UTC m=+1443.355594404" watchObservedRunningTime="2026-01-08 23:39:33.055686859 +0000 UTC m=+1443.366845805" Jan 08 23:39:33 crc kubenswrapper[4945]: I0108 23:39:33.066030 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-db-create-qxmf7" podStartSLOduration=2.065983547 podStartE2EDuration="2.065983547s" podCreationTimestamp="2026-01-08 23:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:39:33.056208761 +0000 UTC m=+1443.367367707" watchObservedRunningTime="2026-01-08 23:39:33.065983547 +0000 UTC m=+1443.377142503" Jan 08 23:39:33 crc kubenswrapper[4945]: I0108 23:39:33.120862 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ae6f-account-create-update-d4jd4"] Jan 08 23:39:33 crc kubenswrapper[4945]: W0108 23:39:33.564266 4945 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58e9f018_13fb_40ef_bc38_0453684d5e6c.slice/crio-e9530af93f88c2e40ecb927cafcfe87968eb9889ade8ee091f469a28c020a331.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58e9f018_13fb_40ef_bc38_0453684d5e6c.slice/crio-e9530af93f88c2e40ecb927cafcfe87968eb9889ade8ee091f469a28c020a331.scope: no such file or directory Jan 08 23:39:33 crc kubenswrapper[4945]: W0108 23:39:33.564934 4945 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64fc29fe_804c_4553_811c_014595972fbd.slice/crio-conmon-a7c4378b65d96c48e0e459a696a2e715a7cd896621e7b23334ef422367eb67d9.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64fc29fe_804c_4553_811c_014595972fbd.slice/crio-conmon-a7c4378b65d96c48e0e459a696a2e715a7cd896621e7b23334ef422367eb67d9.scope: no such file or directory Jan 08 23:39:33 crc kubenswrapper[4945]: W0108 23:39:33.565084 4945 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64fc29fe_804c_4553_811c_014595972fbd.slice/crio-a7c4378b65d96c48e0e459a696a2e715a7cd896621e7b23334ef422367eb67d9.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64fc29fe_804c_4553_811c_014595972fbd.slice/crio-a7c4378b65d96c48e0e459a696a2e715a7cd896621e7b23334ef422367eb67d9.scope: no such file or directory Jan 08 23:39:33 crc kubenswrapper[4945]: E0108 23:39:33.793167 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a42cef1_a63c_49f8_840f_d27eb7b6756e.slice/crio-conmon-a331e99d9f4b1d9369b6f4d373033babf4c0eba723c97b10344aec02bfebdf45.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod739bfff7_d0fc_41c9_a590_ae8dae65a02c.slice/crio-conmon-6f17b0b722fa6f30d2fca7b5200e82aaf9a3bc45f98bb1e38f697f910cb272f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a42cef1_a63c_49f8_840f_d27eb7b6756e.slice/crio-a331e99d9f4b1d9369b6f4d373033babf4c0eba723c97b10344aec02bfebdf45.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d8ce681_eb6b_419d_ba7e_ab78f58c08a8.slice/crio-conmon-d8bca21aed5c1e2f7ef4338d1b56a3c8c184051de9f2c3af10a793a5c16ee47e.scope\": RecentStats: unable to find data in memory 
cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58e9f018_13fb_40ef_bc38_0453684d5e6c.slice/crio-conmon-e9530af93f88c2e40ecb927cafcfe87968eb9889ade8ee091f469a28c020a331.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod739bfff7_d0fc_41c9_a590_ae8dae65a02c.slice/crio-6f17b0b722fa6f30d2fca7b5200e82aaf9a3bc45f98bb1e38f697f910cb272f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4205b5b_eedb_4e63_9535_452815f376f6.slice/crio-30c511260ed86f2ae0035a3b1b0f8391850c39679881c6648ca7e4cd3152fef5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d8ce681_eb6b_419d_ba7e_ab78f58c08a8.slice/crio-d8bca21aed5c1e2f7ef4338d1b56a3c8c184051de9f2c3af10a793a5c16ee47e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4205b5b_eedb_4e63_9535_452815f376f6.slice/crio-conmon-30c511260ed86f2ae0035a3b1b0f8391850c39679881c6648ca7e4cd3152fef5.scope\": RecentStats: unable to find data in memory cache]" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.027532 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.072253 4945 generic.go:334] "Generic (PLEG): container finished" podID="0d8ce681-eb6b-419d-ba7e-ab78f58c08a8" containerID="d8bca21aed5c1e2f7ef4338d1b56a3c8c184051de9f2c3af10a793a5c16ee47e" exitCode=0 Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.072605 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2784-account-create-update-nfhmc" event={"ID":"0d8ce681-eb6b-419d-ba7e-ab78f58c08a8","Type":"ContainerDied","Data":"d8bca21aed5c1e2f7ef4338d1b56a3c8c184051de9f2c3af10a793a5c16ee47e"} Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.078748 4945 generic.go:334] "Generic (PLEG): container finished" podID="58e9f018-13fb-40ef-bc38-0453684d5e6c" containerID="e9530af93f88c2e40ecb927cafcfe87968eb9889ade8ee091f469a28c020a331" exitCode=0 Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.078827 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7n26t" event={"ID":"58e9f018-13fb-40ef-bc38-0453684d5e6c","Type":"ContainerDied","Data":"e9530af93f88c2e40ecb927cafcfe87968eb9889ade8ee091f469a28c020a331"} Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.080959 4945 generic.go:334] "Generic (PLEG): container finished" podID="64fc29fe-804c-4553-811c-014595972fbd" containerID="a7c4378b65d96c48e0e459a696a2e715a7cd896621e7b23334ef422367eb67d9" exitCode=0 Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.081050 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9ea6-account-create-update-w2dkh" event={"ID":"64fc29fe-804c-4553-811c-014595972fbd","Type":"ContainerDied","Data":"a7c4378b65d96c48e0e459a696a2e715a7cd896621e7b23334ef422367eb67d9"} Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.081075 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9ea6-account-create-update-w2dkh" event={"ID":"64fc29fe-804c-4553-811c-014595972fbd","Type":"ContainerStarted","Data":"cd74d00b7fc3c80a99e0c51136b6c13dc03ce52c7588520994835f29c2194969"} Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.082511 4945 generic.go:334] "Generic 
(PLEG): container finished" podID="c4205b5b-eedb-4e63-9535-452815f376f6" containerID="30c511260ed86f2ae0035a3b1b0f8391850c39679881c6648ca7e4cd3152fef5" exitCode=0 Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.082570 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qxmf7" event={"ID":"c4205b5b-eedb-4e63-9535-452815f376f6","Type":"ContainerDied","Data":"30c511260ed86f2ae0035a3b1b0f8391850c39679881c6648ca7e4cd3152fef5"} Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.087573 4945 generic.go:334] "Generic (PLEG): container finished" podID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerID="a331e99d9f4b1d9369b6f4d373033babf4c0eba723c97b10344aec02bfebdf45" exitCode=0 Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.087823 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a42cef1-a63c-49f8-840f-d27eb7b6756e","Type":"ContainerDied","Data":"a331e99d9f4b1d9369b6f4d373033babf4c0eba723c97b10344aec02bfebdf45"} Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.087863 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a42cef1-a63c-49f8-840f-d27eb7b6756e","Type":"ContainerDied","Data":"24658ffd99ba6884e3fd888a64a78388a829e9a140ff2804313dbe160671bcbf"} Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.087891 4945 scope.go:117] "RemoveContainer" containerID="39fbd88b80e8855d93242d4301e95fcb2b0abc99139c17af176009427e790331" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.088106 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.095687 4945 generic.go:334] "Generic (PLEG): container finished" podID="739bfff7-d0fc-41c9-a590-ae8dae65a02c" containerID="6f17b0b722fa6f30d2fca7b5200e82aaf9a3bc45f98bb1e38f697f910cb272f7" exitCode=0 Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.095901 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-rzphm" event={"ID":"739bfff7-d0fc-41c9-a590-ae8dae65a02c","Type":"ContainerDied","Data":"6f17b0b722fa6f30d2fca7b5200e82aaf9a3bc45f98bb1e38f697f910cb272f7"} Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.098707 4945 generic.go:334] "Generic (PLEG): container finished" podID="30241cbb-d52a-4c7f-9d0c-2d44522952f7" containerID="51776af5292a91f8828255736d607dd27da9610cd5dd1bc6a3f0b5f8b09b65f8" exitCode=0 Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.098759 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ae6f-account-create-update-d4jd4" event={"ID":"30241cbb-d52a-4c7f-9d0c-2d44522952f7","Type":"ContainerDied","Data":"51776af5292a91f8828255736d607dd27da9610cd5dd1bc6a3f0b5f8b09b65f8"} Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.098787 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ae6f-account-create-update-d4jd4" event={"ID":"30241cbb-d52a-4c7f-9d0c-2d44522952f7","Type":"ContainerStarted","Data":"7d6c02e177e32008e30f28f52d506230f0fb467fad6fb73d0d26b75829142552"} Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.121313 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a42cef1-a63c-49f8-840f-d27eb7b6756e-run-httpd\") pod \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 
23:39:34.121831 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a42cef1-a63c-49f8-840f-d27eb7b6756e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8a42cef1-a63c-49f8-840f-d27eb7b6756e" (UID: "8a42cef1-a63c-49f8-840f-d27eb7b6756e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.121391 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-scripts\") pod \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.121903 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-combined-ca-bundle\") pod \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.121977 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9z5h\" (UniqueName: \"kubernetes.io/projected/8a42cef1-a63c-49f8-840f-d27eb7b6756e-kube-api-access-v9z5h\") pod \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.122024 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-sg-core-conf-yaml\") pod \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.122139 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a42cef1-a63c-49f8-840f-d27eb7b6756e-log-httpd\") pod \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.122273 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-config-data\") pod \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\" (UID: \"8a42cef1-a63c-49f8-840f-d27eb7b6756e\") " Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.124556 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a42cef1-a63c-49f8-840f-d27eb7b6756e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8a42cef1-a63c-49f8-840f-d27eb7b6756e" (UID: "8a42cef1-a63c-49f8-840f-d27eb7b6756e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.124783 4945 scope.go:117] "RemoveContainer" containerID="db79605c2ed80bd2d548bf277631f03577b8cbe9e011f1cd84ad907c9a6a3fc6" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.125860 4945 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a42cef1-a63c-49f8-840f-d27eb7b6756e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.125881 4945 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a42cef1-a63c-49f8-840f-d27eb7b6756e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.131164 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a42cef1-a63c-49f8-840f-d27eb7b6756e-kube-api-access-v9z5h" (OuterVolumeSpecName: "kube-api-access-v9z5h") pod "8a42cef1-a63c-49f8-840f-d27eb7b6756e" (UID: "8a42cef1-a63c-49f8-840f-d27eb7b6756e"). InnerVolumeSpecName "kube-api-access-v9z5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.131847 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-scripts" (OuterVolumeSpecName: "scripts") pod "8a42cef1-a63c-49f8-840f-d27eb7b6756e" (UID: "8a42cef1-a63c-49f8-840f-d27eb7b6756e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.178755 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8a42cef1-a63c-49f8-840f-d27eb7b6756e" (UID: "8a42cef1-a63c-49f8-840f-d27eb7b6756e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.227757 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.227793 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9z5h\" (UniqueName: \"kubernetes.io/projected/8a42cef1-a63c-49f8-840f-d27eb7b6756e-kube-api-access-v9z5h\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.227803 4945 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.230478 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a42cef1-a63c-49f8-840f-d27eb7b6756e" (UID: "8a42cef1-a63c-49f8-840f-d27eb7b6756e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.252014 4945 scope.go:117] "RemoveContainer" containerID="dade63dfaac1d628bd94d9a9591991960cbf6b620c6590650bc8f4ecfb5d93b4" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.272863 4945 scope.go:117] "RemoveContainer" containerID="a331e99d9f4b1d9369b6f4d373033babf4c0eba723c97b10344aec02bfebdf45" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.275064 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-config-data" (OuterVolumeSpecName: "config-data") pod "8a42cef1-a63c-49f8-840f-d27eb7b6756e" (UID: "8a42cef1-a63c-49f8-840f-d27eb7b6756e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.291274 4945 scope.go:117] "RemoveContainer" containerID="39fbd88b80e8855d93242d4301e95fcb2b0abc99139c17af176009427e790331" Jan 08 23:39:34 crc kubenswrapper[4945]: E0108 23:39:34.291815 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39fbd88b80e8855d93242d4301e95fcb2b0abc99139c17af176009427e790331\": container with ID starting with 39fbd88b80e8855d93242d4301e95fcb2b0abc99139c17af176009427e790331 not found: ID does not exist" containerID="39fbd88b80e8855d93242d4301e95fcb2b0abc99139c17af176009427e790331" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.291858 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39fbd88b80e8855d93242d4301e95fcb2b0abc99139c17af176009427e790331"} err="failed to get container status \"39fbd88b80e8855d93242d4301e95fcb2b0abc99139c17af176009427e790331\": rpc error: code = NotFound desc = could not find container \"39fbd88b80e8855d93242d4301e95fcb2b0abc99139c17af176009427e790331\": container with ID starting with 39fbd88b80e8855d93242d4301e95fcb2b0abc99139c17af176009427e790331 not found: ID does not exist" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.291887 4945 scope.go:117] "RemoveContainer" containerID="db79605c2ed80bd2d548bf277631f03577b8cbe9e011f1cd84ad907c9a6a3fc6" Jan 08 23:39:34 crc kubenswrapper[4945]: E0108 23:39:34.292255 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db79605c2ed80bd2d548bf277631f03577b8cbe9e011f1cd84ad907c9a6a3fc6\": container with ID starting with db79605c2ed80bd2d548bf277631f03577b8cbe9e011f1cd84ad907c9a6a3fc6 not found: ID does not exist" containerID="db79605c2ed80bd2d548bf277631f03577b8cbe9e011f1cd84ad907c9a6a3fc6" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.292315 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db79605c2ed80bd2d548bf277631f03577b8cbe9e011f1cd84ad907c9a6a3fc6"} err="failed to get container status \"db79605c2ed80bd2d548bf277631f03577b8cbe9e011f1cd84ad907c9a6a3fc6\": rpc error: code = NotFound desc = could not find container \"db79605c2ed80bd2d548bf277631f03577b8cbe9e011f1cd84ad907c9a6a3fc6\": container with ID starting with db79605c2ed80bd2d548bf277631f03577b8cbe9e011f1cd84ad907c9a6a3fc6 not found: ID does not exist" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.292348 4945 scope.go:117] "RemoveContainer" containerID="dade63dfaac1d628bd94d9a9591991960cbf6b620c6590650bc8f4ecfb5d93b4" Jan 08 23:39:34 crc kubenswrapper[4945]: E0108 23:39:34.292678 
4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dade63dfaac1d628bd94d9a9591991960cbf6b620c6590650bc8f4ecfb5d93b4\": container with ID starting with dade63dfaac1d628bd94d9a9591991960cbf6b620c6590650bc8f4ecfb5d93b4 not found: ID does not exist" containerID="dade63dfaac1d628bd94d9a9591991960cbf6b620c6590650bc8f4ecfb5d93b4" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.292703 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dade63dfaac1d628bd94d9a9591991960cbf6b620c6590650bc8f4ecfb5d93b4"} err="failed to get container status \"dade63dfaac1d628bd94d9a9591991960cbf6b620c6590650bc8f4ecfb5d93b4\": rpc error: code = NotFound desc = could not find container \"dade63dfaac1d628bd94d9a9591991960cbf6b620c6590650bc8f4ecfb5d93b4\": container with ID starting with dade63dfaac1d628bd94d9a9591991960cbf6b620c6590650bc8f4ecfb5d93b4 not found: ID does not exist" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.292718 4945 scope.go:117] "RemoveContainer" containerID="a331e99d9f4b1d9369b6f4d373033babf4c0eba723c97b10344aec02bfebdf45" Jan 08 23:39:34 crc kubenswrapper[4945]: E0108 23:39:34.292928 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a331e99d9f4b1d9369b6f4d373033babf4c0eba723c97b10344aec02bfebdf45\": container with ID starting with a331e99d9f4b1d9369b6f4d373033babf4c0eba723c97b10344aec02bfebdf45 not found: ID does not exist" containerID="a331e99d9f4b1d9369b6f4d373033babf4c0eba723c97b10344aec02bfebdf45" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.292974 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a331e99d9f4b1d9369b6f4d373033babf4c0eba723c97b10344aec02bfebdf45"} err="failed to get container status \"a331e99d9f4b1d9369b6f4d373033babf4c0eba723c97b10344aec02bfebdf45\": rpc error: code = NotFound desc = could not find container \"a331e99d9f4b1d9369b6f4d373033babf4c0eba723c97b10344aec02bfebdf45\": container with ID starting with a331e99d9f4b1d9369b6f4d373033babf4c0eba723c97b10344aec02bfebdf45 not found: ID does not exist" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.333262 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.333307 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a42cef1-a63c-49f8-840f-d27eb7b6756e-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.431771 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.503081 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.547922 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:39:34 crc kubenswrapper[4945]: E0108 23:39:34.558619 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerName="ceilometer-notification-agent" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.559383 4945 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerName="ceilometer-notification-agent" Jan 08 23:39:34 crc kubenswrapper[4945]: E0108 23:39:34.559474 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerName="ceilometer-central-agent" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.559537 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerName="ceilometer-central-agent" Jan 08 23:39:34 crc kubenswrapper[4945]: E0108 23:39:34.559609 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerName="sg-core" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.559671 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerName="sg-core" Jan 08 23:39:34 crc kubenswrapper[4945]: E0108 23:39:34.559751 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerName="proxy-httpd" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.559812 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerName="proxy-httpd" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.560194 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerName="ceilometer-notification-agent" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.562168 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerName="ceilometer-central-agent" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.562194 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerName="sg-core" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.562210 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" containerName="proxy-httpd" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.564439 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.570467 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.571297 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.598888 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.652229 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-config-data\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.652289 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-run-httpd\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.652358 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.652381 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-scripts\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.652399 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.652448 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmwf6\" (UniqueName: \"kubernetes.io/projected/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-kube-api-access-tmwf6\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.652479 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-log-httpd\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.754113 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-scripts\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.754194 4945 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.754264 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmwf6\" (UniqueName: \"kubernetes.io/projected/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-kube-api-access-tmwf6\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.754312 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-log-httpd\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.754372 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-config-data\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.754410 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-run-httpd\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.754472 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.755586 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-log-httpd\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.756029 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-run-httpd\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.759967 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-scripts\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.760870 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-config-data\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.760939 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.761751 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.772373 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmwf6\" (UniqueName: \"kubernetes.io/projected/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-kube-api-access-tmwf6\") pod \"ceilometer-0\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " pod="openstack/ceilometer-0" Jan 08 23:39:34 crc kubenswrapper[4945]: I0108 23:39:34.897762 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.686350 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-rzphm" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.725192 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qxmf7" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.778258 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/739bfff7-d0fc-41c9-a590-ae8dae65a02c-operator-scripts\") pod \"739bfff7-d0fc-41c9-a590-ae8dae65a02c\" (UID: \"739bfff7-d0fc-41c9-a590-ae8dae65a02c\") " Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.778388 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx2hc\" (UniqueName: \"kubernetes.io/projected/c4205b5b-eedb-4e63-9535-452815f376f6-kube-api-access-bx2hc\") pod \"c4205b5b-eedb-4e63-9535-452815f376f6\" (UID: \"c4205b5b-eedb-4e63-9535-452815f376f6\") " Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.778562 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4205b5b-eedb-4e63-9535-452815f376f6-operator-scripts\") pod \"c4205b5b-eedb-4e63-9535-452815f376f6\" (UID: \"c4205b5b-eedb-4e63-9535-452815f376f6\") " Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.778624 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqqrp\" (UniqueName: \"kubernetes.io/projected/739bfff7-d0fc-41c9-a590-ae8dae65a02c-kube-api-access-vqqrp\") pod \"739bfff7-d0fc-41c9-a590-ae8dae65a02c\" (UID: \"739bfff7-d0fc-41c9-a590-ae8dae65a02c\") " Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.780213 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4205b5b-eedb-4e63-9535-452815f376f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4205b5b-eedb-4e63-9535-452815f376f6" (UID: "c4205b5b-eedb-4e63-9535-452815f376f6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.780299 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739bfff7-d0fc-41c9-a590-ae8dae65a02c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "739bfff7-d0fc-41c9-a590-ae8dae65a02c" (UID: "739bfff7-d0fc-41c9-a590-ae8dae65a02c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.788373 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/739bfff7-d0fc-41c9-a590-ae8dae65a02c-kube-api-access-vqqrp" (OuterVolumeSpecName: "kube-api-access-vqqrp") pod "739bfff7-d0fc-41c9-a590-ae8dae65a02c" (UID: "739bfff7-d0fc-41c9-a590-ae8dae65a02c"). InnerVolumeSpecName "kube-api-access-vqqrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.793185 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4205b5b-eedb-4e63-9535-452815f376f6-kube-api-access-bx2hc" (OuterVolumeSpecName: "kube-api-access-bx2hc") pod "c4205b5b-eedb-4e63-9535-452815f376f6" (UID: "c4205b5b-eedb-4e63-9535-452815f376f6"). InnerVolumeSpecName "kube-api-access-bx2hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.856761 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:39:35 crc kubenswrapper[4945]: W0108 23:39:35.864200 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6aaff1fb_a539_4e1d_8634_5d6ae7d90bd2.slice/crio-527ede25cb4345a3762fc179233b3ceb4987ece4683e1a054d96f9a95a0f105c WatchSource:0}: Error finding container 527ede25cb4345a3762fc179233b3ceb4987ece4683e1a054d96f9a95a0f105c: Status 404 returned error can't find the container with id 527ede25cb4345a3762fc179233b3ceb4987ece4683e1a054d96f9a95a0f105c Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.881072 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqqrp\" (UniqueName: \"kubernetes.io/projected/739bfff7-d0fc-41c9-a590-ae8dae65a02c-kube-api-access-vqqrp\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.881111 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/739bfff7-d0fc-41c9-a590-ae8dae65a02c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.881121 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx2hc\" (UniqueName: \"kubernetes.io/projected/c4205b5b-eedb-4e63-9535-452815f376f6-kube-api-access-bx2hc\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.881131 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4205b5b-eedb-4e63-9535-452815f376f6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.903961 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ae6f-account-create-update-d4jd4" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.914015 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7n26t" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.922711 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9ea6-account-create-update-w2dkh" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.954076 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2784-account-create-update-nfhmc" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.982159 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64fc29fe-804c-4553-811c-014595972fbd-operator-scripts\") pod \"64fc29fe-804c-4553-811c-014595972fbd\" (UID: \"64fc29fe-804c-4553-811c-014595972fbd\") " Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.982366 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c79mv\" (UniqueName: \"kubernetes.io/projected/30241cbb-d52a-4c7f-9d0c-2d44522952f7-kube-api-access-c79mv\") pod \"30241cbb-d52a-4c7f-9d0c-2d44522952f7\" (UID: \"30241cbb-d52a-4c7f-9d0c-2d44522952f7\") " Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.982393 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44pvw\" (UniqueName: \"kubernetes.io/projected/58e9f018-13fb-40ef-bc38-0453684d5e6c-kube-api-access-44pvw\") pod \"58e9f018-13fb-40ef-bc38-0453684d5e6c\" (UID: \"58e9f018-13fb-40ef-bc38-0453684d5e6c\") " Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.982427 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30241cbb-d52a-4c7f-9d0c-2d44522952f7-operator-scripts\") pod \"30241cbb-d52a-4c7f-9d0c-2d44522952f7\" (UID: \"30241cbb-d52a-4c7f-9d0c-2d44522952f7\") " Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.982454 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e9f018-13fb-40ef-bc38-0453684d5e6c-operator-scripts\") pod \"58e9f018-13fb-40ef-bc38-0453684d5e6c\" (UID: \"58e9f018-13fb-40ef-bc38-0453684d5e6c\") " Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.982520 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddbcl\" (UniqueName: \"kubernetes.io/projected/64fc29fe-804c-4553-811c-014595972fbd-kube-api-access-ddbcl\") pod \"64fc29fe-804c-4553-811c-014595972fbd\" (UID: \"64fc29fe-804c-4553-811c-014595972fbd\") " Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.982910 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64fc29fe-804c-4553-811c-014595972fbd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64fc29fe-804c-4553-811c-014595972fbd" (UID: "64fc29fe-804c-4553-811c-014595972fbd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.983015 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30241cbb-d52a-4c7f-9d0c-2d44522952f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "30241cbb-d52a-4c7f-9d0c-2d44522952f7" (UID: "30241cbb-d52a-4c7f-9d0c-2d44522952f7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.983319 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58e9f018-13fb-40ef-bc38-0453684d5e6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58e9f018-13fb-40ef-bc38-0453684d5e6c" (UID: "58e9f018-13fb-40ef-bc38-0453684d5e6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.987169 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30241cbb-d52a-4c7f-9d0c-2d44522952f7-kube-api-access-c79mv" (OuterVolumeSpecName: "kube-api-access-c79mv") pod "30241cbb-d52a-4c7f-9d0c-2d44522952f7" (UID: "30241cbb-d52a-4c7f-9d0c-2d44522952f7"). InnerVolumeSpecName "kube-api-access-c79mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.987260 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e9f018-13fb-40ef-bc38-0453684d5e6c-kube-api-access-44pvw" (OuterVolumeSpecName: "kube-api-access-44pvw") pod "58e9f018-13fb-40ef-bc38-0453684d5e6c" (UID: "58e9f018-13fb-40ef-bc38-0453684d5e6c"). InnerVolumeSpecName "kube-api-access-44pvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:39:35 crc kubenswrapper[4945]: I0108 23:39:35.988113 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64fc29fe-804c-4553-811c-014595972fbd-kube-api-access-ddbcl" (OuterVolumeSpecName: "kube-api-access-ddbcl") pod "64fc29fe-804c-4553-811c-014595972fbd" (UID: "64fc29fe-804c-4553-811c-014595972fbd"). InnerVolumeSpecName "kube-api-access-ddbcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.015472 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a42cef1-a63c-49f8-840f-d27eb7b6756e" path="/var/lib/kubelet/pods/8a42cef1-a63c-49f8-840f-d27eb7b6756e/volumes" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.084646 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzkkv\" (UniqueName: \"kubernetes.io/projected/0d8ce681-eb6b-419d-ba7e-ab78f58c08a8-kube-api-access-vzkkv\") pod \"0d8ce681-eb6b-419d-ba7e-ab78f58c08a8\" (UID: \"0d8ce681-eb6b-419d-ba7e-ab78f58c08a8\") " Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.084725 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d8ce681-eb6b-419d-ba7e-ab78f58c08a8-operator-scripts\") pod \"0d8ce681-eb6b-419d-ba7e-ab78f58c08a8\" (UID: \"0d8ce681-eb6b-419d-ba7e-ab78f58c08a8\") " Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.086131 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c79mv\" (UniqueName: \"kubernetes.io/projected/30241cbb-d52a-4c7f-9d0c-2d44522952f7-kube-api-access-c79mv\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.086162 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44pvw\" (UniqueName: \"kubernetes.io/projected/58e9f018-13fb-40ef-bc38-0453684d5e6c-kube-api-access-44pvw\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.086175 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/30241cbb-d52a-4c7f-9d0c-2d44522952f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.086189 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e9f018-13fb-40ef-bc38-0453684d5e6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.086204 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddbcl\" (UniqueName: \"kubernetes.io/projected/64fc29fe-804c-4553-811c-014595972fbd-kube-api-access-ddbcl\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.086217 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64fc29fe-804c-4553-811c-014595972fbd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.090725 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d8ce681-eb6b-419d-ba7e-ab78f58c08a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d8ce681-eb6b-419d-ba7e-ab78f58c08a8" (UID: "0d8ce681-eb6b-419d-ba7e-ab78f58c08a8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.096242 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d8ce681-eb6b-419d-ba7e-ab78f58c08a8-kube-api-access-vzkkv" (OuterVolumeSpecName: "kube-api-access-vzkkv") pod "0d8ce681-eb6b-419d-ba7e-ab78f58c08a8" (UID: "0d8ce681-eb6b-419d-ba7e-ab78f58c08a8"). InnerVolumeSpecName "kube-api-access-vzkkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.125059 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9ea6-account-create-update-w2dkh" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.125571 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9ea6-account-create-update-w2dkh" event={"ID":"64fc29fe-804c-4553-811c-014595972fbd","Type":"ContainerDied","Data":"cd74d00b7fc3c80a99e0c51136b6c13dc03ce52c7588520994835f29c2194969"} Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.125652 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd74d00b7fc3c80a99e0c51136b6c13dc03ce52c7588520994835f29c2194969" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.128201 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-qxmf7" event={"ID":"c4205b5b-eedb-4e63-9535-452815f376f6","Type":"ContainerDied","Data":"e22b5422cb18454b42557435398769e86efcfb58878771cfeacbb3bf5465866c"} Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.128309 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e22b5422cb18454b42557435398769e86efcfb58878771cfeacbb3bf5465866c" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.128505 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-qxmf7" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.130345 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-rzphm" event={"ID":"739bfff7-d0fc-41c9-a590-ae8dae65a02c","Type":"ContainerDied","Data":"f9f3f23f6b1fe94a6976169b6b400648f2e993b22318b37ce29fe14ece3e5d85"} Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.130388 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9f3f23f6b1fe94a6976169b6b400648f2e993b22318b37ce29fe14ece3e5d85" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.130364 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-rzphm" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.133182 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ae6f-account-create-update-d4jd4" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.133256 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ae6f-account-create-update-d4jd4" event={"ID":"30241cbb-d52a-4c7f-9d0c-2d44522952f7","Type":"ContainerDied","Data":"7d6c02e177e32008e30f28f52d506230f0fb467fad6fb73d0d26b75829142552"} Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.133314 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d6c02e177e32008e30f28f52d506230f0fb467fad6fb73d0d26b75829142552" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.135791 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2784-account-create-update-nfhmc" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.135829 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2784-account-create-update-nfhmc" event={"ID":"0d8ce681-eb6b-419d-ba7e-ab78f58c08a8","Type":"ContainerDied","Data":"c295b5dc0e462b17060d5d6d27d7bd1a35a24144841937e619dc1ab66cd8fd48"} Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.135879 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c295b5dc0e462b17060d5d6d27d7bd1a35a24144841937e619dc1ab66cd8fd48" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.137739 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7n26t" event={"ID":"58e9f018-13fb-40ef-bc38-0453684d5e6c","Type":"ContainerDied","Data":"b3becf2fb6a4dc50328a861001bef458c41f7c23fe1ff915aef15aa4922c3556"} Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.137781 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3becf2fb6a4dc50328a861001bef458c41f7c23fe1ff915aef15aa4922c3556" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.137750 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7n26t" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.138869 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2","Type":"ContainerStarted","Data":"527ede25cb4345a3762fc179233b3ceb4987ece4683e1a054d96f9a95a0f105c"} Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.188316 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzkkv\" (UniqueName: \"kubernetes.io/projected/0d8ce681-eb6b-419d-ba7e-ab78f58c08a8-kube-api-access-vzkkv\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:36 crc kubenswrapper[4945]: I0108 23:39:36.188355 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d8ce681-eb6b-419d-ba7e-ab78f58c08a8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:39:37 crc kubenswrapper[4945]: I0108 23:39:37.169225 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2","Type":"ContainerStarted","Data":"4bde91cf3e0a5525ca3e9e4af364a4f91188868c857dd11824c057a64b0297ff"} Jan 08 23:39:38 crc kubenswrapper[4945]: I0108 23:39:38.184829 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2","Type":"ContainerStarted","Data":"4204131ac4aef0f50da97d12b9abfc3151980ef915332a14fe5bec16cf447416"} Jan 08 23:39:38 crc kubenswrapper[4945]: I0108 23:39:38.185942 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2","Type":"ContainerStarted","Data":"d2b6c5d8efa9daf4218845be10f6933e161c5993a18dc8706b3a193fb0e55594"} Jan 08 23:39:39 crc kubenswrapper[4945]: I0108 23:39:39.200515 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2","Type":"ContainerStarted","Data":"98713816a2189f4f905bf3a8893efb2b59bccb028cc108368b58a4f47af1395f"} Jan 08 23:39:39 crc kubenswrapper[4945]: I0108 23:39:39.202067 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" 
Jan 08 23:39:39 crc kubenswrapper[4945]: I0108 23:39:39.231375 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.414467963 podStartE2EDuration="5.231347076s" podCreationTimestamp="2026-01-08 23:39:34 +0000 UTC" firstStartedPulling="2026-01-08 23:39:35.867455368 +0000 UTC m=+1446.178614314" lastFinishedPulling="2026-01-08 23:39:38.684334461 +0000 UTC m=+1448.995493427" observedRunningTime="2026-01-08 23:39:39.228812185 +0000 UTC m=+1449.539971151" watchObservedRunningTime="2026-01-08 23:39:39.231347076 +0000 UTC m=+1449.542506022"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.256632 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xsz4z"]
Jan 08 23:39:42 crc kubenswrapper[4945]: E0108 23:39:42.257396 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4205b5b-eedb-4e63-9535-452815f376f6" containerName="mariadb-database-create"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.257413 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4205b5b-eedb-4e63-9535-452815f376f6" containerName="mariadb-database-create"
Jan 08 23:39:42 crc kubenswrapper[4945]: E0108 23:39:42.257439 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d8ce681-eb6b-419d-ba7e-ab78f58c08a8" containerName="mariadb-account-create-update"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.257447 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d8ce681-eb6b-419d-ba7e-ab78f58c08a8" containerName="mariadb-account-create-update"
Jan 08 23:39:42 crc kubenswrapper[4945]: E0108 23:39:42.257460 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e9f018-13fb-40ef-bc38-0453684d5e6c" containerName="mariadb-database-create"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.257468 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e9f018-13fb-40ef-bc38-0453684d5e6c" containerName="mariadb-database-create"
Jan 08 23:39:42 crc kubenswrapper[4945]: E0108 23:39:42.257482 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64fc29fe-804c-4553-811c-014595972fbd" containerName="mariadb-account-create-update"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.257491 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="64fc29fe-804c-4553-811c-014595972fbd" containerName="mariadb-account-create-update"
Jan 08 23:39:42 crc kubenswrapper[4945]: E0108 23:39:42.257502 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30241cbb-d52a-4c7f-9d0c-2d44522952f7" containerName="mariadb-account-create-update"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.257510 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="30241cbb-d52a-4c7f-9d0c-2d44522952f7" containerName="mariadb-account-create-update"
Jan 08 23:39:42 crc kubenswrapper[4945]: E0108 23:39:42.257522 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739bfff7-d0fc-41c9-a590-ae8dae65a02c" containerName="mariadb-database-create"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.257529 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="739bfff7-d0fc-41c9-a590-ae8dae65a02c" containerName="mariadb-database-create"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.257700 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4205b5b-eedb-4e63-9535-452815f376f6" containerName="mariadb-database-create"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.257712 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="30241cbb-d52a-4c7f-9d0c-2d44522952f7" containerName="mariadb-account-create-update"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.257723 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="739bfff7-d0fc-41c9-a590-ae8dae65a02c" containerName="mariadb-database-create"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.257741 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="64fc29fe-804c-4553-811c-014595972fbd" containerName="mariadb-account-create-update"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.257752 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d8ce681-eb6b-419d-ba7e-ab78f58c08a8" containerName="mariadb-account-create-update"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.257765 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e9f018-13fb-40ef-bc38-0453684d5e6c" containerName="mariadb-database-create"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.258467 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xsz4z"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.265982 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fnqwh"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.266012 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.266276 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.294760 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xsz4z"]
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.345838 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xsz4z\" (UID: \"51598250-e998-4bd9-8846-179741f8c0b9\") " pod="openstack/nova-cell0-conductor-db-sync-xsz4z"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.345916 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-config-data\") pod \"nova-cell0-conductor-db-sync-xsz4z\" (UID: \"51598250-e998-4bd9-8846-179741f8c0b9\") " pod="openstack/nova-cell0-conductor-db-sync-xsz4z"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.345950 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttssh\" (UniqueName: \"kubernetes.io/projected/51598250-e998-4bd9-8846-179741f8c0b9-kube-api-access-ttssh\") pod \"nova-cell0-conductor-db-sync-xsz4z\" (UID: \"51598250-e998-4bd9-8846-179741f8c0b9\") " pod="openstack/nova-cell0-conductor-db-sync-xsz4z"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.346017 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-scripts\") pod \"nova-cell0-conductor-db-sync-xsz4z\" (UID: \"51598250-e998-4bd9-8846-179741f8c0b9\") " pod="openstack/nova-cell0-conductor-db-sync-xsz4z"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.448140 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xsz4z\" (UID: \"51598250-e998-4bd9-8846-179741f8c0b9\") " pod="openstack/nova-cell0-conductor-db-sync-xsz4z"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.448218 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-config-data\") pod \"nova-cell0-conductor-db-sync-xsz4z\" (UID: \"51598250-e998-4bd9-8846-179741f8c0b9\") " pod="openstack/nova-cell0-conductor-db-sync-xsz4z"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.448244 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttssh\" (UniqueName: \"kubernetes.io/projected/51598250-e998-4bd9-8846-179741f8c0b9-kube-api-access-ttssh\") pod \"nova-cell0-conductor-db-sync-xsz4z\" (UID: \"51598250-e998-4bd9-8846-179741f8c0b9\") " pod="openstack/nova-cell0-conductor-db-sync-xsz4z"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.448284 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-scripts\") pod \"nova-cell0-conductor-db-sync-xsz4z\" (UID: \"51598250-e998-4bd9-8846-179741f8c0b9\") " pod="openstack/nova-cell0-conductor-db-sync-xsz4z"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.473361 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-scripts\") pod \"nova-cell0-conductor-db-sync-xsz4z\" (UID: \"51598250-e998-4bd9-8846-179741f8c0b9\") " pod="openstack/nova-cell0-conductor-db-sync-xsz4z"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.473629 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-xsz4z\" (UID: \"51598250-e998-4bd9-8846-179741f8c0b9\") " pod="openstack/nova-cell0-conductor-db-sync-xsz4z"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.473639 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-config-data\") pod \"nova-cell0-conductor-db-sync-xsz4z\" (UID: \"51598250-e998-4bd9-8846-179741f8c0b9\") " pod="openstack/nova-cell0-conductor-db-sync-xsz4z"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.479215 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttssh\" (UniqueName: \"kubernetes.io/projected/51598250-e998-4bd9-8846-179741f8c0b9-kube-api-access-ttssh\") pod \"nova-cell0-conductor-db-sync-xsz4z\" (UID: \"51598250-e998-4bd9-8846-179741f8c0b9\") " pod="openstack/nova-cell0-conductor-db-sync-xsz4z"
Jan 08 23:39:42 crc kubenswrapper[4945]: I0108 23:39:42.581236 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xsz4z"
Jan 08 23:39:43 crc kubenswrapper[4945]: I0108 23:39:43.075605 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xsz4z"]
Jan 08 23:39:43 crc kubenswrapper[4945]: W0108 23:39:43.077158 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51598250_e998_4bd9_8846_179741f8c0b9.slice/crio-e955bae85e7da1c8ec2044f3a728e3fda615f112fc055a17b0ae5bcf4a1db444 WatchSource:0}: Error finding container e955bae85e7da1c8ec2044f3a728e3fda615f112fc055a17b0ae5bcf4a1db444: Status 404 returned error can't find the container with id e955bae85e7da1c8ec2044f3a728e3fda615f112fc055a17b0ae5bcf4a1db444
Jan 08 23:39:43 crc kubenswrapper[4945]: I0108 23:39:43.239915 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xsz4z" event={"ID":"51598250-e998-4bd9-8846-179741f8c0b9","Type":"ContainerStarted","Data":"e955bae85e7da1c8ec2044f3a728e3fda615f112fc055a17b0ae5bcf4a1db444"}
Jan 08 23:39:51 crc kubenswrapper[4945]: I0108 23:39:51.327084 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xsz4z" event={"ID":"51598250-e998-4bd9-8846-179741f8c0b9","Type":"ContainerStarted","Data":"a23c2fb78905e6eefe47c919eaab8d235a726c00827e870bd1d89890f26ddd78"}
Jan 08 23:39:51 crc kubenswrapper[4945]: I0108 23:39:51.359646 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-xsz4z" podStartSLOduration=1.9296227080000001 podStartE2EDuration="9.359619183s" podCreationTimestamp="2026-01-08 23:39:42 +0000 UTC" firstStartedPulling="2026-01-08 23:39:43.079266872 +0000 UTC m=+1453.390425818" lastFinishedPulling="2026-01-08 23:39:50.509263347 +0000 UTC m=+1460.820422293" observedRunningTime="2026-01-08 23:39:51.34953417 +0000 UTC m=+1461.660693136" watchObservedRunningTime="2026-01-08 23:39:51.359619183 +0000 UTC m=+1461.670778139"
Jan 08 23:40:04 crc kubenswrapper[4945]: I0108 23:40:04.905706 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 08 23:40:06 crc kubenswrapper[4945]: I0108 23:40:06.486574 4945 generic.go:334] "Generic (PLEG): container finished" podID="51598250-e998-4bd9-8846-179741f8c0b9" containerID="a23c2fb78905e6eefe47c919eaab8d235a726c00827e870bd1d89890f26ddd78" exitCode=0
Jan 08 23:40:06 crc kubenswrapper[4945]: I0108 23:40:06.486679 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xsz4z" event={"ID":"51598250-e998-4bd9-8846-179741f8c0b9","Type":"ContainerDied","Data":"a23c2fb78905e6eefe47c919eaab8d235a726c00827e870bd1d89890f26ddd78"}
Jan 08 23:40:07 crc kubenswrapper[4945]: I0108 23:40:07.894796 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xsz4z"
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.075372 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttssh\" (UniqueName: \"kubernetes.io/projected/51598250-e998-4bd9-8846-179741f8c0b9-kube-api-access-ttssh\") pod \"51598250-e998-4bd9-8846-179741f8c0b9\" (UID: \"51598250-e998-4bd9-8846-179741f8c0b9\") "
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.075889 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-combined-ca-bundle\") pod \"51598250-e998-4bd9-8846-179741f8c0b9\" (UID: \"51598250-e998-4bd9-8846-179741f8c0b9\") "
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.076452 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-scripts\") pod \"51598250-e998-4bd9-8846-179741f8c0b9\" (UID: \"51598250-e998-4bd9-8846-179741f8c0b9\") "
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.076982 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-config-data\") pod \"51598250-e998-4bd9-8846-179741f8c0b9\" (UID: \"51598250-e998-4bd9-8846-179741f8c0b9\") "
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.082090 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-scripts" (OuterVolumeSpecName: "scripts") pod "51598250-e998-4bd9-8846-179741f8c0b9" (UID: "51598250-e998-4bd9-8846-179741f8c0b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.084907 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51598250-e998-4bd9-8846-179741f8c0b9-kube-api-access-ttssh" (OuterVolumeSpecName: "kube-api-access-ttssh") pod "51598250-e998-4bd9-8846-179741f8c0b9" (UID: "51598250-e998-4bd9-8846-179741f8c0b9"). InnerVolumeSpecName "kube-api-access-ttssh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.105411 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-config-data" (OuterVolumeSpecName: "config-data") pod "51598250-e998-4bd9-8846-179741f8c0b9" (UID: "51598250-e998-4bd9-8846-179741f8c0b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.118090 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51598250-e998-4bd9-8846-179741f8c0b9" (UID: "51598250-e998-4bd9-8846-179741f8c0b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.178856 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttssh\" (UniqueName: \"kubernetes.io/projected/51598250-e998-4bd9-8846-179741f8c0b9-kube-api-access-ttssh\") on node \"crc\" DevicePath \"\""
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.178888 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.178898 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-scripts\") on node \"crc\" DevicePath \"\""
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.178909 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51598250-e998-4bd9-8846-179741f8c0b9-config-data\") on node \"crc\" DevicePath \"\""
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.511932 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-xsz4z" event={"ID":"51598250-e998-4bd9-8846-179741f8c0b9","Type":"ContainerDied","Data":"e955bae85e7da1c8ec2044f3a728e3fda615f112fc055a17b0ae5bcf4a1db444"}
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.511978 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e955bae85e7da1c8ec2044f3a728e3fda615f112fc055a17b0ae5bcf4a1db444"
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.512034 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-xsz4z"
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.617022 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 08 23:40:08 crc kubenswrapper[4945]: E0108 23:40:08.617650 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51598250-e998-4bd9-8846-179741f8c0b9" containerName="nova-cell0-conductor-db-sync"
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.617678 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="51598250-e998-4bd9-8846-179741f8c0b9" containerName="nova-cell0-conductor-db-sync"
Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.618326 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="51598250-e998-4bd9-8846-179741f8c0b9" containerName="nova-cell0-conductor-db-sync"
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.621415 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.622736 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fnqwh" Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.636532 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.790371 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37125f43-8fb6-4625-a260-8d43cdbe167a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"37125f43-8fb6-4625-a260-8d43cdbe167a\") " pod="openstack/nova-cell0-conductor-0" Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.791302 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndqg2\" (UniqueName: \"kubernetes.io/projected/37125f43-8fb6-4625-a260-8d43cdbe167a-kube-api-access-ndqg2\") pod \"nova-cell0-conductor-0\" (UID: \"37125f43-8fb6-4625-a260-8d43cdbe167a\") " pod="openstack/nova-cell0-conductor-0" Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.791476 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37125f43-8fb6-4625-a260-8d43cdbe167a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"37125f43-8fb6-4625-a260-8d43cdbe167a\") " pod="openstack/nova-cell0-conductor-0" Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.893692 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37125f43-8fb6-4625-a260-8d43cdbe167a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"37125f43-8fb6-4625-a260-8d43cdbe167a\") " pod="openstack/nova-cell0-conductor-0" Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.893813 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37125f43-8fb6-4625-a260-8d43cdbe167a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"37125f43-8fb6-4625-a260-8d43cdbe167a\") " pod="openstack/nova-cell0-conductor-0" Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.893902 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndqg2\" (UniqueName: \"kubernetes.io/projected/37125f43-8fb6-4625-a260-8d43cdbe167a-kube-api-access-ndqg2\") pod \"nova-cell0-conductor-0\" (UID: \"37125f43-8fb6-4625-a260-8d43cdbe167a\") " pod="openstack/nova-cell0-conductor-0" Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.900159 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37125f43-8fb6-4625-a260-8d43cdbe167a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"37125f43-8fb6-4625-a260-8d43cdbe167a\") " pod="openstack/nova-cell0-conductor-0" Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.900319 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37125f43-8fb6-4625-a260-8d43cdbe167a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"37125f43-8fb6-4625-a260-8d43cdbe167a\") " pod="openstack/nova-cell0-conductor-0" Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.915472 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndqg2\" (UniqueName: \"kubernetes.io/projected/37125f43-8fb6-4625-a260-8d43cdbe167a-kube-api-access-ndqg2\") pod \"nova-cell0-conductor-0\" (UID: \"37125f43-8fb6-4625-a260-8d43cdbe167a\") " pod="openstack/nova-cell0-conductor-0" Jan 08 23:40:08 crc kubenswrapper[4945]: I0108 23:40:08.966949 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 08 23:40:09 crc kubenswrapper[4945]: I0108 23:40:09.016150 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 08 23:40:09 crc kubenswrapper[4945]: I0108 23:40:09.016844 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="84d4963b-6a9f-4053-8944-d1c7e61256b9" containerName="kube-state-metrics" containerID="cri-o://5aa019d5911886256f9870cf2956bb8d687d86a5b25cb9022b681160e0cca5f6" gracePeriod=30 Jan 08 23:40:09 crc kubenswrapper[4945]: I0108 23:40:09.466764 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 08 23:40:09 crc kubenswrapper[4945]: W0108 23:40:09.466958 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37125f43_8fb6_4625_a260_8d43cdbe167a.slice/crio-38f6edcfeff92d06f3873899b55e6209e9e1cfe81f7f395e7ccbe1da1f8cea93 WatchSource:0}: Error finding container 38f6edcfeff92d06f3873899b55e6209e9e1cfe81f7f395e7ccbe1da1f8cea93: Status 404 returned error can't find the container with id 38f6edcfeff92d06f3873899b55e6209e9e1cfe81f7f395e7ccbe1da1f8cea93 Jan 08 23:40:09 crc kubenswrapper[4945]: I0108 23:40:09.479480 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 08 23:40:09 crc kubenswrapper[4945]: I0108 23:40:09.527626 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"37125f43-8fb6-4625-a260-8d43cdbe167a","Type":"ContainerStarted","Data":"38f6edcfeff92d06f3873899b55e6209e9e1cfe81f7f395e7ccbe1da1f8cea93"} Jan 08 23:40:09 crc kubenswrapper[4945]: I0108 23:40:09.531663 4945 generic.go:334] "Generic (PLEG): container finished" podID="84d4963b-6a9f-4053-8944-d1c7e61256b9" containerID="5aa019d5911886256f9870cf2956bb8d687d86a5b25cb9022b681160e0cca5f6" exitCode=2 Jan 08 23:40:09 crc kubenswrapper[4945]: I0108 23:40:09.531737 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"84d4963b-6a9f-4053-8944-d1c7e61256b9","Type":"ContainerDied","Data":"5aa019d5911886256f9870cf2956bb8d687d86a5b25cb9022b681160e0cca5f6"} Jan 08 23:40:09 crc kubenswrapper[4945]: I0108 23:40:09.531793 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"84d4963b-6a9f-4053-8944-d1c7e61256b9","Type":"ContainerDied","Data":"3f5c065693a6416884e3a2e2438b81e97773cd1dc46ebf24c1dca8b02a731a27"} Jan 08 23:40:09 crc kubenswrapper[4945]: I0108 23:40:09.531818 4945 scope.go:117] "RemoveContainer" containerID="5aa019d5911886256f9870cf2956bb8d687d86a5b25cb9022b681160e0cca5f6" Jan 08 23:40:09 crc kubenswrapper[4945]: I0108 23:40:09.532292 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 08 23:40:09 crc kubenswrapper[4945]: I0108 23:40:09.586500 4945 scope.go:117] "RemoveContainer" containerID="5aa019d5911886256f9870cf2956bb8d687d86a5b25cb9022b681160e0cca5f6" Jan 08 23:40:09 crc kubenswrapper[4945]: E0108 23:40:09.587282 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aa019d5911886256f9870cf2956bb8d687d86a5b25cb9022b681160e0cca5f6\": container with ID starting with 5aa019d5911886256f9870cf2956bb8d687d86a5b25cb9022b681160e0cca5f6 not found: ID does not exist" containerID="5aa019d5911886256f9870cf2956bb8d687d86a5b25cb9022b681160e0cca5f6" Jan 08 23:40:09 crc kubenswrapper[4945]: I0108 23:40:09.587315 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aa019d5911886256f9870cf2956bb8d687d86a5b25cb9022b681160e0cca5f6"} err="failed to get container status \"5aa019d5911886256f9870cf2956bb8d687d86a5b25cb9022b681160e0cca5f6\": rpc error: code = NotFound desc = could not find container \"5aa019d5911886256f9870cf2956bb8d687d86a5b25cb9022b681160e0cca5f6\": container with ID starting with 5aa019d5911886256f9870cf2956bb8d687d86a5b25cb9022b681160e0cca5f6 not found: ID does not exist" Jan 08 23:40:09 crc kubenswrapper[4945]: I0108 23:40:09.612211 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj9lj\" (UniqueName: \"kubernetes.io/projected/84d4963b-6a9f-4053-8944-d1c7e61256b9-kube-api-access-nj9lj\") pod \"84d4963b-6a9f-4053-8944-d1c7e61256b9\" (UID: \"84d4963b-6a9f-4053-8944-d1c7e61256b9\") " Jan 08 23:40:09 crc kubenswrapper[4945]: I0108 23:40:09.618964 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84d4963b-6a9f-4053-8944-d1c7e61256b9-kube-api-access-nj9lj" (OuterVolumeSpecName: "kube-api-access-nj9lj") pod "84d4963b-6a9f-4053-8944-d1c7e61256b9" (UID: "84d4963b-6a9f-4053-8944-d1c7e61256b9"). InnerVolumeSpecName "kube-api-access-nj9lj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:40:09 crc kubenswrapper[4945]: I0108 23:40:09.715243 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj9lj\" (UniqueName: \"kubernetes.io/projected/84d4963b-6a9f-4053-8944-d1c7e61256b9-kube-api-access-nj9lj\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:09 crc kubenswrapper[4945]: I0108 23:40:09.920398 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 08 23:40:09 crc kubenswrapper[4945]: I0108 23:40:09.971421 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.016428 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84d4963b-6a9f-4053-8944-d1c7e61256b9" path="/var/lib/kubelet/pods/84d4963b-6a9f-4053-8944-d1c7e61256b9/volumes" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.017120 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 08 23:40:10 crc kubenswrapper[4945]: E0108 23:40:10.017530 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d4963b-6a9f-4053-8944-d1c7e61256b9" containerName="kube-state-metrics" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.017554 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d4963b-6a9f-4053-8944-d1c7e61256b9" containerName="kube-state-metrics" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.017906 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d4963b-6a9f-4053-8944-d1c7e61256b9" containerName="kube-state-metrics" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.019431 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.020091 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.022926 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.022951 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.127965 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6nlx\" (UniqueName: \"kubernetes.io/projected/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-api-access-x6nlx\") pod \"kube-state-metrics-0\" (UID: \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\") " pod="openstack/kube-state-metrics-0" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.128450 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\") " pod="openstack/kube-state-metrics-0" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.128590 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\") " 
pod="openstack/kube-state-metrics-0" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.128656 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\") " pod="openstack/kube-state-metrics-0" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.230972 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6nlx\" (UniqueName: \"kubernetes.io/projected/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-api-access-x6nlx\") pod \"kube-state-metrics-0\" (UID: \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\") " pod="openstack/kube-state-metrics-0" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.231126 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\") " pod="openstack/kube-state-metrics-0" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.232189 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\") " pod="openstack/kube-state-metrics-0" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.232329 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\") " pod="openstack/kube-state-metrics-0" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.238058 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\") " pod="openstack/kube-state-metrics-0" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.248668 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\") " pod="openstack/kube-state-metrics-0" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.249307 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\") " pod="openstack/kube-state-metrics-0" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.249414 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6nlx\" (UniqueName: \"kubernetes.io/projected/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-api-access-x6nlx\") pod \"kube-state-metrics-0\" (UID: \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\") " pod="openstack/kube-state-metrics-0" Jan 
08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.344874 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.545042 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"37125f43-8fb6-4625-a260-8d43cdbe167a","Type":"ContainerStarted","Data":"a5a4dac963c906e9eb3223e2b1f096d4341464484bbc31fe53f0dde93c50bf12"} Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.545135 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.564892 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.564869248 podStartE2EDuration="2.564869248s" podCreationTimestamp="2026-01-08 23:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:40:10.560727609 +0000 UTC m=+1480.871886565" watchObservedRunningTime="2026-01-08 23:40:10.564869248 +0000 UTC m=+1480.876028184" Jan 08 23:40:10 crc kubenswrapper[4945]: I0108 23:40:10.805975 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 08 23:40:10 crc kubenswrapper[4945]: W0108 23:40:10.806862 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b4a4044_c9b6_49c9_98ed_446af4a3fe1f.slice/crio-636798b16a7f8c3cac6ee9996c87f0b424866c21bc5ca5bf4e07577e3c9209b7 WatchSource:0}: Error finding container 636798b16a7f8c3cac6ee9996c87f0b424866c21bc5ca5bf4e07577e3c9209b7: Status 404 returned error can't find the container with id 636798b16a7f8c3cac6ee9996c87f0b424866c21bc5ca5bf4e07577e3c9209b7 Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.212900 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.213254 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerName="ceilometer-central-agent" containerID="cri-o://4bde91cf3e0a5525ca3e9e4af364a4f91188868c857dd11824c057a64b0297ff" gracePeriod=30 Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.213625 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerName="ceilometer-notification-agent" containerID="cri-o://d2b6c5d8efa9daf4218845be10f6933e161c5993a18dc8706b3a193fb0e55594" gracePeriod=30 Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.213653 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerName="sg-core" containerID="cri-o://4204131ac4aef0f50da97d12b9abfc3151980ef915332a14fe5bec16cf447416" gracePeriod=30 Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.213819 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerName="proxy-httpd" containerID="cri-o://98713816a2189f4f905bf3a8893efb2b59bccb028cc108368b58a4f47af1395f" gracePeriod=30 Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.255830 4945 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p2zcr"] Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.258252 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2zcr" Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.282488 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2zcr"] Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.452595 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-422f5\" (UniqueName: \"kubernetes.io/projected/8dab4475-9aef-40ea-8c66-6f14c30e545c-kube-api-access-422f5\") pod \"redhat-marketplace-p2zcr\" (UID: \"8dab4475-9aef-40ea-8c66-6f14c30e545c\") " pod="openshift-marketplace/redhat-marketplace-p2zcr" Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.453005 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dab4475-9aef-40ea-8c66-6f14c30e545c-catalog-content\") pod \"redhat-marketplace-p2zcr\" (UID: \"8dab4475-9aef-40ea-8c66-6f14c30e545c\") " pod="openshift-marketplace/redhat-marketplace-p2zcr" Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.453033 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dab4475-9aef-40ea-8c66-6f14c30e545c-utilities\") pod \"redhat-marketplace-p2zcr\" (UID: \"8dab4475-9aef-40ea-8c66-6f14c30e545c\") " pod="openshift-marketplace/redhat-marketplace-p2zcr" Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.554915 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-422f5\" (UniqueName: \"kubernetes.io/projected/8dab4475-9aef-40ea-8c66-6f14c30e545c-kube-api-access-422f5\") pod \"redhat-marketplace-p2zcr\" (UID: \"8dab4475-9aef-40ea-8c66-6f14c30e545c\") " pod="openshift-marketplace/redhat-marketplace-p2zcr" Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.555004 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dab4475-9aef-40ea-8c66-6f14c30e545c-catalog-content\") pod \"redhat-marketplace-p2zcr\" (UID: \"8dab4475-9aef-40ea-8c66-6f14c30e545c\") " pod="openshift-marketplace/redhat-marketplace-p2zcr" Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.555032 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dab4475-9aef-40ea-8c66-6f14c30e545c-utilities\") pod \"redhat-marketplace-p2zcr\" (UID: \"8dab4475-9aef-40ea-8c66-6f14c30e545c\") " pod="openshift-marketplace/redhat-marketplace-p2zcr" Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.555568 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dab4475-9aef-40ea-8c66-6f14c30e545c-utilities\") pod \"redhat-marketplace-p2zcr\" (UID: \"8dab4475-9aef-40ea-8c66-6f14c30e545c\") " pod="openshift-marketplace/redhat-marketplace-p2zcr" Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.556166 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dab4475-9aef-40ea-8c66-6f14c30e545c-catalog-content\") pod \"redhat-marketplace-p2zcr\" (UID: 
\"8dab4475-9aef-40ea-8c66-6f14c30e545c\") " pod="openshift-marketplace/redhat-marketplace-p2zcr" Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.560227 4945 generic.go:334] "Generic (PLEG): container finished" podID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerID="4204131ac4aef0f50da97d12b9abfc3151980ef915332a14fe5bec16cf447416" exitCode=2 Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.560403 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2","Type":"ContainerDied","Data":"4204131ac4aef0f50da97d12b9abfc3151980ef915332a14fe5bec16cf447416"} Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.562354 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f","Type":"ContainerStarted","Data":"a01a9cdaf1e6dcbf50fb7c8fdd46ee8450d6a7d635fd11e06d3b74c301d9e2af"} Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.562378 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f","Type":"ContainerStarted","Data":"636798b16a7f8c3cac6ee9996c87f0b424866c21bc5ca5bf4e07577e3c9209b7"} Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.576725 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-422f5\" (UniqueName: \"kubernetes.io/projected/8dab4475-9aef-40ea-8c66-6f14c30e545c-kube-api-access-422f5\") pod \"redhat-marketplace-p2zcr\" (UID: \"8dab4475-9aef-40ea-8c66-6f14c30e545c\") " pod="openshift-marketplace/redhat-marketplace-p2zcr" Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.588614 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.229499734 podStartE2EDuration="2.588593976s" podCreationTimestamp="2026-01-08 23:40:09 +0000 UTC" firstStartedPulling="2026-01-08 23:40:10.810613743 +0000 UTC m=+1481.121772689" lastFinishedPulling="2026-01-08 23:40:11.169707995 +0000 UTC m=+1481.480866931" observedRunningTime="2026-01-08 23:40:11.580769708 +0000 UTC m=+1481.891928654" watchObservedRunningTime="2026-01-08 23:40:11.588593976 +0000 UTC m=+1481.899752912" Jan 08 23:40:11 crc kubenswrapper[4945]: I0108 23:40:11.626921 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2zcr" Jan 08 23:40:12 crc kubenswrapper[4945]: I0108 23:40:12.173947 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2zcr"] Jan 08 23:40:12 crc kubenswrapper[4945]: I0108 23:40:12.571392 4945 generic.go:334] "Generic (PLEG): container finished" podID="8dab4475-9aef-40ea-8c66-6f14c30e545c" containerID="a7cebbf61d952cff91f5ee6588b1b635d321e98cce2e1a6b15642a75695206b3" exitCode=0 Jan 08 23:40:12 crc kubenswrapper[4945]: I0108 23:40:12.572524 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2zcr" event={"ID":"8dab4475-9aef-40ea-8c66-6f14c30e545c","Type":"ContainerDied","Data":"a7cebbf61d952cff91f5ee6588b1b635d321e98cce2e1a6b15642a75695206b3"} Jan 08 23:40:12 crc kubenswrapper[4945]: I0108 23:40:12.572547 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2zcr" event={"ID":"8dab4475-9aef-40ea-8c66-6f14c30e545c","Type":"ContainerStarted","Data":"9a9b89a7cebd662ba02b5f48190722ec9a8026f216e2ae24a06999a658c16a66"} Jan 08 23:40:12 crc kubenswrapper[4945]: I0108 23:40:12.577976 4945 generic.go:334] "Generic (PLEG): container finished" podID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerID="98713816a2189f4f905bf3a8893efb2b59bccb028cc108368b58a4f47af1395f" exitCode=0 Jan 08 23:40:12 crc kubenswrapper[4945]: I0108 23:40:12.578019 4945 generic.go:334] "Generic (PLEG): container finished" podID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerID="4bde91cf3e0a5525ca3e9e4af364a4f91188868c857dd11824c057a64b0297ff" exitCode=0 Jan 08 23:40:12 crc kubenswrapper[4945]: I0108 23:40:12.578020 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2","Type":"ContainerDied","Data":"98713816a2189f4f905bf3a8893efb2b59bccb028cc108368b58a4f47af1395f"} Jan 08 23:40:12 crc kubenswrapper[4945]: I0108 23:40:12.578058 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2","Type":"ContainerDied","Data":"4bde91cf3e0a5525ca3e9e4af364a4f91188868c857dd11824c057a64b0297ff"} Jan 08 23:40:12 crc kubenswrapper[4945]: I0108 23:40:12.578204 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 08 23:40:13 crc kubenswrapper[4945]: I0108 23:40:13.578349 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:40:13 crc kubenswrapper[4945]: I0108 23:40:13.578665 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.436186 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.525986 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-config-data\") pod \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.526055 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-scripts\") pod \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.526114 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmwf6\" (UniqueName: \"kubernetes.io/projected/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-kube-api-access-tmwf6\") pod \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.526156 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-combined-ca-bundle\") pod \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.526285 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-run-httpd\") pod \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.526315 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-sg-core-conf-yaml\") pod \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.526349 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-log-httpd\") pod \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\" (UID: \"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2\") " Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.526946 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" (UID: "6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.527307 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" (UID: "6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.532165 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-kube-api-access-tmwf6" (OuterVolumeSpecName: "kube-api-access-tmwf6") pod "6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" (UID: "6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2"). InnerVolumeSpecName "kube-api-access-tmwf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.538217 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-scripts" (OuterVolumeSpecName: "scripts") pod "6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" (UID: "6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.591276 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" (UID: "6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.602856 4945 generic.go:334] "Generic (PLEG): container finished" podID="8dab4475-9aef-40ea-8c66-6f14c30e545c" containerID="64c403401e059d748adcee7457210a568b6d3ebcdb38757511cf8b6054b7456b" exitCode=0 Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.602949 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2zcr" event={"ID":"8dab4475-9aef-40ea-8c66-6f14c30e545c","Type":"ContainerDied","Data":"64c403401e059d748adcee7457210a568b6d3ebcdb38757511cf8b6054b7456b"} Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.609041 4945 generic.go:334] "Generic (PLEG): container finished" podID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerID="d2b6c5d8efa9daf4218845be10f6933e161c5993a18dc8706b3a193fb0e55594" exitCode=0 Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.609084 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2","Type":"ContainerDied","Data":"d2b6c5d8efa9daf4218845be10f6933e161c5993a18dc8706b3a193fb0e55594"} Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.609110 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2","Type":"ContainerDied","Data":"527ede25cb4345a3762fc179233b3ceb4987ece4683e1a054d96f9a95a0f105c"} Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.609129 4945 scope.go:117] "RemoveContainer" containerID="98713816a2189f4f905bf3a8893efb2b59bccb028cc108368b58a4f47af1395f" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.609191 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.628509 4945 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.628539 4945 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.628550 4945 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.628558 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.628568 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmwf6\" (UniqueName: \"kubernetes.io/projected/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-kube-api-access-tmwf6\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.659303 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" (UID: "6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.666312 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-config-data" (OuterVolumeSpecName: "config-data") pod "6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" (UID: "6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.730331 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.730358 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.743048 4945 scope.go:117] "RemoveContainer" containerID="4204131ac4aef0f50da97d12b9abfc3151980ef915332a14fe5bec16cf447416" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.766494 4945 scope.go:117] "RemoveContainer" containerID="d2b6c5d8efa9daf4218845be10f6933e161c5993a18dc8706b3a193fb0e55594" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.790865 4945 scope.go:117] "RemoveContainer" containerID="4bde91cf3e0a5525ca3e9e4af364a4f91188868c857dd11824c057a64b0297ff" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.810513 4945 scope.go:117] "RemoveContainer" containerID="98713816a2189f4f905bf3a8893efb2b59bccb028cc108368b58a4f47af1395f" Jan 08 23:40:14 crc kubenswrapper[4945]: E0108 23:40:14.810835 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98713816a2189f4f905bf3a8893efb2b59bccb028cc108368b58a4f47af1395f\": container with ID starting with 98713816a2189f4f905bf3a8893efb2b59bccb028cc108368b58a4f47af1395f not found: ID does not exist" containerID="98713816a2189f4f905bf3a8893efb2b59bccb028cc108368b58a4f47af1395f" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.810869 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98713816a2189f4f905bf3a8893efb2b59bccb028cc108368b58a4f47af1395f"} err="failed to get container status \"98713816a2189f4f905bf3a8893efb2b59bccb028cc108368b58a4f47af1395f\": rpc error: code = NotFound desc = could not find container \"98713816a2189f4f905bf3a8893efb2b59bccb028cc108368b58a4f47af1395f\": container with ID starting with 98713816a2189f4f905bf3a8893efb2b59bccb028cc108368b58a4f47af1395f not found: ID does not exist" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.810889 4945 scope.go:117] "RemoveContainer" containerID="4204131ac4aef0f50da97d12b9abfc3151980ef915332a14fe5bec16cf447416" Jan 08 23:40:14 crc kubenswrapper[4945]: E0108 23:40:14.811090 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4204131ac4aef0f50da97d12b9abfc3151980ef915332a14fe5bec16cf447416\": container with ID starting with 4204131ac4aef0f50da97d12b9abfc3151980ef915332a14fe5bec16cf447416 not found: ID does not exist" containerID="4204131ac4aef0f50da97d12b9abfc3151980ef915332a14fe5bec16cf447416" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.811106 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4204131ac4aef0f50da97d12b9abfc3151980ef915332a14fe5bec16cf447416"} err="failed to get container status \"4204131ac4aef0f50da97d12b9abfc3151980ef915332a14fe5bec16cf447416\": rpc error: code = NotFound desc = could not find container \"4204131ac4aef0f50da97d12b9abfc3151980ef915332a14fe5bec16cf447416\": container with ID starting with 
4204131ac4aef0f50da97d12b9abfc3151980ef915332a14fe5bec16cf447416 not found: ID does not exist" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.811120 4945 scope.go:117] "RemoveContainer" containerID="d2b6c5d8efa9daf4218845be10f6933e161c5993a18dc8706b3a193fb0e55594" Jan 08 23:40:14 crc kubenswrapper[4945]: E0108 23:40:14.811284 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2b6c5d8efa9daf4218845be10f6933e161c5993a18dc8706b3a193fb0e55594\": container with ID starting with d2b6c5d8efa9daf4218845be10f6933e161c5993a18dc8706b3a193fb0e55594 not found: ID does not exist" containerID="d2b6c5d8efa9daf4218845be10f6933e161c5993a18dc8706b3a193fb0e55594" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.811299 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2b6c5d8efa9daf4218845be10f6933e161c5993a18dc8706b3a193fb0e55594"} err="failed to get container status \"d2b6c5d8efa9daf4218845be10f6933e161c5993a18dc8706b3a193fb0e55594\": rpc error: code = NotFound desc = could not find container \"d2b6c5d8efa9daf4218845be10f6933e161c5993a18dc8706b3a193fb0e55594\": container with ID starting with d2b6c5d8efa9daf4218845be10f6933e161c5993a18dc8706b3a193fb0e55594 not found: ID does not exist" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.811310 4945 scope.go:117] "RemoveContainer" containerID="4bde91cf3e0a5525ca3e9e4af364a4f91188868c857dd11824c057a64b0297ff" Jan 08 23:40:14 crc kubenswrapper[4945]: E0108 23:40:14.811474 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bde91cf3e0a5525ca3e9e4af364a4f91188868c857dd11824c057a64b0297ff\": container with ID starting with 4bde91cf3e0a5525ca3e9e4af364a4f91188868c857dd11824c057a64b0297ff not found: ID does not exist" containerID="4bde91cf3e0a5525ca3e9e4af364a4f91188868c857dd11824c057a64b0297ff" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.811498 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bde91cf3e0a5525ca3e9e4af364a4f91188868c857dd11824c057a64b0297ff"} err="failed to get container status \"4bde91cf3e0a5525ca3e9e4af364a4f91188868c857dd11824c057a64b0297ff\": rpc error: code = NotFound desc = could not find container \"4bde91cf3e0a5525ca3e9e4af364a4f91188868c857dd11824c057a64b0297ff\": container with ID starting with 4bde91cf3e0a5525ca3e9e4af364a4f91188868c857dd11824c057a64b0297ff not found: ID does not exist" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.942797 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.951917 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.961797 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:40:14 crc kubenswrapper[4945]: E0108 23:40:14.962186 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerName="sg-core" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.962217 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerName="sg-core" Jan 08 23:40:14 crc kubenswrapper[4945]: E0108 23:40:14.962249 4945 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerName="ceilometer-central-agent" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.962257 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerName="ceilometer-central-agent" Jan 08 23:40:14 crc kubenswrapper[4945]: E0108 23:40:14.962270 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerName="proxy-httpd" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.962276 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerName="proxy-httpd" Jan 08 23:40:14 crc kubenswrapper[4945]: E0108 23:40:14.962301 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerName="ceilometer-notification-agent" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.962308 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerName="ceilometer-notification-agent" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.962482 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerName="ceilometer-central-agent" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.962500 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerName="ceilometer-notification-agent" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.962519 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerName="proxy-httpd" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.962532 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" containerName="sg-core" Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.964199 4945 util.go:30] "No sandbox for pod can be found. 
Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.966229 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.970524 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.970616 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 08 23:40:14 crc kubenswrapper[4945]: I0108 23:40:14.986052 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.138880 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4859737-48c6-4939-a3d3-68b93075c72d-log-httpd\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.138968 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdxhc\" (UniqueName: \"kubernetes.io/projected/f4859737-48c6-4939-a3d3-68b93075c72d-kube-api-access-fdxhc\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.139378 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4859737-48c6-4939-a3d3-68b93075c72d-run-httpd\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.139471 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-config-data\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.139717 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-scripts\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.139868 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.139948 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.140200 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.242865 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdxhc\" (UniqueName: \"kubernetes.io/projected/f4859737-48c6-4939-a3d3-68b93075c72d-kube-api-access-fdxhc\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.243053 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4859737-48c6-4939-a3d3-68b93075c72d-run-httpd\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.243120 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-config-data\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.243573 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4859737-48c6-4939-a3d3-68b93075c72d-run-httpd\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.244536 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-scripts\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.244618 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.244681 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.244777 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.244831 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4859737-48c6-4939-a3d3-68b93075c72d-log-httpd\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.245414 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4859737-48c6-4939-a3d3-68b93075c72d-log-httpd\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.250572 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-scripts\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.251026 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.252923 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-config-data\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.255024 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.260964 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.263761 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdxhc\" (UniqueName: \"kubernetes.io/projected/f4859737-48c6-4939-a3d3-68b93075c72d-kube-api-access-fdxhc\") pod \"ceilometer-0\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.314327 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.582127 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 08 23:40:15 crc kubenswrapper[4945]: W0108 23:40:15.588386 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4859737_48c6_4939_a3d3_68b93075c72d.slice/crio-6e566e558630dfb3f88dbf56f8a11b92b2c028436475318eb8f8cee1eb25f9b0 WatchSource:0}: Error finding container 6e566e558630dfb3f88dbf56f8a11b92b2c028436475318eb8f8cee1eb25f9b0: Status 404 returned error can't find the container with id 6e566e558630dfb3f88dbf56f8a11b92b2c028436475318eb8f8cee1eb25f9b0
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.617980 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4859737-48c6-4939-a3d3-68b93075c72d","Type":"ContainerStarted","Data":"6e566e558630dfb3f88dbf56f8a11b92b2c028436475318eb8f8cee1eb25f9b0"}
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.620604 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2zcr" event={"ID":"8dab4475-9aef-40ea-8c66-6f14c30e545c","Type":"ContainerStarted","Data":"ce78099f6b76ee58269e6368ddf967c6a73172acec8fdcb8b43ccdba69ca8a5d"}
Jan 08 23:40:15 crc kubenswrapper[4945]: I0108 23:40:15.646890 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p2zcr" podStartSLOduration=2.168855999 podStartE2EDuration="4.646869745s" podCreationTimestamp="2026-01-08 23:40:11 +0000 UTC" firstStartedPulling="2026-01-08 23:40:12.57390421 +0000 UTC m=+1482.885063156" lastFinishedPulling="2026-01-08 23:40:15.051917956 +0000 UTC m=+1485.363076902" observedRunningTime="2026-01-08 23:40:15.644457697 +0000 UTC m=+1485.955616663" watchObservedRunningTime="2026-01-08 23:40:15.646869745 +0000 UTC m=+1485.958028691"
Jan 08 23:40:16 crc kubenswrapper[4945]: I0108 23:40:16.011498 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2" path="/var/lib/kubelet/pods/6aaff1fb-a539-4e1d-8634-5d6ae7d90bd2/volumes"
Jan 08 23:40:16 crc kubenswrapper[4945]: I0108 23:40:16.638679 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4859737-48c6-4939-a3d3-68b93075c72d","Type":"ContainerStarted","Data":"333a69d19959184c80b55ee9b0fb39434edc6807e1e85169420316532ee164bc"}
Jan 08 23:40:17 crc kubenswrapper[4945]: I0108 23:40:17.653224 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4859737-48c6-4939-a3d3-68b93075c72d","Type":"ContainerStarted","Data":"324efe239b9eea521b48859d9709d670b15cb4a61d082194a3b84097c7a4c315"}
Jan 08 23:40:18 crc kubenswrapper[4945]: I0108 23:40:18.662709 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4859737-48c6-4939-a3d3-68b93075c72d","Type":"ContainerStarted","Data":"1d76c40a4aa55f44da2647e181c0a6ad46cab9142998f7007db19a58576c12a5"}
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.002481 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.514432 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-lrvhj"]
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.515937 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lrvhj"
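The pod_startup_latency_tracker numbers above can be reproduced by hand: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to subtract the image-pull window (lastFinishedPulling minus firstStartedPulling) from it. A tiny Go check of the redhat-marketplace-p2zcr entry, using seconds past 23:40:00 read off the log:

```go
package main

import "fmt"

func main() {
	// Seconds past 23:40:00, taken from the redhat-marketplace-p2zcr entry.
	const (
		podCreation          = 11.0
		firstStartedPulling  = 12.573904210
		lastFinishedPulling  = 15.051917956
		watchObservedRunning = 15.646869745
	)
	e2e := watchObservedRunning - podCreation                // podStartE2EDuration
	slo := e2e - (lastFinishedPulling - firstStartedPulling) // minus image-pull window
	fmt.Printf("podStartE2EDuration=%.9fs podStartSLOduration=%.9f\n", e2e, slo)
	// Expected (up to float rounding): 4.646869745s and 2.168855999,
	// matching the log line above.
}
```

The same arithmetic also reproduces the ceilometer-0 entry further down (6.821835209s total, 3.080381624 after subtracting its 3.741 s pull window).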
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lrvhj" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.520143 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.520224 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.537495 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lrvhj"] Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.643265 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwbsl\" (UniqueName: \"kubernetes.io/projected/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-kube-api-access-kwbsl\") pod \"nova-cell0-cell-mapping-lrvhj\" (UID: \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\") " pod="openstack/nova-cell0-cell-mapping-lrvhj" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.643733 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lrvhj\" (UID: \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\") " pod="openstack/nova-cell0-cell-mapping-lrvhj" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.643788 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-scripts\") pod \"nova-cell0-cell-mapping-lrvhj\" (UID: \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\") " pod="openstack/nova-cell0-cell-mapping-lrvhj" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.643816 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-config-data\") pod \"nova-cell0-cell-mapping-lrvhj\" (UID: \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\") " pod="openstack/nova-cell0-cell-mapping-lrvhj" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.713561 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.720087 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.726121 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.745728 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwbsl\" (UniqueName: \"kubernetes.io/projected/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-kube-api-access-kwbsl\") pod \"nova-cell0-cell-mapping-lrvhj\" (UID: \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\") " pod="openstack/nova-cell0-cell-mapping-lrvhj" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.745826 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lrvhj\" (UID: \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\") " pod="openstack/nova-cell0-cell-mapping-lrvhj" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.745868 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-scripts\") pod \"nova-cell0-cell-mapping-lrvhj\" (UID: \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\") " pod="openstack/nova-cell0-cell-mapping-lrvhj" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.745886 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-config-data\") pod \"nova-cell0-cell-mapping-lrvhj\" (UID: \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\") " pod="openstack/nova-cell0-cell-mapping-lrvhj" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.748078 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.760130 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-scripts\") pod \"nova-cell0-cell-mapping-lrvhj\" (UID: \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\") " pod="openstack/nova-cell0-cell-mapping-lrvhj" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.777081 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.778873 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.781899 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-config-data\") pod \"nova-cell0-cell-mapping-lrvhj\" (UID: \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\") " pod="openstack/nova-cell0-cell-mapping-lrvhj" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.782401 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.782502 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lrvhj\" (UID: \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\") " pod="openstack/nova-cell0-cell-mapping-lrvhj" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.783141 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwbsl\" (UniqueName: \"kubernetes.io/projected/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-kube-api-access-kwbsl\") pod \"nova-cell0-cell-mapping-lrvhj\" (UID: \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\") " pod="openstack/nova-cell0-cell-mapping-lrvhj" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.813407 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.841634 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lrvhj" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.847064 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87\") " pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.847131 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2b74afd-664c-4549-97bf-041b2bc152c8-logs\") pod \"nova-metadata-0\" (UID: \"b2b74afd-664c-4549-97bf-041b2bc152c8\") " pod="openstack/nova-metadata-0" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.847164 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6lx2\" (UniqueName: \"kubernetes.io/projected/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-kube-api-access-m6lx2\") pod \"nova-cell1-novncproxy-0\" (UID: \"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87\") " pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.847197 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87\") " pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.847223 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b2b74afd-664c-4549-97bf-041b2bc152c8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b2b74afd-664c-4549-97bf-041b2bc152c8\") " pod="openstack/nova-metadata-0" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.847283 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9gtw\" (UniqueName: \"kubernetes.io/projected/b2b74afd-664c-4549-97bf-041b2bc152c8-kube-api-access-w9gtw\") pod \"nova-metadata-0\" (UID: \"b2b74afd-664c-4549-97bf-041b2bc152c8\") " pod="openstack/nova-metadata-0" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.847321 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b74afd-664c-4549-97bf-041b2bc152c8-config-data\") pod \"nova-metadata-0\" (UID: \"b2b74afd-664c-4549-97bf-041b2bc152c8\") " pod="openstack/nova-metadata-0" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.871488 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.872725 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.878033 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.889385 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.919662 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.921509 4945 util.go:30] "No sandbox for pod can be found. 
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.927521 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.951418 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2b74afd-664c-4549-97bf-041b2bc152c8-logs\") pod \"nova-metadata-0\" (UID: \"b2b74afd-664c-4549-97bf-041b2bc152c8\") " pod="openstack/nova-metadata-0"
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.954222 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6lx2\" (UniqueName: \"kubernetes.io/projected/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-kube-api-access-m6lx2\") pod \"nova-cell1-novncproxy-0\" (UID: \"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.954548 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2388c4d5-c56c-431e-af61-4294a629c1fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2388c4d5-c56c-431e-af61-4294a629c1fd\") " pod="openstack/nova-scheduler-0"
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.953353 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2b74afd-664c-4549-97bf-041b2bc152c8-logs\") pod \"nova-metadata-0\" (UID: \"b2b74afd-664c-4549-97bf-041b2bc152c8\") " pod="openstack/nova-metadata-0"
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.956182 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.960330 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zngm6\" (UniqueName: \"kubernetes.io/projected/2388c4d5-c56c-431e-af61-4294a629c1fd-kube-api-access-zngm6\") pod \"nova-scheduler-0\" (UID: \"2388c4d5-c56c-431e-af61-4294a629c1fd\") " pod="openstack/nova-scheduler-0"
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.960609 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2b74afd-664c-4549-97bf-041b2bc152c8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b2b74afd-664c-4549-97bf-041b2bc152c8\") " pod="openstack/nova-metadata-0"
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.960727 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2388c4d5-c56c-431e-af61-4294a629c1fd-config-data\") pod \"nova-scheduler-0\" (UID: \"2388c4d5-c56c-431e-af61-4294a629c1fd\") " pod="openstack/nova-scheduler-0"
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.962099 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9gtw\" (UniqueName: \"kubernetes.io/projected/b2b74afd-664c-4549-97bf-041b2bc152c8-kube-api-access-w9gtw\") pod \"nova-metadata-0\" (UID: \"b2b74afd-664c-4549-97bf-041b2bc152c8\") " pod="openstack/nova-metadata-0"
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.962307 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b74afd-664c-4549-97bf-041b2bc152c8-config-data\") pod \"nova-metadata-0\" (UID: \"b2b74afd-664c-4549-97bf-041b2bc152c8\") " pod="openstack/nova-metadata-0"
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.962450 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.966686 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.967547 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.977175 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b74afd-664c-4549-97bf-041b2bc152c8-config-data\") pod \"nova-metadata-0\" (UID: \"b2b74afd-664c-4549-97bf-041b2bc152c8\") " pod="openstack/nova-metadata-0"
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.986055 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2b74afd-664c-4549-97bf-041b2bc152c8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b2b74afd-664c-4549-97bf-041b2bc152c8\") " pod="openstack/nova-metadata-0"
Jan 08 23:40:19 crc kubenswrapper[4945]: I0108 23:40:19.989785 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6lx2\" (UniqueName: \"kubernetes.io/projected/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-kube-api-access-m6lx2\") pod \"nova-cell1-novncproxy-0\" (UID: \"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.011484 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9gtw\" (UniqueName: \"kubernetes.io/projected/b2b74afd-664c-4549-97bf-041b2bc152c8-kube-api-access-w9gtw\") pod \"nova-metadata-0\" (UID: \"b2b74afd-664c-4549-97bf-041b2bc152c8\") " pod="openstack/nova-metadata-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.045474 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.045752 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-wvp9f"]
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.056552 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.062498 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.064478 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2388c4d5-c56c-431e-af61-4294a629c1fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2388c4d5-c56c-431e-af61-4294a629c1fd\") " pod="openstack/nova-scheduler-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.064530 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zngm6\" (UniqueName: \"kubernetes.io/projected/2388c4d5-c56c-431e-af61-4294a629c1fd-kube-api-access-zngm6\") pod \"nova-scheduler-0\" (UID: \"2388c4d5-c56c-431e-af61-4294a629c1fd\") " pod="openstack/nova-scheduler-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.064554 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2388c4d5-c56c-431e-af61-4294a629c1fd-config-data\") pod \"nova-scheduler-0\" (UID: \"2388c4d5-c56c-431e-af61-4294a629c1fd\") " pod="openstack/nova-scheduler-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.064584 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58af371a-afdf-47cc-8ac5-d78a9db4703b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"58af371a-afdf-47cc-8ac5-d78a9db4703b\") " pod="openstack/nova-api-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.064646 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njzl2\" (UniqueName: \"kubernetes.io/projected/58af371a-afdf-47cc-8ac5-d78a9db4703b-kube-api-access-njzl2\") pod \"nova-api-0\" (UID: \"58af371a-afdf-47cc-8ac5-d78a9db4703b\") " pod="openstack/nova-api-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.064663 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58af371a-afdf-47cc-8ac5-d78a9db4703b-logs\") pod \"nova-api-0\" (UID: \"58af371a-afdf-47cc-8ac5-d78a9db4703b\") " pod="openstack/nova-api-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.064693 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58af371a-afdf-47cc-8ac5-d78a9db4703b-config-data\") pod \"nova-api-0\" (UID: \"58af371a-afdf-47cc-8ac5-d78a9db4703b\") " pod="openstack/nova-api-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.070803 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2388c4d5-c56c-431e-af61-4294a629c1fd-config-data\") pod \"nova-scheduler-0\" (UID: \"2388c4d5-c56c-431e-af61-4294a629c1fd\") " pod="openstack/nova-scheduler-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.071726 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2388c4d5-c56c-431e-af61-4294a629c1fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2388c4d5-c56c-431e-af61-4294a629c1fd\") " pod="openstack/nova-scheduler-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.084204 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-wvp9f"]
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.091413 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zngm6\" (UniqueName: \"kubernetes.io/projected/2388c4d5-c56c-431e-af61-4294a629c1fd-kube-api-access-zngm6\") pod \"nova-scheduler-0\" (UID: \"2388c4d5-c56c-431e-af61-4294a629c1fd\") " pod="openstack/nova-scheduler-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.157206 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.169354 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njzl2\" (UniqueName: \"kubernetes.io/projected/58af371a-afdf-47cc-8ac5-d78a9db4703b-kube-api-access-njzl2\") pod \"nova-api-0\" (UID: \"58af371a-afdf-47cc-8ac5-d78a9db4703b\") " pod="openstack/nova-api-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.169403 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58af371a-afdf-47cc-8ac5-d78a9db4703b-logs\") pod \"nova-api-0\" (UID: \"58af371a-afdf-47cc-8ac5-d78a9db4703b\") " pod="openstack/nova-api-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.169455 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58af371a-afdf-47cc-8ac5-d78a9db4703b-config-data\") pod \"nova-api-0\" (UID: \"58af371a-afdf-47cc-8ac5-d78a9db4703b\") " pod="openstack/nova-api-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.169478 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.169507 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.169571 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-config\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.169630 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fbzp\" (UniqueName: \"kubernetes.io/projected/994dbeee-49d8-4572-975d-727360fff33c-kube-api-access-9fbzp\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.169661 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-dns-svc\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.169694 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.169742 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58af371a-afdf-47cc-8ac5-d78a9db4703b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"58af371a-afdf-47cc-8ac5-d78a9db4703b\") " pod="openstack/nova-api-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.204754 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.253832 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58af371a-afdf-47cc-8ac5-d78a9db4703b-config-data\") pod \"nova-api-0\" (UID: \"58af371a-afdf-47cc-8ac5-d78a9db4703b\") " pod="openstack/nova-api-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.254323 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58af371a-afdf-47cc-8ac5-d78a9db4703b-logs\") pod \"nova-api-0\" (UID: \"58af371a-afdf-47cc-8ac5-d78a9db4703b\") " pod="openstack/nova-api-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.264578 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njzl2\" (UniqueName: \"kubernetes.io/projected/58af371a-afdf-47cc-8ac5-d78a9db4703b-kube-api-access-njzl2\") pod \"nova-api-0\" (UID: \"58af371a-afdf-47cc-8ac5-d78a9db4703b\") " pod="openstack/nova-api-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.268751 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58af371a-afdf-47cc-8ac5-d78a9db4703b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"58af371a-afdf-47cc-8ac5-d78a9db4703b\") " pod="openstack/nova-api-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.271266 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.271310 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.271349 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-config\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.271379 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fbzp\" (UniqueName: \"kubernetes.io/projected/994dbeee-49d8-4572-975d-727360fff33c-kube-api-access-9fbzp\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.271407 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-dns-svc\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.271423 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.272289 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.272585 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-config\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.273077 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.273158 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.273566 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-dns-svc\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.302743 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fbzp\" (UniqueName: \"kubernetes.io/projected/994dbeee-49d8-4572-975d-727360fff33c-kube-api-access-9fbzp\") pod \"dnsmasq-dns-bccf8f775-wvp9f\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.376124 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.397165 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-wvp9f"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.555299 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.747440 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lrvhj"]
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.758577 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4859737-48c6-4939-a3d3-68b93075c72d","Type":"ContainerStarted","Data":"da9d8c40f4ee3169114a5bce502d5a14425f223027ccdca7e92648ea3155b875"}
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.758799 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 08 23:40:20 crc kubenswrapper[4945]: W0108 23:40:20.759733 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e9d1095_f2ce_463c_9f99_f4d8a10b834b.slice/crio-7c32944936300d55a01d05f1ea761e73211185485368c3248868eea37e1adc03 WatchSource:0}: Error finding container 7c32944936300d55a01d05f1ea761e73211185485368c3248868eea37e1adc03: Status 404 returned error can't find the container with id 7c32944936300d55a01d05f1ea761e73211185485368c3248868eea37e1adc03
Jan 08 23:40:20 crc kubenswrapper[4945]: I0108 23:40:20.821881 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.080381624 podStartE2EDuration="6.821835209s" podCreationTimestamp="2026-01-08 23:40:14 +0000 UTC" firstStartedPulling="2026-01-08 23:40:15.591033721 +0000 UTC m=+1485.902192667" lastFinishedPulling="2026-01-08 23:40:19.332487306 +0000 UTC m=+1489.643646252" observedRunningTime="2026-01-08 23:40:20.811534201 +0000 UTC m=+1491.122693147" watchObservedRunningTime="2026-01-08 23:40:20.821835209 +0000 UTC m=+1491.132994155"
Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.022828 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 08 23:40:21 crc kubenswrapper[4945]: W0108 23:40:21.063221 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48a8edaf_32ec_42c4_9b6d_6dbe9d07ae87.slice/crio-12ca1750d0096bc804839897e71555d84e46b43e7c677d6439fa5813047ac67b WatchSource:0}: Error finding container 12ca1750d0096bc804839897e71555d84e46b43e7c677d6439fa5813047ac67b: Status 404 returned error can't find the container with id 12ca1750d0096bc804839897e71555d84e46b43e7c677d6439fa5813047ac67b
Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.183579 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-wvp9f"]
Jan 08 23:40:21 crc kubenswrapper[4945]: W0108 23:40:21.247230 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2b74afd_664c_4549_97bf_041b2bc152c8.slice/crio-48a98e4cb4ccf8c8e10e453e110b1dad778f5d2f010beebc36de3323c35c71be WatchSource:0}: Error finding container 48a98e4cb4ccf8c8e10e453e110b1dad778f5d2f010beebc36de3323c35c71be: Status 404 returned error can't find the container with id 48a98e4cb4ccf8c8e10e453e110b1dad778f5d2f010beebc36de3323c35c71be
Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.247284 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 08 23:40:21 crc kubenswrapper[4945]: W0108 23:40:21.257095 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2388c4d5_c56c_431e_af61_4294a629c1fd.slice/crio-0a20d03c1ee39bc67bc0808cc0a7bf9f40cae50bdded1061a2f035147957d9ea WatchSource:0}: Error finding container 0a20d03c1ee39bc67bc0808cc0a7bf9f40cae50bdded1061a2f035147957d9ea: Status 404 returned error can't find the container with id 0a20d03c1ee39bc67bc0808cc0a7bf9f40cae50bdded1061a2f035147957d9ea
Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.266032 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 08 23:40:21 crc kubenswrapper[4945]: W0108 23:40:21.280744 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58af371a_afdf_47cc_8ac5_d78a9db4703b.slice/crio-82bbfd16cd9df69fb7af73773f124e8f5f167f34a6697c131fcb5d857578565a WatchSource:0}: Error finding container 82bbfd16cd9df69fb7af73773f124e8f5f167f34a6697c131fcb5d857578565a: Status 404 returned error can't find the container with id 82bbfd16cd9df69fb7af73773f124e8f5f167f34a6697c131fcb5d857578565a
Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.280777 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.467879 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rpppg"]
Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.469032 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rpppg"
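The recurring manager.go:1169 "Status 404" warnings above look like the usual benign race between the cgroup watcher and CRI-O: the kubepods cgroup for a new crio-<id> container appears before the runtime can answer a lookup for that ID, so the first query 404s and succeeds on a later pass (note each pod still reaches ContainerStarted shortly afterwards). A sketch of tolerating such a race, with a hypothetical lookupContainer function rather than cadvisor's real API:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errStatus404 mimics the "Status 404 returned error" seen when a cgroup
// watch event fires before the runtime has registered the container.
var errStatus404 = errors.New("Status 404: can't find the container")

// lookupContainer is a hypothetical stand-in: it only answers once the
// container's registration time has passed.
func lookupContainer(id string, registered time.Time) (string, error) {
	if time.Now().Before(registered) {
		return "", errStatus404
	}
	return "running", nil
}

func main() {
	id := "6e566e558630dfb3f88dbf56f8a11b92b2c028436475318eb8f8cee1eb25f9b0"
	registered := time.Now().Add(30 * time.Millisecond)
	for attempt := 1; ; attempt++ {
		state, err := lookupContainer(id, registered)
		if err != nil {
			// The first attempts 404, as in the log; retry on the next pass.
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(20 * time.Millisecond)
			continue
		}
		fmt.Printf("attempt %d: container is %s\n", attempt, state)
		return
	}
}
```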
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rpppg" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.471524 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.474232 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.485581 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rpppg"] Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.530763 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-config-data\") pod \"nova-cell1-conductor-db-sync-rpppg\" (UID: \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\") " pod="openstack/nova-cell1-conductor-db-sync-rpppg" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.530824 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7sn9\" (UniqueName: \"kubernetes.io/projected/db3e91f4-16b7-4a04-a23d-f3299d6781e5-kube-api-access-g7sn9\") pod \"nova-cell1-conductor-db-sync-rpppg\" (UID: \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\") " pod="openstack/nova-cell1-conductor-db-sync-rpppg" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.530864 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rpppg\" (UID: \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\") " pod="openstack/nova-cell1-conductor-db-sync-rpppg" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.530889 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-scripts\") pod \"nova-cell1-conductor-db-sync-rpppg\" (UID: \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\") " pod="openstack/nova-cell1-conductor-db-sync-rpppg" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.627050 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p2zcr" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.627103 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p2zcr" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.632957 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-config-data\") pod \"nova-cell1-conductor-db-sync-rpppg\" (UID: \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\") " pod="openstack/nova-cell1-conductor-db-sync-rpppg" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.633024 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7sn9\" (UniqueName: \"kubernetes.io/projected/db3e91f4-16b7-4a04-a23d-f3299d6781e5-kube-api-access-g7sn9\") pod \"nova-cell1-conductor-db-sync-rpppg\" (UID: \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\") " pod="openstack/nova-cell1-conductor-db-sync-rpppg" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.633055 4945 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rpppg\" (UID: \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\") " pod="openstack/nova-cell1-conductor-db-sync-rpppg" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.633074 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-scripts\") pod \"nova-cell1-conductor-db-sync-rpppg\" (UID: \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\") " pod="openstack/nova-cell1-conductor-db-sync-rpppg" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.640759 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-scripts\") pod \"nova-cell1-conductor-db-sync-rpppg\" (UID: \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\") " pod="openstack/nova-cell1-conductor-db-sync-rpppg" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.642647 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-config-data\") pod \"nova-cell1-conductor-db-sync-rpppg\" (UID: \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\") " pod="openstack/nova-cell1-conductor-db-sync-rpppg" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.651550 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7sn9\" (UniqueName: \"kubernetes.io/projected/db3e91f4-16b7-4a04-a23d-f3299d6781e5-kube-api-access-g7sn9\") pod \"nova-cell1-conductor-db-sync-rpppg\" (UID: \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\") " pod="openstack/nova-cell1-conductor-db-sync-rpppg" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.661835 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rpppg\" (UID: \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\") " pod="openstack/nova-cell1-conductor-db-sync-rpppg" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.692354 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p2zcr" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.772618 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lrvhj" event={"ID":"5e9d1095-f2ce-463c-9f99-f4d8a10b834b","Type":"ContainerStarted","Data":"1f410de8214b57d3744bd57097664e1e1a6c42f3f45a62aa5538c148ff68273e"} Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.773821 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lrvhj" event={"ID":"5e9d1095-f2ce-463c-9f99-f4d8a10b834b","Type":"ContainerStarted","Data":"7c32944936300d55a01d05f1ea761e73211185485368c3248868eea37e1adc03"} Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.775024 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87","Type":"ContainerStarted","Data":"12ca1750d0096bc804839897e71555d84e46b43e7c677d6439fa5813047ac67b"} Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.777264 4945 generic.go:334] "Generic (PLEG): container finished" 
podID="994dbeee-49d8-4572-975d-727360fff33c" containerID="e9a815aaa16207e999dc17b8e2b4324be57df0fe2683ccae8712a61335fd5d4a" exitCode=0 Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.777324 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-wvp9f" event={"ID":"994dbeee-49d8-4572-975d-727360fff33c","Type":"ContainerDied","Data":"e9a815aaa16207e999dc17b8e2b4324be57df0fe2683ccae8712a61335fd5d4a"} Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.777358 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-wvp9f" event={"ID":"994dbeee-49d8-4572-975d-727360fff33c","Type":"ContainerStarted","Data":"e0b7088116a2a54f1efb3083704913ac57efae7e4266b939b7a0fc89e02ca896"} Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.779880 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"58af371a-afdf-47cc-8ac5-d78a9db4703b","Type":"ContainerStarted","Data":"82bbfd16cd9df69fb7af73773f124e8f5f167f34a6697c131fcb5d857578565a"} Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.782573 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b2b74afd-664c-4549-97bf-041b2bc152c8","Type":"ContainerStarted","Data":"48a98e4cb4ccf8c8e10e453e110b1dad778f5d2f010beebc36de3323c35c71be"} Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.784564 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2388c4d5-c56c-431e-af61-4294a629c1fd","Type":"ContainerStarted","Data":"0a20d03c1ee39bc67bc0808cc0a7bf9f40cae50bdded1061a2f035147957d9ea"} Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.786120 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rpppg" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.806611 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-lrvhj" podStartSLOduration=2.806592228 podStartE2EDuration="2.806592228s" podCreationTimestamp="2026-01-08 23:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:40:21.798197796 +0000 UTC m=+1492.109356732" watchObservedRunningTime="2026-01-08 23:40:21.806592228 +0000 UTC m=+1492.117751174" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.901848 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p2zcr" Jan 08 23:40:21 crc kubenswrapper[4945]: I0108 23:40:21.996187 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2zcr"] Jan 08 23:40:22 crc kubenswrapper[4945]: I0108 23:40:22.328192 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rpppg"] Jan 08 23:40:22 crc kubenswrapper[4945]: I0108 23:40:22.806898 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rpppg" event={"ID":"db3e91f4-16b7-4a04-a23d-f3299d6781e5","Type":"ContainerStarted","Data":"db272f3a2cb229ff56011cfe4fa4d375bf8c3d30ed815cac7cd6b0c9c56a3120"} Jan 08 23:40:22 crc kubenswrapper[4945]: I0108 23:40:22.807364 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rpppg" 
event={"ID":"db3e91f4-16b7-4a04-a23d-f3299d6781e5","Type":"ContainerStarted","Data":"423979c9032f700941c6ce38f4c98fef5aa70ea229342ae6af537c8295b6eff9"} Jan 08 23:40:22 crc kubenswrapper[4945]: I0108 23:40:22.817204 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-wvp9f" event={"ID":"994dbeee-49d8-4572-975d-727360fff33c","Type":"ContainerStarted","Data":"aa2cd69b1a16b6019d2c4bcbb01573e719f50d0374354c0bfe71ef53ce73f056"} Jan 08 23:40:22 crc kubenswrapper[4945]: I0108 23:40:22.829236 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-rpppg" podStartSLOduration=1.8292139600000001 podStartE2EDuration="1.82921396s" podCreationTimestamp="2026-01-08 23:40:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:40:22.821716589 +0000 UTC m=+1493.132875535" watchObservedRunningTime="2026-01-08 23:40:22.82921396 +0000 UTC m=+1493.140372906" Jan 08 23:40:22 crc kubenswrapper[4945]: I0108 23:40:22.852651 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-wvp9f" podStartSLOduration=3.852624973 podStartE2EDuration="3.852624973s" podCreationTimestamp="2026-01-08 23:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:40:22.839128268 +0000 UTC m=+1493.150287214" watchObservedRunningTime="2026-01-08 23:40:22.852624973 +0000 UTC m=+1493.163783919" Jan 08 23:40:23 crc kubenswrapper[4945]: I0108 23:40:23.831692 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-wvp9f" Jan 08 23:40:23 crc kubenswrapper[4945]: I0108 23:40:23.831888 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p2zcr" podUID="8dab4475-9aef-40ea-8c66-6f14c30e545c" containerName="registry-server" containerID="cri-o://ce78099f6b76ee58269e6368ddf967c6a73172acec8fdcb8b43ccdba69ca8a5d" gracePeriod=2 Jan 08 23:40:24 crc kubenswrapper[4945]: I0108 23:40:24.307880 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:40:24 crc kubenswrapper[4945]: I0108 23:40:24.324388 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 08 23:40:24 crc kubenswrapper[4945]: I0108 23:40:24.840600 4945 generic.go:334] "Generic (PLEG): container finished" podID="8dab4475-9aef-40ea-8c66-6f14c30e545c" containerID="ce78099f6b76ee58269e6368ddf967c6a73172acec8fdcb8b43ccdba69ca8a5d" exitCode=0 Jan 08 23:40:24 crc kubenswrapper[4945]: I0108 23:40:24.840684 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2zcr" event={"ID":"8dab4475-9aef-40ea-8c66-6f14c30e545c","Type":"ContainerDied","Data":"ce78099f6b76ee58269e6368ddf967c6a73172acec8fdcb8b43ccdba69ca8a5d"} Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.134942 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2zcr" Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.279804 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dab4475-9aef-40ea-8c66-6f14c30e545c-utilities\") pod \"8dab4475-9aef-40ea-8c66-6f14c30e545c\" (UID: \"8dab4475-9aef-40ea-8c66-6f14c30e545c\") " Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.280076 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-422f5\" (UniqueName: \"kubernetes.io/projected/8dab4475-9aef-40ea-8c66-6f14c30e545c-kube-api-access-422f5\") pod \"8dab4475-9aef-40ea-8c66-6f14c30e545c\" (UID: \"8dab4475-9aef-40ea-8c66-6f14c30e545c\") " Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.280185 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dab4475-9aef-40ea-8c66-6f14c30e545c-catalog-content\") pod \"8dab4475-9aef-40ea-8c66-6f14c30e545c\" (UID: \"8dab4475-9aef-40ea-8c66-6f14c30e545c\") " Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.280638 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dab4475-9aef-40ea-8c66-6f14c30e545c-utilities" (OuterVolumeSpecName: "utilities") pod "8dab4475-9aef-40ea-8c66-6f14c30e545c" (UID: "8dab4475-9aef-40ea-8c66-6f14c30e545c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.288244 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dab4475-9aef-40ea-8c66-6f14c30e545c-kube-api-access-422f5" (OuterVolumeSpecName: "kube-api-access-422f5") pod "8dab4475-9aef-40ea-8c66-6f14c30e545c" (UID: "8dab4475-9aef-40ea-8c66-6f14c30e545c"). InnerVolumeSpecName "kube-api-access-422f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.305838 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dab4475-9aef-40ea-8c66-6f14c30e545c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8dab4475-9aef-40ea-8c66-6f14c30e545c" (UID: "8dab4475-9aef-40ea-8c66-6f14c30e545c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.382532 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dab4475-9aef-40ea-8c66-6f14c30e545c-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.382566 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-422f5\" (UniqueName: \"kubernetes.io/projected/8dab4475-9aef-40ea-8c66-6f14c30e545c-kube-api-access-422f5\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.382575 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dab4475-9aef-40ea-8c66-6f14c30e545c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.862398 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87","Type":"ContainerStarted","Data":"378a57da80bf2f1c6c59e5079087cdc8a7c93ca0e0bfbc08c2704a017cde5f24"} Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.862503 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://378a57da80bf2f1c6c59e5079087cdc8a7c93ca0e0bfbc08c2704a017cde5f24" gracePeriod=30 Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.864486 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"58af371a-afdf-47cc-8ac5-d78a9db4703b","Type":"ContainerStarted","Data":"16666bd37cccb6c678f4cdaa8b7f7e672a90275083cdba3da149f39b5238de37"} Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.866319 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2zcr" event={"ID":"8dab4475-9aef-40ea-8c66-6f14c30e545c","Type":"ContainerDied","Data":"9a9b89a7cebd662ba02b5f48190722ec9a8026f216e2ae24a06999a658c16a66"} Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.866355 4945 scope.go:117] "RemoveContainer" containerID="ce78099f6b76ee58269e6368ddf967c6a73172acec8fdcb8b43ccdba69ca8a5d" Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.866380 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2zcr" Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.868435 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b2b74afd-664c-4549-97bf-041b2bc152c8","Type":"ContainerStarted","Data":"f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f"} Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.873445 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2388c4d5-c56c-431e-af61-4294a629c1fd","Type":"ContainerStarted","Data":"3bb7e4db8833e36c8244dadbb85fb0d6a264233657cda43f08020fa3c9f6efa7"} Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.910096 4945 scope.go:117] "RemoveContainer" containerID="64c403401e059d748adcee7457210a568b6d3ebcdb38757511cf8b6054b7456b" Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.923960 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.970574706 podStartE2EDuration="7.923921946s" podCreationTimestamp="2026-01-08 23:40:19 +0000 UTC" firstStartedPulling="2026-01-08 23:40:21.078573928 +0000 UTC m=+1491.389732874" lastFinishedPulling="2026-01-08 23:40:26.031921168 +0000 UTC m=+1496.343080114" observedRunningTime="2026-01-08 23:40:26.89918216 +0000 UTC m=+1497.210341106" watchObservedRunningTime="2026-01-08 23:40:26.923921946 +0000 UTC m=+1497.235080892" Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.929686 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2zcr"] Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.951729 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2zcr"] Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.962080 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.199268459 podStartE2EDuration="7.962066384s" podCreationTimestamp="2026-01-08 23:40:19 +0000 UTC" firstStartedPulling="2026-01-08 23:40:21.263753774 +0000 UTC m=+1491.574912720" lastFinishedPulling="2026-01-08 23:40:26.026551699 +0000 UTC m=+1496.337710645" observedRunningTime="2026-01-08 23:40:26.959617815 +0000 UTC m=+1497.270776761" watchObservedRunningTime="2026-01-08 23:40:26.962066384 +0000 UTC m=+1497.273225330" Jan 08 23:40:26 crc kubenswrapper[4945]: I0108 23:40:26.971124 4945 scope.go:117] "RemoveContainer" containerID="a7cebbf61d952cff91f5ee6588b1b635d321e98cce2e1a6b15642a75695206b3" Jan 08 23:40:27 crc kubenswrapper[4945]: I0108 23:40:27.884138 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b2b74afd-664c-4549-97bf-041b2bc152c8","Type":"ContainerStarted","Data":"59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8"} Jan 08 23:40:27 crc kubenswrapper[4945]: I0108 23:40:27.884218 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b2b74afd-664c-4549-97bf-041b2bc152c8" containerName="nova-metadata-log" containerID="cri-o://f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f" gracePeriod=30 Jan 08 23:40:27 crc kubenswrapper[4945]: I0108 23:40:27.884614 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b2b74afd-664c-4549-97bf-041b2bc152c8" containerName="nova-metadata-metadata" 
containerID="cri-o://59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8" gracePeriod=30 Jan 08 23:40:27 crc kubenswrapper[4945]: I0108 23:40:27.889395 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"58af371a-afdf-47cc-8ac5-d78a9db4703b","Type":"ContainerStarted","Data":"316a5588848ace74d609cd5f706561fc84f271353677b57401496daabd9df679"} Jan 08 23:40:27 crc kubenswrapper[4945]: I0108 23:40:27.906671 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.99832643 podStartE2EDuration="8.906645097s" podCreationTimestamp="2026-01-08 23:40:19 +0000 UTC" firstStartedPulling="2026-01-08 23:40:21.256586242 +0000 UTC m=+1491.567745188" lastFinishedPulling="2026-01-08 23:40:26.164904909 +0000 UTC m=+1496.476063855" observedRunningTime="2026-01-08 23:40:27.904393813 +0000 UTC m=+1498.215552759" watchObservedRunningTime="2026-01-08 23:40:27.906645097 +0000 UTC m=+1498.217804043" Jan 08 23:40:27 crc kubenswrapper[4945]: I0108 23:40:27.935604 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.016273312 podStartE2EDuration="8.935579113s" podCreationTimestamp="2026-01-08 23:40:19 +0000 UTC" firstStartedPulling="2026-01-08 23:40:21.296938733 +0000 UTC m=+1491.608097679" lastFinishedPulling="2026-01-08 23:40:26.216244534 +0000 UTC m=+1496.527403480" observedRunningTime="2026-01-08 23:40:27.929711212 +0000 UTC m=+1498.240870158" watchObservedRunningTime="2026-01-08 23:40:27.935579113 +0000 UTC m=+1498.246738059" Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.013208 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dab4475-9aef-40ea-8c66-6f14c30e545c" path="/var/lib/kubelet/pods/8dab4475-9aef-40ea-8c66-6f14c30e545c/volumes" Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.498964 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.637232 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2b74afd-664c-4549-97bf-041b2bc152c8-combined-ca-bundle\") pod \"b2b74afd-664c-4549-97bf-041b2bc152c8\" (UID: \"b2b74afd-664c-4549-97bf-041b2bc152c8\") " Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.637300 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b74afd-664c-4549-97bf-041b2bc152c8-config-data\") pod \"b2b74afd-664c-4549-97bf-041b2bc152c8\" (UID: \"b2b74afd-664c-4549-97bf-041b2bc152c8\") " Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.637366 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2b74afd-664c-4549-97bf-041b2bc152c8-logs\") pod \"b2b74afd-664c-4549-97bf-041b2bc152c8\" (UID: \"b2b74afd-664c-4549-97bf-041b2bc152c8\") " Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.637406 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9gtw\" (UniqueName: \"kubernetes.io/projected/b2b74afd-664c-4549-97bf-041b2bc152c8-kube-api-access-w9gtw\") pod \"b2b74afd-664c-4549-97bf-041b2bc152c8\" (UID: \"b2b74afd-664c-4549-97bf-041b2bc152c8\") " Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.638768 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2b74afd-664c-4549-97bf-041b2bc152c8-logs" (OuterVolumeSpecName: "logs") pod "b2b74afd-664c-4549-97bf-041b2bc152c8" (UID: "b2b74afd-664c-4549-97bf-041b2bc152c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.660863 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2b74afd-664c-4549-97bf-041b2bc152c8-kube-api-access-w9gtw" (OuterVolumeSpecName: "kube-api-access-w9gtw") pod "b2b74afd-664c-4549-97bf-041b2bc152c8" (UID: "b2b74afd-664c-4549-97bf-041b2bc152c8"). InnerVolumeSpecName "kube-api-access-w9gtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.682082 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2b74afd-664c-4549-97bf-041b2bc152c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2b74afd-664c-4549-97bf-041b2bc152c8" (UID: "b2b74afd-664c-4549-97bf-041b2bc152c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.689702 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2b74afd-664c-4549-97bf-041b2bc152c8-config-data" (OuterVolumeSpecName: "config-data") pod "b2b74afd-664c-4549-97bf-041b2bc152c8" (UID: "b2b74afd-664c-4549-97bf-041b2bc152c8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.739707 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b74afd-664c-4549-97bf-041b2bc152c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.739749 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2b74afd-664c-4549-97bf-041b2bc152c8-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.739764 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9gtw\" (UniqueName: \"kubernetes.io/projected/b2b74afd-664c-4549-97bf-041b2bc152c8-kube-api-access-w9gtw\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.739780 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2b74afd-664c-4549-97bf-041b2bc152c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.900882 4945 generic.go:334] "Generic (PLEG): container finished" podID="b2b74afd-664c-4549-97bf-041b2bc152c8" containerID="59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8" exitCode=0 Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.900915 4945 generic.go:334] "Generic (PLEG): container finished" podID="b2b74afd-664c-4549-97bf-041b2bc152c8" containerID="f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f" exitCode=143 Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.901043 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b2b74afd-664c-4549-97bf-041b2bc152c8","Type":"ContainerDied","Data":"59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8"} Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.901078 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.901127 4945 scope.go:117] "RemoveContainer" containerID="59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8" Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.901112 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b2b74afd-664c-4549-97bf-041b2bc152c8","Type":"ContainerDied","Data":"f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f"} Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.901366 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b2b74afd-664c-4549-97bf-041b2bc152c8","Type":"ContainerDied","Data":"48a98e4cb4ccf8c8e10e453e110b1dad778f5d2f010beebc36de3323c35c71be"} Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.958139 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.959618 4945 scope.go:117] "RemoveContainer" containerID="f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f" Jan 08 23:40:28 crc kubenswrapper[4945]: I0108 23:40:28.979090 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.000719 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:40:29 crc kubenswrapper[4945]: E0108 23:40:29.001201 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dab4475-9aef-40ea-8c66-6f14c30e545c" containerName="registry-server" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.001217 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dab4475-9aef-40ea-8c66-6f14c30e545c" containerName="registry-server" Jan 08 23:40:29 crc kubenswrapper[4945]: E0108 23:40:29.001235 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2b74afd-664c-4549-97bf-041b2bc152c8" containerName="nova-metadata-metadata" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.001242 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2b74afd-664c-4549-97bf-041b2bc152c8" containerName="nova-metadata-metadata" Jan 08 23:40:29 crc kubenswrapper[4945]: E0108 23:40:29.001259 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2b74afd-664c-4549-97bf-041b2bc152c8" containerName="nova-metadata-log" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.001266 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2b74afd-664c-4549-97bf-041b2bc152c8" containerName="nova-metadata-log" Jan 08 23:40:29 crc kubenswrapper[4945]: E0108 23:40:29.001277 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dab4475-9aef-40ea-8c66-6f14c30e545c" containerName="extract-utilities" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.001284 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dab4475-9aef-40ea-8c66-6f14c30e545c" containerName="extract-utilities" Jan 08 23:40:29 crc kubenswrapper[4945]: E0108 23:40:29.001299 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dab4475-9aef-40ea-8c66-6f14c30e545c" containerName="extract-content" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.001310 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dab4475-9aef-40ea-8c66-6f14c30e545c" containerName="extract-content" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.001509 4945 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8dab4475-9aef-40ea-8c66-6f14c30e545c" containerName="registry-server" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.001535 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2b74afd-664c-4549-97bf-041b2bc152c8" containerName="nova-metadata-log" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.001551 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2b74afd-664c-4549-97bf-041b2bc152c8" containerName="nova-metadata-metadata" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.002527 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.010826 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.010970 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.012937 4945 scope.go:117] "RemoveContainer" containerID="59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.015250 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:40:29 crc kubenswrapper[4945]: E0108 23:40:29.016910 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8\": container with ID starting with 59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8 not found: ID does not exist" containerID="59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.016955 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8"} err="failed to get container status \"59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8\": rpc error: code = NotFound desc = could not find container \"59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8\": container with ID starting with 59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8 not found: ID does not exist" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.016985 4945 scope.go:117] "RemoveContainer" containerID="f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f" Jan 08 23:40:29 crc kubenswrapper[4945]: E0108 23:40:29.019370 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f\": container with ID starting with f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f not found: ID does not exist" containerID="f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.019419 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f"} err="failed to get container status \"f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f\": rpc error: code = NotFound desc = could not find container 
\"f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f\": container with ID starting with f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f not found: ID does not exist" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.019450 4945 scope.go:117] "RemoveContainer" containerID="59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.019889 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8"} err="failed to get container status \"59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8\": rpc error: code = NotFound desc = could not find container \"59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8\": container with ID starting with 59ecd47470be81e397b208f6a8acc1340ec54ef846505a98e577a9b96c219cb8 not found: ID does not exist" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.020017 4945 scope.go:117] "RemoveContainer" containerID="f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.020744 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f"} err="failed to get container status \"f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f\": rpc error: code = NotFound desc = could not find container \"f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f\": container with ID starting with f0d3ba6d458d1795e384aeb6bfe89c993f85d24f9c3805fae4c53f9a102be53f not found: ID does not exist" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.148026 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5b26c09-caee-4d0e-afd6-94a8673c9af7-logs\") pod \"nova-metadata-0\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " pod="openstack/nova-metadata-0" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.148096 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbgg2\" (UniqueName: \"kubernetes.io/projected/f5b26c09-caee-4d0e-afd6-94a8673c9af7-kube-api-access-hbgg2\") pod \"nova-metadata-0\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " pod="openstack/nova-metadata-0" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.148137 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-config-data\") pod \"nova-metadata-0\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " pod="openstack/nova-metadata-0" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.148159 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " pod="openstack/nova-metadata-0" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.148185 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " pod="openstack/nova-metadata-0" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.250398 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5b26c09-caee-4d0e-afd6-94a8673c9af7-logs\") pod \"nova-metadata-0\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " pod="openstack/nova-metadata-0" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.250485 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbgg2\" (UniqueName: \"kubernetes.io/projected/f5b26c09-caee-4d0e-afd6-94a8673c9af7-kube-api-access-hbgg2\") pod \"nova-metadata-0\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " pod="openstack/nova-metadata-0" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.250547 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-config-data\") pod \"nova-metadata-0\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " pod="openstack/nova-metadata-0" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.250572 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " pod="openstack/nova-metadata-0" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.250606 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " pod="openstack/nova-metadata-0" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.251537 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5b26c09-caee-4d0e-afd6-94a8673c9af7-logs\") pod \"nova-metadata-0\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " pod="openstack/nova-metadata-0" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.255343 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-config-data\") pod \"nova-metadata-0\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " pod="openstack/nova-metadata-0" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.256744 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " pod="openstack/nova-metadata-0" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.267686 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " pod="openstack/nova-metadata-0" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.273932 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbgg2\" (UniqueName: 
\"kubernetes.io/projected/f5b26c09-caee-4d0e-afd6-94a8673c9af7-kube-api-access-hbgg2\") pod \"nova-metadata-0\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " pod="openstack/nova-metadata-0" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.322527 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.792736 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:40:29 crc kubenswrapper[4945]: I0108 23:40:29.914311 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f5b26c09-caee-4d0e-afd6-94a8673c9af7","Type":"ContainerStarted","Data":"dc104ccc0f6df8b6231dc8cd805e6820507a38c78db0833d03f36e29ea1ecb10"} Jan 08 23:40:30 crc kubenswrapper[4945]: I0108 23:40:30.017279 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2b74afd-664c-4549-97bf-041b2bc152c8" path="/var/lib/kubelet/pods/b2b74afd-664c-4549-97bf-041b2bc152c8/volumes" Jan 08 23:40:30 crc kubenswrapper[4945]: I0108 23:40:30.057217 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:30 crc kubenswrapper[4945]: I0108 23:40:30.205550 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 08 23:40:30 crc kubenswrapper[4945]: I0108 23:40:30.206697 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 08 23:40:30 crc kubenswrapper[4945]: I0108 23:40:30.250784 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 08 23:40:30 crc kubenswrapper[4945]: I0108 23:40:30.400016 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-wvp9f" Jan 08 23:40:30 crc kubenswrapper[4945]: I0108 23:40:30.473950 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-x8bw5"] Jan 08 23:40:30 crc kubenswrapper[4945]: I0108 23:40:30.474340 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" podUID="c65a6567-6928-4df5-8b0f-ed77fefddcd8" containerName="dnsmasq-dns" containerID="cri-o://3c0c07919ffc3a6d21d8c9d9708d89bd56bd1ac225a2cecff96ce0e36ff56531" gracePeriod=10 Jan 08 23:40:30 crc kubenswrapper[4945]: I0108 23:40:30.556332 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 08 23:40:30 crc kubenswrapper[4945]: I0108 23:40:30.556398 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 08 23:40:30 crc kubenswrapper[4945]: I0108 23:40:30.943892 4945 generic.go:334] "Generic (PLEG): container finished" podID="c65a6567-6928-4df5-8b0f-ed77fefddcd8" containerID="3c0c07919ffc3a6d21d8c9d9708d89bd56bd1ac225a2cecff96ce0e36ff56531" exitCode=0 Jan 08 23:40:30 crc kubenswrapper[4945]: I0108 23:40:30.944102 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" event={"ID":"c65a6567-6928-4df5-8b0f-ed77fefddcd8","Type":"ContainerDied","Data":"3c0c07919ffc3a6d21d8c9d9708d89bd56bd1ac225a2cecff96ce0e36ff56531"} Jan 08 23:40:30 crc kubenswrapper[4945]: I0108 23:40:30.952723 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"f5b26c09-caee-4d0e-afd6-94a8673c9af7","Type":"ContainerStarted","Data":"7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa"} Jan 08 23:40:30 crc kubenswrapper[4945]: I0108 23:40:30.952786 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f5b26c09-caee-4d0e-afd6-94a8673c9af7","Type":"ContainerStarted","Data":"6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4"} Jan 08 23:40:30 crc kubenswrapper[4945]: I0108 23:40:30.990727 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.990682479 podStartE2EDuration="2.990682479s" podCreationTimestamp="2026-01-08 23:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:40:30.979731046 +0000 UTC m=+1501.290889992" watchObservedRunningTime="2026-01-08 23:40:30.990682479 +0000 UTC m=+1501.301841425" Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.009857 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.180976 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.200981 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-dns-swift-storage-0\") pod \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.201133 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-ovsdbserver-nb\") pod \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.201497 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78vgt\" (UniqueName: \"kubernetes.io/projected/c65a6567-6928-4df5-8b0f-ed77fefddcd8-kube-api-access-78vgt\") pod \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.201520 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-dns-svc\") pod \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.201800 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-config\") pod \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.201830 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-ovsdbserver-sb\") pod \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\" (UID: \"c65a6567-6928-4df5-8b0f-ed77fefddcd8\") " Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.324475 
Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.324475 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c65a6567-6928-4df5-8b0f-ed77fefddcd8-kube-api-access-78vgt" (OuterVolumeSpecName: "kube-api-access-78vgt") pod "c65a6567-6928-4df5-8b0f-ed77fefddcd8" (UID: "c65a6567-6928-4df5-8b0f-ed77fefddcd8"). InnerVolumeSpecName "kube-api-access-78vgt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.348017 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78vgt\" (UniqueName: \"kubernetes.io/projected/c65a6567-6928-4df5-8b0f-ed77fefddcd8-kube-api-access-78vgt\") on node \"crc\" DevicePath \"\""
Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.390815 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c65a6567-6928-4df5-8b0f-ed77fefddcd8" (UID: "c65a6567-6928-4df5-8b0f-ed77fefddcd8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.407790 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c65a6567-6928-4df5-8b0f-ed77fefddcd8" (UID: "c65a6567-6928-4df5-8b0f-ed77fefddcd8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.449863 4945 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.449897 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.454343 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c65a6567-6928-4df5-8b0f-ed77fefddcd8" (UID: "c65a6567-6928-4df5-8b0f-ed77fefddcd8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.485778 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-config" (OuterVolumeSpecName: "config") pod "c65a6567-6928-4df5-8b0f-ed77fefddcd8" (UID: "c65a6567-6928-4df5-8b0f-ed77fefddcd8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.551352 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.551403 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.551419 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c65a6567-6928-4df5-8b0f-ed77fefddcd8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.638646 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="58af371a-afdf-47cc-8ac5-d78a9db4703b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.638834 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="58af371a-afdf-47cc-8ac5-d78a9db4703b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.965872 4945 generic.go:334] "Generic (PLEG): container finished" podID="5e9d1095-f2ce-463c-9f99-f4d8a10b834b" containerID="1f410de8214b57d3744bd57097664e1e1a6c42f3f45a62aa5538c148ff68273e" exitCode=0 Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.965963 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lrvhj" event={"ID":"5e9d1095-f2ce-463c-9f99-f4d8a10b834b","Type":"ContainerDied","Data":"1f410de8214b57d3744bd57097664e1e1a6c42f3f45a62aa5538c148ff68273e"} Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.968931 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.969133 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-x8bw5" event={"ID":"c65a6567-6928-4df5-8b0f-ed77fefddcd8","Type":"ContainerDied","Data":"7f7547189a01e1071ccf85f176cb5b98436dd397a2c80517803ff52c0d0c881a"} Jan 08 23:40:31 crc kubenswrapper[4945]: I0108 23:40:31.969194 4945 scope.go:117] "RemoveContainer" containerID="3c0c07919ffc3a6d21d8c9d9708d89bd56bd1ac225a2cecff96ce0e36ff56531" Jan 08 23:40:32 crc kubenswrapper[4945]: I0108 23:40:32.004789 4945 scope.go:117] "RemoveContainer" containerID="06f6880899812494c79a3bde8112d18f67d5ca8c6d4976c0678741e17fc576fe" Jan 08 23:40:32 crc kubenswrapper[4945]: I0108 23:40:32.071970 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-x8bw5"] Jan 08 23:40:32 crc kubenswrapper[4945]: I0108 23:40:32.087156 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-x8bw5"] Jan 08 23:40:32 crc kubenswrapper[4945]: I0108 23:40:32.993809 4945 generic.go:334] "Generic (PLEG): container finished" podID="db3e91f4-16b7-4a04-a23d-f3299d6781e5" containerID="db272f3a2cb229ff56011cfe4fa4d375bf8c3d30ed815cac7cd6b0c9c56a3120" exitCode=0 Jan 08 23:40:32 crc kubenswrapper[4945]: I0108 23:40:32.993885 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rpppg" event={"ID":"db3e91f4-16b7-4a04-a23d-f3299d6781e5","Type":"ContainerDied","Data":"db272f3a2cb229ff56011cfe4fa4d375bf8c3d30ed815cac7cd6b0c9c56a3120"} Jan 08 23:40:33 crc kubenswrapper[4945]: I0108 23:40:33.392671 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lrvhj" Jan 08 23:40:33 crc kubenswrapper[4945]: I0108 23:40:33.500859 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-scripts\") pod \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\" (UID: \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\") " Jan 08 23:40:33 crc kubenswrapper[4945]: I0108 23:40:33.500973 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwbsl\" (UniqueName: \"kubernetes.io/projected/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-kube-api-access-kwbsl\") pod \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\" (UID: \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\") " Jan 08 23:40:33 crc kubenswrapper[4945]: I0108 23:40:33.501030 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-config-data\") pod \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\" (UID: \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\") " Jan 08 23:40:33 crc kubenswrapper[4945]: I0108 23:40:33.501084 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-combined-ca-bundle\") pod \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\" (UID: \"5e9d1095-f2ce-463c-9f99-f4d8a10b834b\") " Jan 08 23:40:33 crc kubenswrapper[4945]: I0108 23:40:33.509244 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-kube-api-access-kwbsl" (OuterVolumeSpecName: "kube-api-access-kwbsl") pod 
"5e9d1095-f2ce-463c-9f99-f4d8a10b834b" (UID: "5e9d1095-f2ce-463c-9f99-f4d8a10b834b"). InnerVolumeSpecName "kube-api-access-kwbsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:40:33 crc kubenswrapper[4945]: I0108 23:40:33.513403 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-scripts" (OuterVolumeSpecName: "scripts") pod "5e9d1095-f2ce-463c-9f99-f4d8a10b834b" (UID: "5e9d1095-f2ce-463c-9f99-f4d8a10b834b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:33 crc kubenswrapper[4945]: I0108 23:40:33.538237 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-config-data" (OuterVolumeSpecName: "config-data") pod "5e9d1095-f2ce-463c-9f99-f4d8a10b834b" (UID: "5e9d1095-f2ce-463c-9f99-f4d8a10b834b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:33 crc kubenswrapper[4945]: I0108 23:40:33.542649 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e9d1095-f2ce-463c-9f99-f4d8a10b834b" (UID: "5e9d1095-f2ce-463c-9f99-f4d8a10b834b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:33 crc kubenswrapper[4945]: I0108 23:40:33.603698 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:33 crc kubenswrapper[4945]: I0108 23:40:33.603743 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwbsl\" (UniqueName: \"kubernetes.io/projected/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-kube-api-access-kwbsl\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:33 crc kubenswrapper[4945]: I0108 23:40:33.603760 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:33 crc kubenswrapper[4945]: I0108 23:40:33.603775 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e9d1095-f2ce-463c-9f99-f4d8a10b834b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.013866 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c65a6567-6928-4df5-8b0f-ed77fefddcd8" path="/var/lib/kubelet/pods/c65a6567-6928-4df5-8b0f-ed77fefddcd8/volumes" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.018954 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lrvhj" event={"ID":"5e9d1095-f2ce-463c-9f99-f4d8a10b834b","Type":"ContainerDied","Data":"7c32944936300d55a01d05f1ea761e73211185485368c3248868eea37e1adc03"} Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.018969 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lrvhj" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.023592 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c32944936300d55a01d05f1ea761e73211185485368c3248868eea37e1adc03" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.202085 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.202403 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="58af371a-afdf-47cc-8ac5-d78a9db4703b" containerName="nova-api-log" containerID="cri-o://16666bd37cccb6c678f4cdaa8b7f7e672a90275083cdba3da149f39b5238de37" gracePeriod=30 Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.202722 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="58af371a-afdf-47cc-8ac5-d78a9db4703b" containerName="nova-api-api" containerID="cri-o://316a5588848ace74d609cd5f706561fc84f271353677b57401496daabd9df679" gracePeriod=30 Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.221789 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.222015 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2388c4d5-c56c-431e-af61-4294a629c1fd" containerName="nova-scheduler-scheduler" containerID="cri-o://3bb7e4db8833e36c8244dadbb85fb0d6a264233657cda43f08020fa3c9f6efa7" gracePeriod=30 Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.255977 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.256229 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f5b26c09-caee-4d0e-afd6-94a8673c9af7" containerName="nova-metadata-log" containerID="cri-o://6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4" gracePeriod=30 Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.256380 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f5b26c09-caee-4d0e-afd6-94a8673c9af7" containerName="nova-metadata-metadata" containerID="cri-o://7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa" gracePeriod=30 Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.323827 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.324217 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.614761 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rpppg" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.641781 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7sn9\" (UniqueName: \"kubernetes.io/projected/db3e91f4-16b7-4a04-a23d-f3299d6781e5-kube-api-access-g7sn9\") pod \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\" (UID: \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\") " Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.641830 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-scripts\") pod \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\" (UID: \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\") " Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.641890 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-combined-ca-bundle\") pod \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\" (UID: \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\") " Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.642090 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-config-data\") pod \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\" (UID: \"db3e91f4-16b7-4a04-a23d-f3299d6781e5\") " Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.654240 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db3e91f4-16b7-4a04-a23d-f3299d6781e5-kube-api-access-g7sn9" (OuterVolumeSpecName: "kube-api-access-g7sn9") pod "db3e91f4-16b7-4a04-a23d-f3299d6781e5" (UID: "db3e91f4-16b7-4a04-a23d-f3299d6781e5"). InnerVolumeSpecName "kube-api-access-g7sn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.742324 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-scripts" (OuterVolumeSpecName: "scripts") pod "db3e91f4-16b7-4a04-a23d-f3299d6781e5" (UID: "db3e91f4-16b7-4a04-a23d-f3299d6781e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.744750 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7sn9\" (UniqueName: \"kubernetes.io/projected/db3e91f4-16b7-4a04-a23d-f3299d6781e5-kube-api-access-g7sn9\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.744783 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.778270 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-config-data" (OuterVolumeSpecName: "config-data") pod "db3e91f4-16b7-4a04-a23d-f3299d6781e5" (UID: "db3e91f4-16b7-4a04-a23d-f3299d6781e5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.778449 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db3e91f4-16b7-4a04-a23d-f3299d6781e5" (UID: "db3e91f4-16b7-4a04-a23d-f3299d6781e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.847623 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.847715 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db3e91f4-16b7-4a04-a23d-f3299d6781e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.906284 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.948866 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbgg2\" (UniqueName: \"kubernetes.io/projected/f5b26c09-caee-4d0e-afd6-94a8673c9af7-kube-api-access-hbgg2\") pod \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.948973 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-config-data\") pod \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.949104 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-combined-ca-bundle\") pod \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.949730 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-nova-metadata-tls-certs\") pod \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.949970 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5b26c09-caee-4d0e-afd6-94a8673c9af7-logs\") pod \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\" (UID: \"f5b26c09-caee-4d0e-afd6-94a8673c9af7\") " Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.950894 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5b26c09-caee-4d0e-afd6-94a8673c9af7-logs" (OuterVolumeSpecName: "logs") pod "f5b26c09-caee-4d0e-afd6-94a8673c9af7" (UID: "f5b26c09-caee-4d0e-afd6-94a8673c9af7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.953177 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5b26c09-caee-4d0e-afd6-94a8673c9af7-kube-api-access-hbgg2" (OuterVolumeSpecName: "kube-api-access-hbgg2") pod "f5b26c09-caee-4d0e-afd6-94a8673c9af7" (UID: "f5b26c09-caee-4d0e-afd6-94a8673c9af7"). InnerVolumeSpecName "kube-api-access-hbgg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.975734 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5b26c09-caee-4d0e-afd6-94a8673c9af7" (UID: "f5b26c09-caee-4d0e-afd6-94a8673c9af7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:34 crc kubenswrapper[4945]: I0108 23:40:34.997360 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-config-data" (OuterVolumeSpecName: "config-data") pod "f5b26c09-caee-4d0e-afd6-94a8673c9af7" (UID: "f5b26c09-caee-4d0e-afd6-94a8673c9af7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.011362 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f5b26c09-caee-4d0e-afd6-94a8673c9af7" (UID: "f5b26c09-caee-4d0e-afd6-94a8673c9af7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.030720 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rpppg" event={"ID":"db3e91f4-16b7-4a04-a23d-f3299d6781e5","Type":"ContainerDied","Data":"423979c9032f700941c6ce38f4c98fef5aa70ea229342ae6af537c8295b6eff9"} Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.031276 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="423979c9032f700941c6ce38f4c98fef5aa70ea229342ae6af537c8295b6eff9" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.030956 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rpppg" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.033840 4945 generic.go:334] "Generic (PLEG): container finished" podID="f5b26c09-caee-4d0e-afd6-94a8673c9af7" containerID="7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa" exitCode=0 Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.033859 4945 generic.go:334] "Generic (PLEG): container finished" podID="f5b26c09-caee-4d0e-afd6-94a8673c9af7" containerID="6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4" exitCode=143 Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.033895 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f5b26c09-caee-4d0e-afd6-94a8673c9af7","Type":"ContainerDied","Data":"7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa"} Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.033912 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f5b26c09-caee-4d0e-afd6-94a8673c9af7","Type":"ContainerDied","Data":"6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4"} Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.033923 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f5b26c09-caee-4d0e-afd6-94a8673c9af7","Type":"ContainerDied","Data":"dc104ccc0f6df8b6231dc8cd805e6820507a38c78db0833d03f36e29ea1ecb10"} Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.033940 4945 scope.go:117] "RemoveContainer" containerID="7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.034079 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.042290 4945 generic.go:334] "Generic (PLEG): container finished" podID="58af371a-afdf-47cc-8ac5-d78a9db4703b" containerID="16666bd37cccb6c678f4cdaa8b7f7e672a90275083cdba3da149f39b5238de37" exitCode=143 Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.042330 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"58af371a-afdf-47cc-8ac5-d78a9db4703b","Type":"ContainerDied","Data":"16666bd37cccb6c678f4cdaa8b7f7e672a90275083cdba3da149f39b5238de37"} Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.052411 4945 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.052458 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5b26c09-caee-4d0e-afd6-94a8673c9af7-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.052477 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbgg2\" (UniqueName: \"kubernetes.io/projected/f5b26c09-caee-4d0e-afd6-94a8673c9af7-kube-api-access-hbgg2\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.052491 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.052504 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b26c09-caee-4d0e-afd6-94a8673c9af7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.073885 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.082484 4945 scope.go:117] "RemoveContainer" containerID="6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.091822 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.122731 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:40:35 crc kubenswrapper[4945]: E0108 23:40:35.123543 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b26c09-caee-4d0e-afd6-94a8673c9af7" containerName="nova-metadata-metadata" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.123567 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b26c09-caee-4d0e-afd6-94a8673c9af7" containerName="nova-metadata-metadata" Jan 08 23:40:35 crc kubenswrapper[4945]: E0108 23:40:35.123585 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5b26c09-caee-4d0e-afd6-94a8673c9af7" containerName="nova-metadata-log" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.123592 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5b26c09-caee-4d0e-afd6-94a8673c9af7" containerName="nova-metadata-log" Jan 08 23:40:35 crc kubenswrapper[4945]: E0108 23:40:35.123604 4945 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c65a6567-6928-4df5-8b0f-ed77fefddcd8" containerName="init" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.123610 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c65a6567-6928-4df5-8b0f-ed77fefddcd8" containerName="init" Jan 08 23:40:35 crc kubenswrapper[4945]: E0108 23:40:35.123624 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c65a6567-6928-4df5-8b0f-ed77fefddcd8" containerName="dnsmasq-dns" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.123630 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c65a6567-6928-4df5-8b0f-ed77fefddcd8" containerName="dnsmasq-dns" Jan 08 23:40:35 crc kubenswrapper[4945]: E0108 23:40:35.123644 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db3e91f4-16b7-4a04-a23d-f3299d6781e5" containerName="nova-cell1-conductor-db-sync" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.123652 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="db3e91f4-16b7-4a04-a23d-f3299d6781e5" containerName="nova-cell1-conductor-db-sync" Jan 08 23:40:35 crc kubenswrapper[4945]: E0108 23:40:35.123664 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e9d1095-f2ce-463c-9f99-f4d8a10b834b" containerName="nova-manage" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.123669 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e9d1095-f2ce-463c-9f99-f4d8a10b834b" containerName="nova-manage" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.123858 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="c65a6567-6928-4df5-8b0f-ed77fefddcd8" containerName="dnsmasq-dns" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.123870 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5b26c09-caee-4d0e-afd6-94a8673c9af7" containerName="nova-metadata-metadata" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.123883 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5b26c09-caee-4d0e-afd6-94a8673c9af7" containerName="nova-metadata-log" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.123899 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="db3e91f4-16b7-4a04-a23d-f3299d6781e5" containerName="nova-cell1-conductor-db-sync" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.123906 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e9d1095-f2ce-463c-9f99-f4d8a10b834b" containerName="nova-manage" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.124725 4945 scope.go:117] "RemoveContainer" containerID="7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.125261 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: E0108 23:40:35.126525 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa\": container with ID starting with 7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa not found: ID does not exist" containerID="7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.126578 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa"} err="failed to get container status \"7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa\": rpc error: code = NotFound desc = could not find container \"7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa\": container with ID starting with 7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa not found: ID does not exist" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.126605 4945 scope.go:117] "RemoveContainer" containerID="6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4" Jan 08 23:40:35 crc kubenswrapper[4945]: E0108 23:40:35.127361 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4\": container with ID starting with 6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4 not found: ID does not exist" containerID="6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.127427 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4"} err="failed to get container status \"6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4\": rpc error: code = NotFound desc = could not find container \"6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4\": container with ID starting with 6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4 not found: ID does not exist" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.127467 4945 scope.go:117] "RemoveContainer" containerID="7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.130607 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa"} err="failed to get container status \"7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa\": rpc error: code = NotFound desc = could not find container \"7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa\": container with ID starting with 7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa not found: ID does not exist" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.130662 4945 scope.go:117] "RemoveContainer" containerID="6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.131944 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4"} err="failed to get 
container status \"6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4\": rpc error: code = NotFound desc = could not find container \"6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4\": container with ID starting with 6a0e78d59e9880c79e347d47845cdd79d4c10cab8c08239916491383332144d4 not found: ID does not exist" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.132519 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.133629 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.137280 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.156764 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhdpm\" (UniqueName: \"kubernetes.io/projected/67c81217-0b52-461b-9eaf-cefc63fbaa10-kube-api-access-zhdpm\") pod \"nova-metadata-0\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.157357 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.157459 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67c81217-0b52-461b-9eaf-cefc63fbaa10-logs\") pod \"nova-metadata-0\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.157529 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-config-data\") pod \"nova-metadata-0\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.157659 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.163236 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.164764 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.180608 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.199114 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 08 23:40:35 crc kubenswrapper[4945]: E0108 23:40:35.210495 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3bb7e4db8833e36c8244dadbb85fb0d6a264233657cda43f08020fa3c9f6efa7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 08 23:40:35 crc kubenswrapper[4945]: E0108 23:40:35.212212 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3bb7e4db8833e36c8244dadbb85fb0d6a264233657cda43f08020fa3c9f6efa7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 08 23:40:35 crc kubenswrapper[4945]: E0108 23:40:35.213707 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3bb7e4db8833e36c8244dadbb85fb0d6a264233657cda43f08020fa3c9f6efa7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 08 23:40:35 crc kubenswrapper[4945]: E0108 23:40:35.213763 4945 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2388c4d5-c56c-431e-af61-4294a629c1fd" containerName="nova-scheduler-scheduler" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.259285 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-config-data\") pod \"nova-metadata-0\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.259374 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.259441 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhdpm\" (UniqueName: \"kubernetes.io/projected/67c81217-0b52-461b-9eaf-cefc63fbaa10-kube-api-access-zhdpm\") pod \"nova-metadata-0\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.259468 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfhbr\" (UniqueName: \"kubernetes.io/projected/7b8f132e-3fda-4a38-8416-1055a62e7552-kube-api-access-lfhbr\") pod \"nova-cell1-conductor-0\" (UID: \"7b8f132e-3fda-4a38-8416-1055a62e7552\") " pod="openstack/nova-cell1-conductor-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.259489 4945 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8f132e-3fda-4a38-8416-1055a62e7552-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7b8f132e-3fda-4a38-8416-1055a62e7552\") " pod="openstack/nova-cell1-conductor-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.259515 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.259550 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8f132e-3fda-4a38-8416-1055a62e7552-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7b8f132e-3fda-4a38-8416-1055a62e7552\") " pod="openstack/nova-cell1-conductor-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.259568 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67c81217-0b52-461b-9eaf-cefc63fbaa10-logs\") pod \"nova-metadata-0\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.260941 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67c81217-0b52-461b-9eaf-cefc63fbaa10-logs\") pod \"nova-metadata-0\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.264076 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.264769 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.270735 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-config-data\") pod \"nova-metadata-0\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.285569 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhdpm\" (UniqueName: \"kubernetes.io/projected/67c81217-0b52-461b-9eaf-cefc63fbaa10-kube-api-access-zhdpm\") pod \"nova-metadata-0\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.361709 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfhbr\" (UniqueName: \"kubernetes.io/projected/7b8f132e-3fda-4a38-8416-1055a62e7552-kube-api-access-lfhbr\") pod \"nova-cell1-conductor-0\" (UID: 
\"7b8f132e-3fda-4a38-8416-1055a62e7552\") " pod="openstack/nova-cell1-conductor-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.361790 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8f132e-3fda-4a38-8416-1055a62e7552-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7b8f132e-3fda-4a38-8416-1055a62e7552\") " pod="openstack/nova-cell1-conductor-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.361880 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8f132e-3fda-4a38-8416-1055a62e7552-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7b8f132e-3fda-4a38-8416-1055a62e7552\") " pod="openstack/nova-cell1-conductor-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.367034 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8f132e-3fda-4a38-8416-1055a62e7552-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7b8f132e-3fda-4a38-8416-1055a62e7552\") " pod="openstack/nova-cell1-conductor-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.367290 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8f132e-3fda-4a38-8416-1055a62e7552-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7b8f132e-3fda-4a38-8416-1055a62e7552\") " pod="openstack/nova-cell1-conductor-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.389513 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfhbr\" (UniqueName: \"kubernetes.io/projected/7b8f132e-3fda-4a38-8416-1055a62e7552-kube-api-access-lfhbr\") pod \"nova-cell1-conductor-0\" (UID: \"7b8f132e-3fda-4a38-8416-1055a62e7552\") " pod="openstack/nova-cell1-conductor-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.448183 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.501189 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 08 23:40:35 crc kubenswrapper[4945]: I0108 23:40:35.995892 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:40:36 crc kubenswrapper[4945]: I0108 23:40:36.014434 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5b26c09-caee-4d0e-afd6-94a8673c9af7" path="/var/lib/kubelet/pods/f5b26c09-caee-4d0e-afd6-94a8673c9af7/volumes" Jan 08 23:40:36 crc kubenswrapper[4945]: I0108 23:40:36.065335 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67c81217-0b52-461b-9eaf-cefc63fbaa10","Type":"ContainerStarted","Data":"1cdc2807ef368081844e821abc4704c5e8a5356f18ff1e2b6206fa791f8e4cd2"} Jan 08 23:40:36 crc kubenswrapper[4945]: I0108 23:40:36.100391 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 08 23:40:37 crc kubenswrapper[4945]: I0108 23:40:37.075661 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7b8f132e-3fda-4a38-8416-1055a62e7552","Type":"ContainerStarted","Data":"8a7142f351f49dd941a579a875be8ea9073637a6c87204db987a41fea1ed6c9f"} Jan 08 23:40:37 crc kubenswrapper[4945]: I0108 23:40:37.076101 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7b8f132e-3fda-4a38-8416-1055a62e7552","Type":"ContainerStarted","Data":"6c2771676fce4f06fa6806009437dbcca0260587466affbf887c31ab79e1fc31"} Jan 08 23:40:37 crc kubenswrapper[4945]: I0108 23:40:37.076127 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 08 23:40:37 crc kubenswrapper[4945]: I0108 23:40:37.077481 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67c81217-0b52-461b-9eaf-cefc63fbaa10","Type":"ContainerStarted","Data":"d0c3e2259dce88d8b5b32b2c585096e8579289547788d3d21753bb0a67f20f72"} Jan 08 23:40:37 crc kubenswrapper[4945]: I0108 23:40:37.077506 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67c81217-0b52-461b-9eaf-cefc63fbaa10","Type":"ContainerStarted","Data":"61a2eccfd3d6db740c4277ba982127ff3df17e4a9559e25626ea6a71422bfa63"} Jan 08 23:40:37 crc kubenswrapper[4945]: I0108 23:40:37.107672 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.107648594 podStartE2EDuration="2.107648594s" podCreationTimestamp="2026-01-08 23:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:40:37.091948446 +0000 UTC m=+1507.403107402" watchObservedRunningTime="2026-01-08 23:40:37.107648594 +0000 UTC m=+1507.418807540" Jan 08 23:40:37 crc kubenswrapper[4945]: I0108 23:40:37.141802 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.141756385 podStartE2EDuration="2.141756385s" podCreationTimestamp="2026-01-08 23:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:40:37.115774949 +0000 UTC m=+1507.426933895" watchObservedRunningTime="2026-01-08 23:40:37.141756385 +0000 UTC m=+1507.452915341" Jan 08 23:40:38 crc kubenswrapper[4945]: I0108 23:40:38.103012 4945 generic.go:334] "Generic (PLEG): container 
finished" podID="58af371a-afdf-47cc-8ac5-d78a9db4703b" containerID="316a5588848ace74d609cd5f706561fc84f271353677b57401496daabd9df679" exitCode=0 Jan 08 23:40:38 crc kubenswrapper[4945]: I0108 23:40:38.104735 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"58af371a-afdf-47cc-8ac5-d78a9db4703b","Type":"ContainerDied","Data":"316a5588848ace74d609cd5f706561fc84f271353677b57401496daabd9df679"} Jan 08 23:40:38 crc kubenswrapper[4945]: I0108 23:40:38.104859 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"58af371a-afdf-47cc-8ac5-d78a9db4703b","Type":"ContainerDied","Data":"82bbfd16cd9df69fb7af73773f124e8f5f167f34a6697c131fcb5d857578565a"} Jan 08 23:40:38 crc kubenswrapper[4945]: I0108 23:40:38.104926 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82bbfd16cd9df69fb7af73773f124e8f5f167f34a6697c131fcb5d857578565a" Jan 08 23:40:38 crc kubenswrapper[4945]: I0108 23:40:38.166623 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 08 23:40:38 crc kubenswrapper[4945]: I0108 23:40:38.326542 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58af371a-afdf-47cc-8ac5-d78a9db4703b-logs\") pod \"58af371a-afdf-47cc-8ac5-d78a9db4703b\" (UID: \"58af371a-afdf-47cc-8ac5-d78a9db4703b\") " Jan 08 23:40:38 crc kubenswrapper[4945]: I0108 23:40:38.326805 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58af371a-afdf-47cc-8ac5-d78a9db4703b-combined-ca-bundle\") pod \"58af371a-afdf-47cc-8ac5-d78a9db4703b\" (UID: \"58af371a-afdf-47cc-8ac5-d78a9db4703b\") " Jan 08 23:40:38 crc kubenswrapper[4945]: I0108 23:40:38.326850 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njzl2\" (UniqueName: \"kubernetes.io/projected/58af371a-afdf-47cc-8ac5-d78a9db4703b-kube-api-access-njzl2\") pod \"58af371a-afdf-47cc-8ac5-d78a9db4703b\" (UID: \"58af371a-afdf-47cc-8ac5-d78a9db4703b\") " Jan 08 23:40:38 crc kubenswrapper[4945]: I0108 23:40:38.326962 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58af371a-afdf-47cc-8ac5-d78a9db4703b-config-data\") pod \"58af371a-afdf-47cc-8ac5-d78a9db4703b\" (UID: \"58af371a-afdf-47cc-8ac5-d78a9db4703b\") " Jan 08 23:40:38 crc kubenswrapper[4945]: I0108 23:40:38.327094 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58af371a-afdf-47cc-8ac5-d78a9db4703b-logs" (OuterVolumeSpecName: "logs") pod "58af371a-afdf-47cc-8ac5-d78a9db4703b" (UID: "58af371a-afdf-47cc-8ac5-d78a9db4703b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:40:38 crc kubenswrapper[4945]: I0108 23:40:38.327496 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58af371a-afdf-47cc-8ac5-d78a9db4703b-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:38 crc kubenswrapper[4945]: I0108 23:40:38.336497 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58af371a-afdf-47cc-8ac5-d78a9db4703b-kube-api-access-njzl2" (OuterVolumeSpecName: "kube-api-access-njzl2") pod "58af371a-afdf-47cc-8ac5-d78a9db4703b" (UID: "58af371a-afdf-47cc-8ac5-d78a9db4703b"). 
InnerVolumeSpecName "kube-api-access-njzl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:40:38 crc kubenswrapper[4945]: I0108 23:40:38.385770 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58af371a-afdf-47cc-8ac5-d78a9db4703b-config-data" (OuterVolumeSpecName: "config-data") pod "58af371a-afdf-47cc-8ac5-d78a9db4703b" (UID: "58af371a-afdf-47cc-8ac5-d78a9db4703b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:38 crc kubenswrapper[4945]: I0108 23:40:38.389228 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58af371a-afdf-47cc-8ac5-d78a9db4703b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58af371a-afdf-47cc-8ac5-d78a9db4703b" (UID: "58af371a-afdf-47cc-8ac5-d78a9db4703b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:38 crc kubenswrapper[4945]: I0108 23:40:38.429565 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58af371a-afdf-47cc-8ac5-d78a9db4703b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:38 crc kubenswrapper[4945]: I0108 23:40:38.429965 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njzl2\" (UniqueName: \"kubernetes.io/projected/58af371a-afdf-47cc-8ac5-d78a9db4703b-kube-api-access-njzl2\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:38 crc kubenswrapper[4945]: I0108 23:40:38.430088 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58af371a-afdf-47cc-8ac5-d78a9db4703b-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.074568 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.117394 4945 generic.go:334] "Generic (PLEG): container finished" podID="2388c4d5-c56c-431e-af61-4294a629c1fd" containerID="3bb7e4db8833e36c8244dadbb85fb0d6a264233657cda43f08020fa3c9f6efa7" exitCode=0 Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.117453 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.117487 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.117513 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2388c4d5-c56c-431e-af61-4294a629c1fd","Type":"ContainerDied","Data":"3bb7e4db8833e36c8244dadbb85fb0d6a264233657cda43f08020fa3c9f6efa7"} Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.117543 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2388c4d5-c56c-431e-af61-4294a629c1fd","Type":"ContainerDied","Data":"0a20d03c1ee39bc67bc0808cc0a7bf9f40cae50bdded1061a2f035147957d9ea"} Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.117564 4945 scope.go:117] "RemoveContainer" containerID="3bb7e4db8833e36c8244dadbb85fb0d6a264233657cda43f08020fa3c9f6efa7" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.154089 4945 scope.go:117] "RemoveContainer" containerID="3bb7e4db8833e36c8244dadbb85fb0d6a264233657cda43f08020fa3c9f6efa7" Jan 08 23:40:39 crc kubenswrapper[4945]: E0108 23:40:39.156610 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bb7e4db8833e36c8244dadbb85fb0d6a264233657cda43f08020fa3c9f6efa7\": container with ID starting with 3bb7e4db8833e36c8244dadbb85fb0d6a264233657cda43f08020fa3c9f6efa7 not found: ID does not exist" containerID="3bb7e4db8833e36c8244dadbb85fb0d6a264233657cda43f08020fa3c9f6efa7" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.156738 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb7e4db8833e36c8244dadbb85fb0d6a264233657cda43f08020fa3c9f6efa7"} err="failed to get container status \"3bb7e4db8833e36c8244dadbb85fb0d6a264233657cda43f08020fa3c9f6efa7\": rpc error: code = NotFound desc = could not find container \"3bb7e4db8833e36c8244dadbb85fb0d6a264233657cda43f08020fa3c9f6efa7\": container with ID starting with 3bb7e4db8833e36c8244dadbb85fb0d6a264233657cda43f08020fa3c9f6efa7 not found: ID does not exist" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.161343 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.168119 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.193437 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 08 23:40:39 crc kubenswrapper[4945]: E0108 23:40:39.193879 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58af371a-afdf-47cc-8ac5-d78a9db4703b" containerName="nova-api-log" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.193903 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="58af371a-afdf-47cc-8ac5-d78a9db4703b" containerName="nova-api-log" Jan 08 23:40:39 crc kubenswrapper[4945]: E0108 23:40:39.193917 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58af371a-afdf-47cc-8ac5-d78a9db4703b" containerName="nova-api-api" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.193923 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="58af371a-afdf-47cc-8ac5-d78a9db4703b" containerName="nova-api-api" Jan 08 23:40:39 crc kubenswrapper[4945]: E0108 23:40:39.193940 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2388c4d5-c56c-431e-af61-4294a629c1fd" containerName="nova-scheduler-scheduler" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 
23:40:39.193946 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2388c4d5-c56c-431e-af61-4294a629c1fd" containerName="nova-scheduler-scheduler" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.194152 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="58af371a-afdf-47cc-8ac5-d78a9db4703b" containerName="nova-api-api" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.194181 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="2388c4d5-c56c-431e-af61-4294a629c1fd" containerName="nova-scheduler-scheduler" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.194192 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="58af371a-afdf-47cc-8ac5-d78a9db4703b" containerName="nova-api-log" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.198815 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.201449 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.230105 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.254316 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2388c4d5-c56c-431e-af61-4294a629c1fd-combined-ca-bundle\") pod \"2388c4d5-c56c-431e-af61-4294a629c1fd\" (UID: \"2388c4d5-c56c-431e-af61-4294a629c1fd\") " Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.254413 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zngm6\" (UniqueName: \"kubernetes.io/projected/2388c4d5-c56c-431e-af61-4294a629c1fd-kube-api-access-zngm6\") pod \"2388c4d5-c56c-431e-af61-4294a629c1fd\" (UID: \"2388c4d5-c56c-431e-af61-4294a629c1fd\") " Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.254474 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2388c4d5-c56c-431e-af61-4294a629c1fd-config-data\") pod \"2388c4d5-c56c-431e-af61-4294a629c1fd\" (UID: \"2388c4d5-c56c-431e-af61-4294a629c1fd\") " Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.261316 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2388c4d5-c56c-431e-af61-4294a629c1fd-kube-api-access-zngm6" (OuterVolumeSpecName: "kube-api-access-zngm6") pod "2388c4d5-c56c-431e-af61-4294a629c1fd" (UID: "2388c4d5-c56c-431e-af61-4294a629c1fd"). InnerVolumeSpecName "kube-api-access-zngm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.298379 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2388c4d5-c56c-431e-af61-4294a629c1fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2388c4d5-c56c-431e-af61-4294a629c1fd" (UID: "2388c4d5-c56c-431e-af61-4294a629c1fd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.298548 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2388c4d5-c56c-431e-af61-4294a629c1fd-config-data" (OuterVolumeSpecName: "config-data") pod "2388c4d5-c56c-431e-af61-4294a629c1fd" (UID: "2388c4d5-c56c-431e-af61-4294a629c1fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.357142 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aaeaaca-7e52-4492-9adb-2800f51d8159-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9aaeaaca-7e52-4492-9adb-2800f51d8159\") " pod="openstack/nova-api-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.357197 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aaeaaca-7e52-4492-9adb-2800f51d8159-logs\") pod \"nova-api-0\" (UID: \"9aaeaaca-7e52-4492-9adb-2800f51d8159\") " pod="openstack/nova-api-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.357229 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcjzw\" (UniqueName: \"kubernetes.io/projected/9aaeaaca-7e52-4492-9adb-2800f51d8159-kube-api-access-rcjzw\") pod \"nova-api-0\" (UID: \"9aaeaaca-7e52-4492-9adb-2800f51d8159\") " pod="openstack/nova-api-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.357249 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aaeaaca-7e52-4492-9adb-2800f51d8159-config-data\") pod \"nova-api-0\" (UID: \"9aaeaaca-7e52-4492-9adb-2800f51d8159\") " pod="openstack/nova-api-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.357314 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zngm6\" (UniqueName: \"kubernetes.io/projected/2388c4d5-c56c-431e-af61-4294a629c1fd-kube-api-access-zngm6\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.357329 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2388c4d5-c56c-431e-af61-4294a629c1fd-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.357337 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2388c4d5-c56c-431e-af61-4294a629c1fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.452647 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.460377 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aaeaaca-7e52-4492-9adb-2800f51d8159-logs\") pod \"nova-api-0\" (UID: \"9aaeaaca-7e52-4492-9adb-2800f51d8159\") " pod="openstack/nova-api-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.460813 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcjzw\" (UniqueName: \"kubernetes.io/projected/9aaeaaca-7e52-4492-9adb-2800f51d8159-kube-api-access-rcjzw\") pod \"nova-api-0\" (UID: 
\"9aaeaaca-7e52-4492-9adb-2800f51d8159\") " pod="openstack/nova-api-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.460968 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aaeaaca-7e52-4492-9adb-2800f51d8159-logs\") pod \"nova-api-0\" (UID: \"9aaeaaca-7e52-4492-9adb-2800f51d8159\") " pod="openstack/nova-api-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.461100 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aaeaaca-7e52-4492-9adb-2800f51d8159-config-data\") pod \"nova-api-0\" (UID: \"9aaeaaca-7e52-4492-9adb-2800f51d8159\") " pod="openstack/nova-api-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.461631 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aaeaaca-7e52-4492-9adb-2800f51d8159-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9aaeaaca-7e52-4492-9adb-2800f51d8159\") " pod="openstack/nova-api-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.463278 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.467511 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aaeaaca-7e52-4492-9adb-2800f51d8159-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9aaeaaca-7e52-4492-9adb-2800f51d8159\") " pod="openstack/nova-api-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.468708 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aaeaaca-7e52-4492-9adb-2800f51d8159-config-data\") pod \"nova-api-0\" (UID: \"9aaeaaca-7e52-4492-9adb-2800f51d8159\") " pod="openstack/nova-api-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.482815 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.484484 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.486260 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcjzw\" (UniqueName: \"kubernetes.io/projected/9aaeaaca-7e52-4492-9adb-2800f51d8159-kube-api-access-rcjzw\") pod \"nova-api-0\" (UID: \"9aaeaaca-7e52-4492-9adb-2800f51d8159\") " pod="openstack/nova-api-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.486878 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.491961 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.516986 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.664917 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn77k\" (UniqueName: \"kubernetes.io/projected/4d92db13-d607-4a17-914d-83b7e18587f3-kube-api-access-wn77k\") pod \"nova-scheduler-0\" (UID: \"4d92db13-d607-4a17-914d-83b7e18587f3\") " pod="openstack/nova-scheduler-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.665041 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d92db13-d607-4a17-914d-83b7e18587f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d92db13-d607-4a17-914d-83b7e18587f3\") " pod="openstack/nova-scheduler-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.665147 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d92db13-d607-4a17-914d-83b7e18587f3-config-data\") pod \"nova-scheduler-0\" (UID: \"4d92db13-d607-4a17-914d-83b7e18587f3\") " pod="openstack/nova-scheduler-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.766786 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d92db13-d607-4a17-914d-83b7e18587f3-config-data\") pod \"nova-scheduler-0\" (UID: \"4d92db13-d607-4a17-914d-83b7e18587f3\") " pod="openstack/nova-scheduler-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.767197 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn77k\" (UniqueName: \"kubernetes.io/projected/4d92db13-d607-4a17-914d-83b7e18587f3-kube-api-access-wn77k\") pod \"nova-scheduler-0\" (UID: \"4d92db13-d607-4a17-914d-83b7e18587f3\") " pod="openstack/nova-scheduler-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.767315 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d92db13-d607-4a17-914d-83b7e18587f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d92db13-d607-4a17-914d-83b7e18587f3\") " pod="openstack/nova-scheduler-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.772201 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d92db13-d607-4a17-914d-83b7e18587f3-config-data\") pod \"nova-scheduler-0\" (UID: \"4d92db13-d607-4a17-914d-83b7e18587f3\") " pod="openstack/nova-scheduler-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.772945 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d92db13-d607-4a17-914d-83b7e18587f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d92db13-d607-4a17-914d-83b7e18587f3\") " pod="openstack/nova-scheduler-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.785717 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn77k\" (UniqueName: \"kubernetes.io/projected/4d92db13-d607-4a17-914d-83b7e18587f3-kube-api-access-wn77k\") pod \"nova-scheduler-0\" (UID: \"4d92db13-d607-4a17-914d-83b7e18587f3\") " pod="openstack/nova-scheduler-0" Jan 08 23:40:39 crc kubenswrapper[4945]: I0108 23:40:39.809223 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 08 23:40:40 crc kubenswrapper[4945]: I0108 23:40:40.032352 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2388c4d5-c56c-431e-af61-4294a629c1fd" path="/var/lib/kubelet/pods/2388c4d5-c56c-431e-af61-4294a629c1fd/volumes" Jan 08 23:40:40 crc kubenswrapper[4945]: I0108 23:40:40.033541 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58af371a-afdf-47cc-8ac5-d78a9db4703b" path="/var/lib/kubelet/pods/58af371a-afdf-47cc-8ac5-d78a9db4703b/volumes" Jan 08 23:40:40 crc kubenswrapper[4945]: I0108 23:40:40.037205 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:40:40 crc kubenswrapper[4945]: I0108 23:40:40.139441 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9aaeaaca-7e52-4492-9adb-2800f51d8159","Type":"ContainerStarted","Data":"46f5b61fe277c6f04a7350324b96356783e1955ea5bcf083f18a5b59afbc2dd5"} Jan 08 23:40:40 crc kubenswrapper[4945]: I0108 23:40:40.303932 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 08 23:40:40 crc kubenswrapper[4945]: I0108 23:40:40.449397 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 08 23:40:40 crc kubenswrapper[4945]: I0108 23:40:40.449877 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 08 23:40:41 crc kubenswrapper[4945]: I0108 23:40:41.151711 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9aaeaaca-7e52-4492-9adb-2800f51d8159","Type":"ContainerStarted","Data":"b41f37673cf4e4eb5c572a5e9439dad2ce1f8eed476a001be39d00827e2a3dcc"} Jan 08 23:40:41 crc kubenswrapper[4945]: I0108 23:40:41.152951 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9aaeaaca-7e52-4492-9adb-2800f51d8159","Type":"ContainerStarted","Data":"c8148153186cb9e27922c0a6832ed859289e615bd5a9544f870cdee47fc6095e"} Jan 08 23:40:41 crc kubenswrapper[4945]: I0108 23:40:41.161176 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d92db13-d607-4a17-914d-83b7e18587f3","Type":"ContainerStarted","Data":"221ea1a80d7a2326ca78e7467fd6f828fc70ba0d60ee65bba335e9592cafd022"} Jan 08 23:40:41 crc kubenswrapper[4945]: I0108 23:40:41.161235 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d92db13-d607-4a17-914d-83b7e18587f3","Type":"ContainerStarted","Data":"1c0d584cd41551e7abc0686c1a36bae1fa80c6f1c6d559b802915354f131822f"} Jan 08 23:40:41 crc kubenswrapper[4945]: I0108 23:40:41.177501 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.177476411 podStartE2EDuration="2.177476411s" podCreationTimestamp="2026-01-08 23:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:40:41.170088563 +0000 UTC m=+1511.481247509" watchObservedRunningTime="2026-01-08 23:40:41.177476411 +0000 UTC m=+1511.488635367" Jan 08 23:40:41 crc kubenswrapper[4945]: I0108 23:40:41.206940 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.20691639 podStartE2EDuration="2.20691639s" podCreationTimestamp="2026-01-08 23:40:39 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:40:41.189592263 +0000 UTC m=+1511.500751209" watchObservedRunningTime="2026-01-08 23:40:41.20691639 +0000 UTC m=+1511.518075336" Jan 08 23:40:43 crc kubenswrapper[4945]: E0108 23:40:43.398545 4945 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/95a31fc6aefbca27ea1b5685960dbe5138638b0b55632337f6f73ebbe45d2087/diff" to get inode usage: stat /var/lib/containers/storage/overlay/95a31fc6aefbca27ea1b5685960dbe5138638b0b55632337f6f73ebbe45d2087/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_dnsmasq-dns-6578955fd5-x8bw5_c65a6567-6928-4df5-8b0f-ed77fefddcd8/dnsmasq-dns/0.log" to get inode usage: stat /var/log/pods/openstack_dnsmasq-dns-6578955fd5-x8bw5_c65a6567-6928-4df5-8b0f-ed77fefddcd8/dnsmasq-dns/0.log: no such file or directory Jan 08 23:40:43 crc kubenswrapper[4945]: I0108 23:40:43.578681 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:40:43 crc kubenswrapper[4945]: I0108 23:40:43.578735 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:40:44 crc kubenswrapper[4945]: I0108 23:40:44.809499 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 08 23:40:45 crc kubenswrapper[4945]: I0108 23:40:45.326039 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 08 23:40:45 crc kubenswrapper[4945]: I0108 23:40:45.449581 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 08 23:40:45 crc kubenswrapper[4945]: I0108 23:40:45.449624 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 08 23:40:45 crc kubenswrapper[4945]: I0108 23:40:45.544588 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 08 23:40:46 crc kubenswrapper[4945]: I0108 23:40:46.472907 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l7xkd"] Jan 08 23:40:46 crc kubenswrapper[4945]: I0108 23:40:46.475402 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7xkd" Jan 08 23:40:46 crc kubenswrapper[4945]: I0108 23:40:46.498087 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7xkd"] Jan 08 23:40:46 crc kubenswrapper[4945]: I0108 23:40:46.499191 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="67c81217-0b52-461b-9eaf-cefc63fbaa10" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 08 23:40:46 crc kubenswrapper[4945]: I0108 23:40:46.499520 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="67c81217-0b52-461b-9eaf-cefc63fbaa10" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 08 23:40:46 crc kubenswrapper[4945]: I0108 23:40:46.604432 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be1a1f4-22cc-4144-9f30-f8c902897d34-catalog-content\") pod \"redhat-operators-l7xkd\" (UID: \"7be1a1f4-22cc-4144-9f30-f8c902897d34\") " pod="openshift-marketplace/redhat-operators-l7xkd" Jan 08 23:40:46 crc kubenswrapper[4945]: I0108 23:40:46.604571 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmvld\" (UniqueName: \"kubernetes.io/projected/7be1a1f4-22cc-4144-9f30-f8c902897d34-kube-api-access-jmvld\") pod \"redhat-operators-l7xkd\" (UID: \"7be1a1f4-22cc-4144-9f30-f8c902897d34\") " pod="openshift-marketplace/redhat-operators-l7xkd" Jan 08 23:40:46 crc kubenswrapper[4945]: I0108 23:40:46.604648 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be1a1f4-22cc-4144-9f30-f8c902897d34-utilities\") pod \"redhat-operators-l7xkd\" (UID: \"7be1a1f4-22cc-4144-9f30-f8c902897d34\") " pod="openshift-marketplace/redhat-operators-l7xkd" Jan 08 23:40:46 crc kubenswrapper[4945]: I0108 23:40:46.705853 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmvld\" (UniqueName: \"kubernetes.io/projected/7be1a1f4-22cc-4144-9f30-f8c902897d34-kube-api-access-jmvld\") pod \"redhat-operators-l7xkd\" (UID: \"7be1a1f4-22cc-4144-9f30-f8c902897d34\") " pod="openshift-marketplace/redhat-operators-l7xkd" Jan 08 23:40:46 crc kubenswrapper[4945]: I0108 23:40:46.705957 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be1a1f4-22cc-4144-9f30-f8c902897d34-utilities\") pod \"redhat-operators-l7xkd\" (UID: \"7be1a1f4-22cc-4144-9f30-f8c902897d34\") " pod="openshift-marketplace/redhat-operators-l7xkd" Jan 08 23:40:46 crc kubenswrapper[4945]: I0108 23:40:46.706031 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be1a1f4-22cc-4144-9f30-f8c902897d34-catalog-content\") pod \"redhat-operators-l7xkd\" (UID: \"7be1a1f4-22cc-4144-9f30-f8c902897d34\") " pod="openshift-marketplace/redhat-operators-l7xkd" Jan 08 23:40:46 crc kubenswrapper[4945]: I0108 23:40:46.706443 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/7be1a1f4-22cc-4144-9f30-f8c902897d34-utilities\") pod \"redhat-operators-l7xkd\" (UID: \"7be1a1f4-22cc-4144-9f30-f8c902897d34\") " pod="openshift-marketplace/redhat-operators-l7xkd" Jan 08 23:40:46 crc kubenswrapper[4945]: I0108 23:40:46.706466 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be1a1f4-22cc-4144-9f30-f8c902897d34-catalog-content\") pod \"redhat-operators-l7xkd\" (UID: \"7be1a1f4-22cc-4144-9f30-f8c902897d34\") " pod="openshift-marketplace/redhat-operators-l7xkd" Jan 08 23:40:46 crc kubenswrapper[4945]: I0108 23:40:46.728899 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmvld\" (UniqueName: \"kubernetes.io/projected/7be1a1f4-22cc-4144-9f30-f8c902897d34-kube-api-access-jmvld\") pod \"redhat-operators-l7xkd\" (UID: \"7be1a1f4-22cc-4144-9f30-f8c902897d34\") " pod="openshift-marketplace/redhat-operators-l7xkd" Jan 08 23:40:46 crc kubenswrapper[4945]: I0108 23:40:46.811178 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7xkd" Jan 08 23:40:47 crc kubenswrapper[4945]: I0108 23:40:47.370974 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7xkd"] Jan 08 23:40:48 crc kubenswrapper[4945]: I0108 23:40:48.230476 4945 generic.go:334] "Generic (PLEG): container finished" podID="7be1a1f4-22cc-4144-9f30-f8c902897d34" containerID="6ed198c1b7b22946ae2b673cc33d58d7200c6cf1c6bb6678c80488ccc30101dd" exitCode=0 Jan 08 23:40:48 crc kubenswrapper[4945]: I0108 23:40:48.230587 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7xkd" event={"ID":"7be1a1f4-22cc-4144-9f30-f8c902897d34","Type":"ContainerDied","Data":"6ed198c1b7b22946ae2b673cc33d58d7200c6cf1c6bb6678c80488ccc30101dd"} Jan 08 23:40:48 crc kubenswrapper[4945]: I0108 23:40:48.231400 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7xkd" event={"ID":"7be1a1f4-22cc-4144-9f30-f8c902897d34","Type":"ContainerStarted","Data":"f81dc67ade2e5123921e2153733f86c6a0bd72123dcf57835ce74ca1b1abb2ef"} Jan 08 23:40:49 crc kubenswrapper[4945]: I0108 23:40:49.517340 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 08 23:40:49 crc kubenswrapper[4945]: I0108 23:40:49.517779 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 08 23:40:49 crc kubenswrapper[4945]: I0108 23:40:49.810577 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 08 23:40:49 crc kubenswrapper[4945]: I0108 23:40:49.836538 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 08 23:40:50 crc kubenswrapper[4945]: I0108 23:40:50.256427 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7xkd" event={"ID":"7be1a1f4-22cc-4144-9f30-f8c902897d34","Type":"ContainerStarted","Data":"0d1ef5671ddb5189d2675429cbc87484f6c5a48c80146dafa1ea8aec8e3c622a"} Jan 08 23:40:50 crc kubenswrapper[4945]: I0108 23:40:50.328370 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 08 23:40:50 crc kubenswrapper[4945]: I0108 23:40:50.599213 4945 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="9aaeaaca-7e52-4492-9adb-2800f51d8159" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 08 23:40:50 crc kubenswrapper[4945]: I0108 23:40:50.599213 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9aaeaaca-7e52-4492-9adb-2800f51d8159" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 08 23:40:54 crc kubenswrapper[4945]: I0108 23:40:54.300272 4945 generic.go:334] "Generic (PLEG): container finished" podID="7be1a1f4-22cc-4144-9f30-f8c902897d34" containerID="0d1ef5671ddb5189d2675429cbc87484f6c5a48c80146dafa1ea8aec8e3c622a" exitCode=0 Jan 08 23:40:54 crc kubenswrapper[4945]: I0108 23:40:54.300391 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7xkd" event={"ID":"7be1a1f4-22cc-4144-9f30-f8c902897d34","Type":"ContainerDied","Data":"0d1ef5671ddb5189d2675429cbc87484f6c5a48c80146dafa1ea8aec8e3c622a"} Jan 08 23:40:55 crc kubenswrapper[4945]: I0108 23:40:55.315474 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7xkd" event={"ID":"7be1a1f4-22cc-4144-9f30-f8c902897d34","Type":"ContainerStarted","Data":"d0f24eb2332d1aa50f7463b40da2b498b86b2724703738750841839f7db5331f"} Jan 08 23:40:55 crc kubenswrapper[4945]: I0108 23:40:55.345420 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l7xkd" podStartSLOduration=2.586190244 podStartE2EDuration="9.345388045s" podCreationTimestamp="2026-01-08 23:40:46 +0000 UTC" firstStartedPulling="2026-01-08 23:40:48.233005274 +0000 UTC m=+1518.544164220" lastFinishedPulling="2026-01-08 23:40:54.992203055 +0000 UTC m=+1525.303362021" observedRunningTime="2026-01-08 23:40:55.334556825 +0000 UTC m=+1525.645715801" watchObservedRunningTime="2026-01-08 23:40:55.345388045 +0000 UTC m=+1525.656547011" Jan 08 23:40:55 crc kubenswrapper[4945]: I0108 23:40:55.453768 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 08 23:40:55 crc kubenswrapper[4945]: I0108 23:40:55.459486 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 08 23:40:55 crc kubenswrapper[4945]: I0108 23:40:55.461348 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 08 23:40:56 crc kubenswrapper[4945]: I0108 23:40:56.329661 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 08 23:40:56 crc kubenswrapper[4945]: I0108 23:40:56.812004 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l7xkd" Jan 08 23:40:56 crc kubenswrapper[4945]: I0108 23:40:56.812369 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l7xkd" Jan 08 23:40:56 crc kubenswrapper[4945]: W0108 23:40:56.908222 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5b26c09_caee_4d0e_afd6_94a8673c9af7.slice/crio-7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa.scope WatchSource:0}: Error finding container 
7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa: Status 404 returned error can't find the container with id 7eaa95d7d85b9733449d62fe493866d6b674b0a965f78909b082161b5c5eeffa Jan 08 23:40:57 crc kubenswrapper[4945]: E0108 23:40:57.142860 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48a8edaf_32ec_42c4_9b6d_6dbe9d07ae87.slice/crio-378a57da80bf2f1c6c59e5079087cdc8a7c93ca0e0bfbc08c2704a017cde5f24.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48a8edaf_32ec_42c4_9b6d_6dbe9d07ae87.slice/crio-conmon-378a57da80bf2f1c6c59e5079087cdc8a7c93ca0e0bfbc08c2704a017cde5f24.scope\": RecentStats: unable to find data in memory cache]" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.311886 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.340226 4945 generic.go:334] "Generic (PLEG): container finished" podID="48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87" containerID="378a57da80bf2f1c6c59e5079087cdc8a7c93ca0e0bfbc08c2704a017cde5f24" exitCode=137 Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.341128 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.342032 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87","Type":"ContainerDied","Data":"378a57da80bf2f1c6c59e5079087cdc8a7c93ca0e0bfbc08c2704a017cde5f24"} Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.342112 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87","Type":"ContainerDied","Data":"12ca1750d0096bc804839897e71555d84e46b43e7c677d6439fa5813047ac67b"} Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.342139 4945 scope.go:117] "RemoveContainer" containerID="378a57da80bf2f1c6c59e5079087cdc8a7c93ca0e0bfbc08c2704a017cde5f24" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.365651 4945 scope.go:117] "RemoveContainer" containerID="378a57da80bf2f1c6c59e5079087cdc8a7c93ca0e0bfbc08c2704a017cde5f24" Jan 08 23:40:57 crc kubenswrapper[4945]: E0108 23:40:57.366173 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"378a57da80bf2f1c6c59e5079087cdc8a7c93ca0e0bfbc08c2704a017cde5f24\": container with ID starting with 378a57da80bf2f1c6c59e5079087cdc8a7c93ca0e0bfbc08c2704a017cde5f24 not found: ID does not exist" containerID="378a57da80bf2f1c6c59e5079087cdc8a7c93ca0e0bfbc08c2704a017cde5f24" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.366211 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"378a57da80bf2f1c6c59e5079087cdc8a7c93ca0e0bfbc08c2704a017cde5f24"} err="failed to get container status \"378a57da80bf2f1c6c59e5079087cdc8a7c93ca0e0bfbc08c2704a017cde5f24\": rpc error: code = NotFound desc = could not find container \"378a57da80bf2f1c6c59e5079087cdc8a7c93ca0e0bfbc08c2704a017cde5f24\": container with ID starting with 378a57da80bf2f1c6c59e5079087cdc8a7c93ca0e0bfbc08c2704a017cde5f24 not found: ID does not exist" Jan 08 23:40:57 crc kubenswrapper[4945]: 
I0108 23:40:57.453611 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6lx2\" (UniqueName: \"kubernetes.io/projected/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-kube-api-access-m6lx2\") pod \"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87\" (UID: \"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87\") " Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.453815 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-combined-ca-bundle\") pod \"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87\" (UID: \"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87\") " Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.453922 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-config-data\") pod \"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87\" (UID: \"48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87\") " Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.462048 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-kube-api-access-m6lx2" (OuterVolumeSpecName: "kube-api-access-m6lx2") pod "48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87" (UID: "48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87"). InnerVolumeSpecName "kube-api-access-m6lx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.485464 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-config-data" (OuterVolumeSpecName: "config-data") pod "48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87" (UID: "48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.506437 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87" (UID: "48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.556388 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6lx2\" (UniqueName: \"kubernetes.io/projected/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-kube-api-access-m6lx2\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.556429 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.556442 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.678657 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.692830 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.713552 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 08 23:40:57 crc kubenswrapper[4945]: E0108 23:40:57.714311 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87" containerName="nova-cell1-novncproxy-novncproxy" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.714346 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87" containerName="nova-cell1-novncproxy-novncproxy" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.714771 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87" containerName="nova-cell1-novncproxy-novncproxy" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.715922 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.718594 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.718714 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.724189 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.735960 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.857667 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l7xkd" podUID="7be1a1f4-22cc-4144-9f30-f8c902897d34" containerName="registry-server" probeResult="failure" output=< Jan 08 23:40:57 crc kubenswrapper[4945]: timeout: failed to connect service ":50051" within 1s Jan 08 23:40:57 crc kubenswrapper[4945]: > Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.863750 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.863798 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbxzg\" (UniqueName: \"kubernetes.io/projected/5ede73cf-0521-442e-8f01-b63d8d9b4725-kube-api-access-nbxzg\") pod \"nova-cell1-novncproxy-0\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.863826 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.863854 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.863937 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.965484 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.965542 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbxzg\" (UniqueName: \"kubernetes.io/projected/5ede73cf-0521-442e-8f01-b63d8d9b4725-kube-api-access-nbxzg\") pod \"nova-cell1-novncproxy-0\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.965572 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.965591 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.965676 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.970527 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.970677 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.977800 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.979349 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:57 crc kubenswrapper[4945]: I0108 23:40:57.982527 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbxzg\" (UniqueName: \"kubernetes.io/projected/5ede73cf-0521-442e-8f01-b63d8d9b4725-kube-api-access-nbxzg\") pod \"nova-cell1-novncproxy-0\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:58 crc kubenswrapper[4945]: I0108 23:40:58.012680 4945 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87" path="/var/lib/kubelet/pods/48a8edaf-32ec-42c4-9b6d-6dbe9d07ae87/volumes" Jan 08 23:40:58 crc kubenswrapper[4945]: I0108 23:40:58.049619 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:40:58 crc kubenswrapper[4945]: I0108 23:40:58.549170 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 08 23:40:58 crc kubenswrapper[4945]: W0108 23:40:58.557588 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ede73cf_0521_442e_8f01_b63d8d9b4725.slice/crio-2c6e8cda26c18989fb1201112dda148e360fdcb1f3022385c3fe2f5a5422b309 WatchSource:0}: Error finding container 2c6e8cda26c18989fb1201112dda148e360fdcb1f3022385c3fe2f5a5422b309: Status 404 returned error can't find the container with id 2c6e8cda26c18989fb1201112dda148e360fdcb1f3022385c3fe2f5a5422b309 Jan 08 23:40:59 crc kubenswrapper[4945]: I0108 23:40:59.370827 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5ede73cf-0521-442e-8f01-b63d8d9b4725","Type":"ContainerStarted","Data":"aab0536ef7d9d6c8e9048d5c7063401c083ca8fef6235e9b02f49cd7abccc975"} Jan 08 23:40:59 crc kubenswrapper[4945]: I0108 23:40:59.371109 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5ede73cf-0521-442e-8f01-b63d8d9b4725","Type":"ContainerStarted","Data":"2c6e8cda26c18989fb1201112dda148e360fdcb1f3022385c3fe2f5a5422b309"} Jan 08 23:40:59 crc kubenswrapper[4945]: I0108 23:40:59.524153 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 08 23:40:59 crc kubenswrapper[4945]: I0108 23:40:59.524679 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 08 23:40:59 crc kubenswrapper[4945]: I0108 23:40:59.530046 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 08 23:40:59 crc kubenswrapper[4945]: I0108 23:40:59.531979 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 08 23:40:59 crc kubenswrapper[4945]: I0108 23:40:59.542832 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.542818543 podStartE2EDuration="2.542818543s" podCreationTimestamp="2026-01-08 23:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:40:59.395839785 +0000 UTC m=+1529.706998731" watchObservedRunningTime="2026-01-08 23:40:59.542818543 +0000 UTC m=+1529.853977489" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.379229 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.386583 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.570424 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-l7prv"] Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.572798 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.606080 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-l7prv"] Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.721324 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.721654 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-config\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.721772 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.721891 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ftr5\" (UniqueName: \"kubernetes.io/projected/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-kube-api-access-5ftr5\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.722026 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.722138 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.823592 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.824063 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-config\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.824202 4945 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.824311 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ftr5\" (UniqueName: \"kubernetes.io/projected/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-kube-api-access-5ftr5\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.824419 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.824510 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.824859 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.825279 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-config\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.825415 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.825759 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.825891 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.875801 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ftr5\" (UniqueName: 
\"kubernetes.io/projected/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-kube-api-access-5ftr5\") pod \"dnsmasq-dns-cd5cbd7b9-l7prv\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:00 crc kubenswrapper[4945]: I0108 23:41:00.899250 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:01 crc kubenswrapper[4945]: I0108 23:41:01.448069 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-l7prv"] Jan 08 23:41:01 crc kubenswrapper[4945]: W0108 23:41:01.452012 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca0aa3d3_8093_42f1_9fa6_ad3883441ab2.slice/crio-d646ee81f78323a0da7a9968773b36ed1bb5714998cbf77cb7a7d7ad8cdaa0d1 WatchSource:0}: Error finding container d646ee81f78323a0da7a9968773b36ed1bb5714998cbf77cb7a7d7ad8cdaa0d1: Status 404 returned error can't find the container with id d646ee81f78323a0da7a9968773b36ed1bb5714998cbf77cb7a7d7ad8cdaa0d1 Jan 08 23:41:02 crc kubenswrapper[4945]: I0108 23:41:02.419908 4945 generic.go:334] "Generic (PLEG): container finished" podID="ca0aa3d3-8093-42f1-9fa6-ad3883441ab2" containerID="43e8473c2dc71816f08dca8bcc5dc8eb33185a2f8bc495c5a216898f114c816d" exitCode=0 Jan 08 23:41:02 crc kubenswrapper[4945]: I0108 23:41:02.421739 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" event={"ID":"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2","Type":"ContainerDied","Data":"43e8473c2dc71816f08dca8bcc5dc8eb33185a2f8bc495c5a216898f114c816d"} Jan 08 23:41:02 crc kubenswrapper[4945]: I0108 23:41:02.421780 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" event={"ID":"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2","Type":"ContainerStarted","Data":"d646ee81f78323a0da7a9968773b36ed1bb5714998cbf77cb7a7d7ad8cdaa0d1"} Jan 08 23:41:02 crc kubenswrapper[4945]: I0108 23:41:02.914782 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:41:02 crc kubenswrapper[4945]: I0108 23:41:02.917362 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4859737-48c6-4939-a3d3-68b93075c72d" containerName="ceilometer-central-agent" containerID="cri-o://333a69d19959184c80b55ee9b0fb39434edc6807e1e85169420316532ee164bc" gracePeriod=30 Jan 08 23:41:02 crc kubenswrapper[4945]: I0108 23:41:02.917446 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4859737-48c6-4939-a3d3-68b93075c72d" containerName="sg-core" containerID="cri-o://1d76c40a4aa55f44da2647e181c0a6ad46cab9142998f7007db19a58576c12a5" gracePeriod=30 Jan 08 23:41:02 crc kubenswrapper[4945]: I0108 23:41:02.917420 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4859737-48c6-4939-a3d3-68b93075c72d" containerName="proxy-httpd" containerID="cri-o://da9d8c40f4ee3169114a5bce502d5a14425f223027ccdca7e92648ea3155b875" gracePeriod=30 Jan 08 23:41:02 crc kubenswrapper[4945]: I0108 23:41:02.917452 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f4859737-48c6-4939-a3d3-68b93075c72d" containerName="ceilometer-notification-agent" containerID="cri-o://324efe239b9eea521b48859d9709d670b15cb4a61d082194a3b84097c7a4c315" gracePeriod=30 Jan 08 23:41:03 crc 
kubenswrapper[4945]: I0108 23:41:03.049812 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:41:03 crc kubenswrapper[4945]: I0108 23:41:03.433679 4945 generic.go:334] "Generic (PLEG): container finished" podID="f4859737-48c6-4939-a3d3-68b93075c72d" containerID="da9d8c40f4ee3169114a5bce502d5a14425f223027ccdca7e92648ea3155b875" exitCode=0 Jan 08 23:41:03 crc kubenswrapper[4945]: I0108 23:41:03.433716 4945 generic.go:334] "Generic (PLEG): container finished" podID="f4859737-48c6-4939-a3d3-68b93075c72d" containerID="1d76c40a4aa55f44da2647e181c0a6ad46cab9142998f7007db19a58576c12a5" exitCode=2 Jan 08 23:41:03 crc kubenswrapper[4945]: I0108 23:41:03.433768 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4859737-48c6-4939-a3d3-68b93075c72d","Type":"ContainerDied","Data":"da9d8c40f4ee3169114a5bce502d5a14425f223027ccdca7e92648ea3155b875"} Jan 08 23:41:03 crc kubenswrapper[4945]: I0108 23:41:03.433854 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4859737-48c6-4939-a3d3-68b93075c72d","Type":"ContainerDied","Data":"1d76c40a4aa55f44da2647e181c0a6ad46cab9142998f7007db19a58576c12a5"} Jan 08 23:41:03 crc kubenswrapper[4945]: I0108 23:41:03.435832 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" event={"ID":"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2","Type":"ContainerStarted","Data":"6439494f8f4be73fcba62cb9f436505c4882f4e970c1fb7590409804556a3683"} Jan 08 23:41:03 crc kubenswrapper[4945]: I0108 23:41:03.459067 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" podStartSLOduration=3.459033573 podStartE2EDuration="3.459033573s" podCreationTimestamp="2026-01-08 23:41:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:41:03.454248748 +0000 UTC m=+1533.765407704" watchObservedRunningTime="2026-01-08 23:41:03.459033573 +0000 UTC m=+1533.770192519" Jan 08 23:41:04 crc kubenswrapper[4945]: I0108 23:41:04.224744 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:41:04 crc kubenswrapper[4945]: I0108 23:41:04.225400 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9aaeaaca-7e52-4492-9adb-2800f51d8159" containerName="nova-api-log" containerID="cri-o://c8148153186cb9e27922c0a6832ed859289e615bd5a9544f870cdee47fc6095e" gracePeriod=30 Jan 08 23:41:04 crc kubenswrapper[4945]: I0108 23:41:04.225705 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9aaeaaca-7e52-4492-9adb-2800f51d8159" containerName="nova-api-api" containerID="cri-o://b41f37673cf4e4eb5c572a5e9439dad2ce1f8eed476a001be39d00827e2a3dcc" gracePeriod=30 Jan 08 23:41:04 crc kubenswrapper[4945]: I0108 23:41:04.446881 4945 generic.go:334] "Generic (PLEG): container finished" podID="f4859737-48c6-4939-a3d3-68b93075c72d" containerID="333a69d19959184c80b55ee9b0fb39434edc6807e1e85169420316532ee164bc" exitCode=0 Jan 08 23:41:04 crc kubenswrapper[4945]: I0108 23:41:04.446951 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4859737-48c6-4939-a3d3-68b93075c72d","Type":"ContainerDied","Data":"333a69d19959184c80b55ee9b0fb39434edc6807e1e85169420316532ee164bc"} Jan 08 23:41:04 crc 
kubenswrapper[4945]: I0108 23:41:04.450320 4945 generic.go:334] "Generic (PLEG): container finished" podID="9aaeaaca-7e52-4492-9adb-2800f51d8159" containerID="c8148153186cb9e27922c0a6832ed859289e615bd5a9544f870cdee47fc6095e" exitCode=143 Jan 08 23:41:04 crc kubenswrapper[4945]: I0108 23:41:04.450359 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9aaeaaca-7e52-4492-9adb-2800f51d8159","Type":"ContainerDied","Data":"c8148153186cb9e27922c0a6832ed859289e615bd5a9544f870cdee47fc6095e"} Jan 08 23:41:04 crc kubenswrapper[4945]: I0108 23:41:04.450636 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:06 crc kubenswrapper[4945]: I0108 23:41:06.870532 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l7xkd" Jan 08 23:41:06 crc kubenswrapper[4945]: I0108 23:41:06.918525 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l7xkd" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.062763 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.119632 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7xkd"] Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.184057 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-combined-ca-bundle\") pod \"f4859737-48c6-4939-a3d3-68b93075c72d\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.184129 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4859737-48c6-4939-a3d3-68b93075c72d-log-httpd\") pod \"f4859737-48c6-4939-a3d3-68b93075c72d\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.184199 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-ceilometer-tls-certs\") pod \"f4859737-48c6-4939-a3d3-68b93075c72d\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.184474 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4859737-48c6-4939-a3d3-68b93075c72d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f4859737-48c6-4939-a3d3-68b93075c72d" (UID: "f4859737-48c6-4939-a3d3-68b93075c72d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.184971 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdxhc\" (UniqueName: \"kubernetes.io/projected/f4859737-48c6-4939-a3d3-68b93075c72d-kube-api-access-fdxhc\") pod \"f4859737-48c6-4939-a3d3-68b93075c72d\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.185023 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-config-data\") pod \"f4859737-48c6-4939-a3d3-68b93075c72d\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.185087 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4859737-48c6-4939-a3d3-68b93075c72d-run-httpd\") pod \"f4859737-48c6-4939-a3d3-68b93075c72d\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.185227 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-sg-core-conf-yaml\") pod \"f4859737-48c6-4939-a3d3-68b93075c72d\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.185270 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-scripts\") pod \"f4859737-48c6-4939-a3d3-68b93075c72d\" (UID: \"f4859737-48c6-4939-a3d3-68b93075c72d\") " Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.185572 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4859737-48c6-4939-a3d3-68b93075c72d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f4859737-48c6-4939-a3d3-68b93075c72d" (UID: "f4859737-48c6-4939-a3d3-68b93075c72d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.186042 4945 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4859737-48c6-4939-a3d3-68b93075c72d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.186058 4945 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f4859737-48c6-4939-a3d3-68b93075c72d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.197141 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-scripts" (OuterVolumeSpecName: "scripts") pod "f4859737-48c6-4939-a3d3-68b93075c72d" (UID: "f4859737-48c6-4939-a3d3-68b93075c72d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.203255 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4859737-48c6-4939-a3d3-68b93075c72d-kube-api-access-fdxhc" (OuterVolumeSpecName: "kube-api-access-fdxhc") pod "f4859737-48c6-4939-a3d3-68b93075c72d" (UID: "f4859737-48c6-4939-a3d3-68b93075c72d"). 
InnerVolumeSpecName "kube-api-access-fdxhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.217101 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f4859737-48c6-4939-a3d3-68b93075c72d" (UID: "f4859737-48c6-4939-a3d3-68b93075c72d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.261621 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f4859737-48c6-4939-a3d3-68b93075c72d" (UID: "f4859737-48c6-4939-a3d3-68b93075c72d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.266926 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4859737-48c6-4939-a3d3-68b93075c72d" (UID: "f4859737-48c6-4939-a3d3-68b93075c72d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.287342 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.287372 4945 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.287382 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdxhc\" (UniqueName: \"kubernetes.io/projected/f4859737-48c6-4939-a3d3-68b93075c72d-kube-api-access-fdxhc\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.287392 4945 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.287401 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.303303 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-config-data" (OuterVolumeSpecName: "config-data") pod "f4859737-48c6-4939-a3d3-68b93075c72d" (UID: "f4859737-48c6-4939-a3d3-68b93075c72d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.389200 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4859737-48c6-4939-a3d3-68b93075c72d-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:07 crc kubenswrapper[4945]: E0108 23:41:07.417904 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9aaeaaca_7e52_4492_9adb_2800f51d8159.slice/crio-b41f37673cf4e4eb5c572a5e9439dad2ce1f8eed476a001be39d00827e2a3dcc.scope\": RecentStats: unable to find data in memory cache]" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.479663 4945 generic.go:334] "Generic (PLEG): container finished" podID="9aaeaaca-7e52-4492-9adb-2800f51d8159" containerID="b41f37673cf4e4eb5c572a5e9439dad2ce1f8eed476a001be39d00827e2a3dcc" exitCode=0 Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.479737 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9aaeaaca-7e52-4492-9adb-2800f51d8159","Type":"ContainerDied","Data":"b41f37673cf4e4eb5c572a5e9439dad2ce1f8eed476a001be39d00827e2a3dcc"} Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.483133 4945 generic.go:334] "Generic (PLEG): container finished" podID="f4859737-48c6-4939-a3d3-68b93075c72d" containerID="324efe239b9eea521b48859d9709d670b15cb4a61d082194a3b84097c7a4c315" exitCode=0 Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.483204 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.483209 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4859737-48c6-4939-a3d3-68b93075c72d","Type":"ContainerDied","Data":"324efe239b9eea521b48859d9709d670b15cb4a61d082194a3b84097c7a4c315"} Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.483348 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f4859737-48c6-4939-a3d3-68b93075c72d","Type":"ContainerDied","Data":"6e566e558630dfb3f88dbf56f8a11b92b2c028436475318eb8f8cee1eb25f9b0"} Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.483388 4945 scope.go:117] "RemoveContainer" containerID="da9d8c40f4ee3169114a5bce502d5a14425f223027ccdca7e92648ea3155b875" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.516361 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.517224 4945 scope.go:117] "RemoveContainer" containerID="1d76c40a4aa55f44da2647e181c0a6ad46cab9142998f7007db19a58576c12a5" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.526044 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.546802 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:41:07 crc kubenswrapper[4945]: E0108 23:41:07.547642 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4859737-48c6-4939-a3d3-68b93075c72d" containerName="ceilometer-notification-agent" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.547662 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4859737-48c6-4939-a3d3-68b93075c72d" containerName="ceilometer-notification-agent" Jan 08 23:41:07 crc 
kubenswrapper[4945]: E0108 23:41:07.547677 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4859737-48c6-4939-a3d3-68b93075c72d" containerName="ceilometer-central-agent" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.547683 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4859737-48c6-4939-a3d3-68b93075c72d" containerName="ceilometer-central-agent" Jan 08 23:41:07 crc kubenswrapper[4945]: E0108 23:41:07.547697 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4859737-48c6-4939-a3d3-68b93075c72d" containerName="sg-core" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.547704 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4859737-48c6-4939-a3d3-68b93075c72d" containerName="sg-core" Jan 08 23:41:07 crc kubenswrapper[4945]: E0108 23:41:07.547729 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4859737-48c6-4939-a3d3-68b93075c72d" containerName="proxy-httpd" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.547735 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4859737-48c6-4939-a3d3-68b93075c72d" containerName="proxy-httpd" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.547942 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4859737-48c6-4939-a3d3-68b93075c72d" containerName="ceilometer-notification-agent" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.547959 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4859737-48c6-4939-a3d3-68b93075c72d" containerName="sg-core" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.547971 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4859737-48c6-4939-a3d3-68b93075c72d" containerName="proxy-httpd" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.548009 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4859737-48c6-4939-a3d3-68b93075c72d" containerName="ceilometer-central-agent" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.550233 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.553099 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.554271 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.554418 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.554782 4945 scope.go:117] "RemoveContainer" containerID="324efe239b9eea521b48859d9709d670b15cb4a61d082194a3b84097c7a4c315" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.561258 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.593957 4945 scope.go:117] "RemoveContainer" containerID="333a69d19959184c80b55ee9b0fb39434edc6807e1e85169420316532ee164bc" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.620746 4945 scope.go:117] "RemoveContainer" containerID="da9d8c40f4ee3169114a5bce502d5a14425f223027ccdca7e92648ea3155b875" Jan 08 23:41:07 crc kubenswrapper[4945]: E0108 23:41:07.622584 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da9d8c40f4ee3169114a5bce502d5a14425f223027ccdca7e92648ea3155b875\": container with ID starting with da9d8c40f4ee3169114a5bce502d5a14425f223027ccdca7e92648ea3155b875 not found: ID does not exist" containerID="da9d8c40f4ee3169114a5bce502d5a14425f223027ccdca7e92648ea3155b875" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.622640 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da9d8c40f4ee3169114a5bce502d5a14425f223027ccdca7e92648ea3155b875"} err="failed to get container status \"da9d8c40f4ee3169114a5bce502d5a14425f223027ccdca7e92648ea3155b875\": rpc error: code = NotFound desc = could not find container \"da9d8c40f4ee3169114a5bce502d5a14425f223027ccdca7e92648ea3155b875\": container with ID starting with da9d8c40f4ee3169114a5bce502d5a14425f223027ccdca7e92648ea3155b875 not found: ID does not exist" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.622675 4945 scope.go:117] "RemoveContainer" containerID="1d76c40a4aa55f44da2647e181c0a6ad46cab9142998f7007db19a58576c12a5" Jan 08 23:41:07 crc kubenswrapper[4945]: E0108 23:41:07.623064 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d76c40a4aa55f44da2647e181c0a6ad46cab9142998f7007db19a58576c12a5\": container with ID starting with 1d76c40a4aa55f44da2647e181c0a6ad46cab9142998f7007db19a58576c12a5 not found: ID does not exist" containerID="1d76c40a4aa55f44da2647e181c0a6ad46cab9142998f7007db19a58576c12a5" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.623097 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d76c40a4aa55f44da2647e181c0a6ad46cab9142998f7007db19a58576c12a5"} err="failed to get container status \"1d76c40a4aa55f44da2647e181c0a6ad46cab9142998f7007db19a58576c12a5\": rpc error: code = NotFound desc = could not find container \"1d76c40a4aa55f44da2647e181c0a6ad46cab9142998f7007db19a58576c12a5\": container with ID starting with 1d76c40a4aa55f44da2647e181c0a6ad46cab9142998f7007db19a58576c12a5 not found: ID does not exist" Jan 08 23:41:07 
crc kubenswrapper[4945]: I0108 23:41:07.623113 4945 scope.go:117] "RemoveContainer" containerID="324efe239b9eea521b48859d9709d670b15cb4a61d082194a3b84097c7a4c315" Jan 08 23:41:07 crc kubenswrapper[4945]: E0108 23:41:07.623452 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"324efe239b9eea521b48859d9709d670b15cb4a61d082194a3b84097c7a4c315\": container with ID starting with 324efe239b9eea521b48859d9709d670b15cb4a61d082194a3b84097c7a4c315 not found: ID does not exist" containerID="324efe239b9eea521b48859d9709d670b15cb4a61d082194a3b84097c7a4c315" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.623478 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324efe239b9eea521b48859d9709d670b15cb4a61d082194a3b84097c7a4c315"} err="failed to get container status \"324efe239b9eea521b48859d9709d670b15cb4a61d082194a3b84097c7a4c315\": rpc error: code = NotFound desc = could not find container \"324efe239b9eea521b48859d9709d670b15cb4a61d082194a3b84097c7a4c315\": container with ID starting with 324efe239b9eea521b48859d9709d670b15cb4a61d082194a3b84097c7a4c315 not found: ID does not exist" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.623493 4945 scope.go:117] "RemoveContainer" containerID="333a69d19959184c80b55ee9b0fb39434edc6807e1e85169420316532ee164bc" Jan 08 23:41:07 crc kubenswrapper[4945]: E0108 23:41:07.625137 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"333a69d19959184c80b55ee9b0fb39434edc6807e1e85169420316532ee164bc\": container with ID starting with 333a69d19959184c80b55ee9b0fb39434edc6807e1e85169420316532ee164bc not found: ID does not exist" containerID="333a69d19959184c80b55ee9b0fb39434edc6807e1e85169420316532ee164bc" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.625176 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"333a69d19959184c80b55ee9b0fb39434edc6807e1e85169420316532ee164bc"} err="failed to get container status \"333a69d19959184c80b55ee9b0fb39434edc6807e1e85169420316532ee164bc\": rpc error: code = NotFound desc = could not find container \"333a69d19959184c80b55ee9b0fb39434edc6807e1e85169420316532ee164bc\": container with ID starting with 333a69d19959184c80b55ee9b0fb39434edc6807e1e85169420316532ee164bc not found: ID does not exist" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.696096 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9674718-110d-4241-a199-9663979defde-log-httpd\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.696440 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-scripts\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.696523 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-config-data\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 
08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.696560 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.696658 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.696766 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7ng2\" (UniqueName: \"kubernetes.io/projected/c9674718-110d-4241-a199-9663979defde-kube-api-access-m7ng2\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.696838 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:07 crc kubenswrapper[4945]: I0108 23:41:07.697172 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9674718-110d-4241-a199-9663979defde-run-httpd\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.798890 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9674718-110d-4241-a199-9663979defde-run-httpd\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.798946 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9674718-110d-4241-a199-9663979defde-log-httpd\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.798984 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-scripts\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.799076 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-config-data\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.799165 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.799215 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.799295 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7ng2\" (UniqueName: \"kubernetes.io/projected/c9674718-110d-4241-a199-9663979defde-kube-api-access-m7ng2\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.799336 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.799536 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9674718-110d-4241-a199-9663979defde-run-httpd\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.802363 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9674718-110d-4241-a199-9663979defde-log-httpd\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.807449 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.808118 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-config-data\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.808733 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-scripts\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.819753 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.824300 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7ng2\" (UniqueName: \"kubernetes.io/projected/c9674718-110d-4241-a199-9663979defde-kube-api-access-m7ng2\") pod \"ceilometer-0\" (UID: 
\"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.831942 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.870723 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:07.967880 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.036070 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4859737-48c6-4939-a3d3-68b93075c72d" path="/var/lib/kubelet/pods/f4859737-48c6-4939-a3d3-68b93075c72d/volumes" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.050747 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.072117 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.113475 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aaeaaca-7e52-4492-9adb-2800f51d8159-logs\") pod \"9aaeaaca-7e52-4492-9adb-2800f51d8159\" (UID: \"9aaeaaca-7e52-4492-9adb-2800f51d8159\") " Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.113555 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcjzw\" (UniqueName: \"kubernetes.io/projected/9aaeaaca-7e52-4492-9adb-2800f51d8159-kube-api-access-rcjzw\") pod \"9aaeaaca-7e52-4492-9adb-2800f51d8159\" (UID: \"9aaeaaca-7e52-4492-9adb-2800f51d8159\") " Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.113588 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aaeaaca-7e52-4492-9adb-2800f51d8159-combined-ca-bundle\") pod \"9aaeaaca-7e52-4492-9adb-2800f51d8159\" (UID: \"9aaeaaca-7e52-4492-9adb-2800f51d8159\") " Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.113630 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aaeaaca-7e52-4492-9adb-2800f51d8159-config-data\") pod \"9aaeaaca-7e52-4492-9adb-2800f51d8159\" (UID: \"9aaeaaca-7e52-4492-9adb-2800f51d8159\") " Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.114741 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aaeaaca-7e52-4492-9adb-2800f51d8159-logs" (OuterVolumeSpecName: "logs") pod "9aaeaaca-7e52-4492-9adb-2800f51d8159" (UID: "9aaeaaca-7e52-4492-9adb-2800f51d8159"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.119074 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aaeaaca-7e52-4492-9adb-2800f51d8159-kube-api-access-rcjzw" (OuterVolumeSpecName: "kube-api-access-rcjzw") pod "9aaeaaca-7e52-4492-9adb-2800f51d8159" (UID: "9aaeaaca-7e52-4492-9adb-2800f51d8159"). 
InnerVolumeSpecName "kube-api-access-rcjzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.144232 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aaeaaca-7e52-4492-9adb-2800f51d8159-config-data" (OuterVolumeSpecName: "config-data") pod "9aaeaaca-7e52-4492-9adb-2800f51d8159" (UID: "9aaeaaca-7e52-4492-9adb-2800f51d8159"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.154430 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aaeaaca-7e52-4492-9adb-2800f51d8159-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9aaeaaca-7e52-4492-9adb-2800f51d8159" (UID: "9aaeaaca-7e52-4492-9adb-2800f51d8159"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.216663 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcjzw\" (UniqueName: \"kubernetes.io/projected/9aaeaaca-7e52-4492-9adb-2800f51d8159-kube-api-access-rcjzw\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.216703 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aaeaaca-7e52-4492-9adb-2800f51d8159-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.216718 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aaeaaca-7e52-4492-9adb-2800f51d8159-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.216734 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aaeaaca-7e52-4492-9adb-2800f51d8159-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.497946 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9aaeaaca-7e52-4492-9adb-2800f51d8159","Type":"ContainerDied","Data":"46f5b61fe277c6f04a7350324b96356783e1955ea5bcf083f18a5b59afbc2dd5"} Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.498033 4945 scope.go:117] "RemoveContainer" containerID="b41f37673cf4e4eb5c572a5e9439dad2ce1f8eed476a001be39d00827e2a3dcc" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.498134 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.509260 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l7xkd" podUID="7be1a1f4-22cc-4144-9f30-f8c902897d34" containerName="registry-server" containerID="cri-o://d0f24eb2332d1aa50f7463b40da2b498b86b2724703738750841839f7db5331f" gracePeriod=2 Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.544396 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.558665 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.562377 4945 scope.go:117] "RemoveContainer" containerID="c8148153186cb9e27922c0a6832ed859289e615bd5a9544f870cdee47fc6095e" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.593799 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.624756 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 08 23:41:08 crc kubenswrapper[4945]: E0108 23:41:08.625327 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aaeaaca-7e52-4492-9adb-2800f51d8159" containerName="nova-api-api" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.625345 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aaeaaca-7e52-4492-9adb-2800f51d8159" containerName="nova-api-api" Jan 08 23:41:08 crc kubenswrapper[4945]: E0108 23:41:08.625361 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aaeaaca-7e52-4492-9adb-2800f51d8159" containerName="nova-api-log" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.625366 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aaeaaca-7e52-4492-9adb-2800f51d8159" containerName="nova-api-log" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.626392 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aaeaaca-7e52-4492-9adb-2800f51d8159" containerName="nova-api-api" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.626418 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aaeaaca-7e52-4492-9adb-2800f51d8159" containerName="nova-api-log" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.627480 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.631856 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.632201 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.632341 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.698844 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.729748 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-logs\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.729841 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.729887 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-public-tls-certs\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.729932 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-config-data\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.729969 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nvml\" (UniqueName: \"kubernetes.io/projected/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-kube-api-access-4nvml\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.730057 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.738445 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:41:08 crc kubenswrapper[4945]: W0108 23:41:08.739516 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9674718_110d_4241_a199_9663979defde.slice/crio-9158d97ce2ecd82a7716775d25c8d38a54ff8ef920891073ddf84d4c777c8621 WatchSource:0}: Error finding container 9158d97ce2ecd82a7716775d25c8d38a54ff8ef920891073ddf84d4c777c8621: Status 404 returned error can't find the container with id 
9158d97ce2ecd82a7716775d25c8d38a54ff8ef920891073ddf84d4c777c8621 Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.831899 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.832051 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-public-tls-certs\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.832150 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-config-data\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.832227 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nvml\" (UniqueName: \"kubernetes.io/projected/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-kube-api-access-4nvml\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.832291 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.832373 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-logs\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.833137 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-logs\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.846860 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.847409 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-public-tls-certs\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.849888 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 
08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.852128 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-config-data\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.858047 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nvml\" (UniqueName: \"kubernetes.io/projected/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-kube-api-access-4nvml\") pod \"nova-api-0\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " pod="openstack/nova-api-0" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.896014 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-qpx88"] Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.897270 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qpx88" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.902980 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.903205 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 08 23:41:08 crc kubenswrapper[4945]: I0108 23:41:08.914124 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qpx88"] Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.036564 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgtxx\" (UniqueName: \"kubernetes.io/projected/6dd481cc-b375-4a3a-b41c-2690888844e6-kube-api-access-dgtxx\") pod \"nova-cell1-cell-mapping-qpx88\" (UID: \"6dd481cc-b375-4a3a-b41c-2690888844e6\") " pod="openstack/nova-cell1-cell-mapping-qpx88" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.037042 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qpx88\" (UID: \"6dd481cc-b375-4a3a-b41c-2690888844e6\") " pod="openstack/nova-cell1-cell-mapping-qpx88" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.037153 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-scripts\") pod \"nova-cell1-cell-mapping-qpx88\" (UID: \"6dd481cc-b375-4a3a-b41c-2690888844e6\") " pod="openstack/nova-cell1-cell-mapping-qpx88" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.037187 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-config-data\") pod \"nova-cell1-cell-mapping-qpx88\" (UID: \"6dd481cc-b375-4a3a-b41c-2690888844e6\") " pod="openstack/nova-cell1-cell-mapping-qpx88" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.080848 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.123541 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7xkd" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.138632 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-config-data\") pod \"nova-cell1-cell-mapping-qpx88\" (UID: \"6dd481cc-b375-4a3a-b41c-2690888844e6\") " pod="openstack/nova-cell1-cell-mapping-qpx88" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.138733 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgtxx\" (UniqueName: \"kubernetes.io/projected/6dd481cc-b375-4a3a-b41c-2690888844e6-kube-api-access-dgtxx\") pod \"nova-cell1-cell-mapping-qpx88\" (UID: \"6dd481cc-b375-4a3a-b41c-2690888844e6\") " pod="openstack/nova-cell1-cell-mapping-qpx88" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.138798 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qpx88\" (UID: \"6dd481cc-b375-4a3a-b41c-2690888844e6\") " pod="openstack/nova-cell1-cell-mapping-qpx88" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.138892 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-scripts\") pod \"nova-cell1-cell-mapping-qpx88\" (UID: \"6dd481cc-b375-4a3a-b41c-2690888844e6\") " pod="openstack/nova-cell1-cell-mapping-qpx88" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.142982 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-scripts\") pod \"nova-cell1-cell-mapping-qpx88\" (UID: \"6dd481cc-b375-4a3a-b41c-2690888844e6\") " pod="openstack/nova-cell1-cell-mapping-qpx88" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.147634 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-config-data\") pod \"nova-cell1-cell-mapping-qpx88\" (UID: \"6dd481cc-b375-4a3a-b41c-2690888844e6\") " pod="openstack/nova-cell1-cell-mapping-qpx88" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.152054 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qpx88\" (UID: \"6dd481cc-b375-4a3a-b41c-2690888844e6\") " pod="openstack/nova-cell1-cell-mapping-qpx88" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.169321 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgtxx\" (UniqueName: \"kubernetes.io/projected/6dd481cc-b375-4a3a-b41c-2690888844e6-kube-api-access-dgtxx\") pod \"nova-cell1-cell-mapping-qpx88\" (UID: \"6dd481cc-b375-4a3a-b41c-2690888844e6\") " pod="openstack/nova-cell1-cell-mapping-qpx88" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.239808 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be1a1f4-22cc-4144-9f30-f8c902897d34-catalog-content\") pod \"7be1a1f4-22cc-4144-9f30-f8c902897d34\" (UID: \"7be1a1f4-22cc-4144-9f30-f8c902897d34\") " Jan 08 23:41:09 crc kubenswrapper[4945]: 
I0108 23:41:09.239887 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be1a1f4-22cc-4144-9f30-f8c902897d34-utilities\") pod \"7be1a1f4-22cc-4144-9f30-f8c902897d34\" (UID: \"7be1a1f4-22cc-4144-9f30-f8c902897d34\") " Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.239929 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmvld\" (UniqueName: \"kubernetes.io/projected/7be1a1f4-22cc-4144-9f30-f8c902897d34-kube-api-access-jmvld\") pod \"7be1a1f4-22cc-4144-9f30-f8c902897d34\" (UID: \"7be1a1f4-22cc-4144-9f30-f8c902897d34\") " Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.257287 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be1a1f4-22cc-4144-9f30-f8c902897d34-kube-api-access-jmvld" (OuterVolumeSpecName: "kube-api-access-jmvld") pod "7be1a1f4-22cc-4144-9f30-f8c902897d34" (UID: "7be1a1f4-22cc-4144-9f30-f8c902897d34"). InnerVolumeSpecName "kube-api-access-jmvld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.257773 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qpx88" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.267224 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be1a1f4-22cc-4144-9f30-f8c902897d34-utilities" (OuterVolumeSpecName: "utilities") pod "7be1a1f4-22cc-4144-9f30-f8c902897d34" (UID: "7be1a1f4-22cc-4144-9f30-f8c902897d34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.346428 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be1a1f4-22cc-4144-9f30-f8c902897d34-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.346471 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmvld\" (UniqueName: \"kubernetes.io/projected/7be1a1f4-22cc-4144-9f30-f8c902897d34-kube-api-access-jmvld\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.373175 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be1a1f4-22cc-4144-9f30-f8c902897d34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7be1a1f4-22cc-4144-9f30-f8c902897d34" (UID: "7be1a1f4-22cc-4144-9f30-f8c902897d34"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.448942 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be1a1f4-22cc-4144-9f30-f8c902897d34-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.517264 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9674718-110d-4241-a199-9663979defde","Type":"ContainerStarted","Data":"9158d97ce2ecd82a7716775d25c8d38a54ff8ef920891073ddf84d4c777c8621"} Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.520474 4945 generic.go:334] "Generic (PLEG): container finished" podID="7be1a1f4-22cc-4144-9f30-f8c902897d34" containerID="d0f24eb2332d1aa50f7463b40da2b498b86b2724703738750841839f7db5331f" exitCode=0 Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.520543 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7xkd" event={"ID":"7be1a1f4-22cc-4144-9f30-f8c902897d34","Type":"ContainerDied","Data":"d0f24eb2332d1aa50f7463b40da2b498b86b2724703738750841839f7db5331f"} Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.520573 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7xkd" event={"ID":"7be1a1f4-22cc-4144-9f30-f8c902897d34","Type":"ContainerDied","Data":"f81dc67ade2e5123921e2153733f86c6a0bd72123dcf57835ce74ca1b1abb2ef"} Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.520589 4945 scope.go:117] "RemoveContainer" containerID="d0f24eb2332d1aa50f7463b40da2b498b86b2724703738750841839f7db5331f" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.520695 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7xkd" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.576266 4945 scope.go:117] "RemoveContainer" containerID="0d1ef5671ddb5189d2675429cbc87484f6c5a48c80146dafa1ea8aec8e3c622a" Jan 08 23:41:09 crc kubenswrapper[4945]: W0108 23:41:09.591923 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ca4c803_d7f2_442e_9f8c_8376c62a8d2c.slice/crio-30bb3a019f6496b1a9d713610a881c0bdd0f883b67f0928d73107ebde0f644f3 WatchSource:0}: Error finding container 30bb3a019f6496b1a9d713610a881c0bdd0f883b67f0928d73107ebde0f644f3: Status 404 returned error can't find the container with id 30bb3a019f6496b1a9d713610a881c0bdd0f883b67f0928d73107ebde0f644f3 Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.595217 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.608289 4945 scope.go:117] "RemoveContainer" containerID="6ed198c1b7b22946ae2b673cc33d58d7200c6cf1c6bb6678c80488ccc30101dd" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.623887 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7xkd"] Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.639048 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l7xkd"] Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.646770 4945 scope.go:117] "RemoveContainer" containerID="d0f24eb2332d1aa50f7463b40da2b498b86b2724703738750841839f7db5331f" Jan 08 23:41:09 crc kubenswrapper[4945]: E0108 23:41:09.647377 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0f24eb2332d1aa50f7463b40da2b498b86b2724703738750841839f7db5331f\": container with ID starting with d0f24eb2332d1aa50f7463b40da2b498b86b2724703738750841839f7db5331f not found: ID does not exist" containerID="d0f24eb2332d1aa50f7463b40da2b498b86b2724703738750841839f7db5331f" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.647431 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0f24eb2332d1aa50f7463b40da2b498b86b2724703738750841839f7db5331f"} err="failed to get container status \"d0f24eb2332d1aa50f7463b40da2b498b86b2724703738750841839f7db5331f\": rpc error: code = NotFound desc = could not find container \"d0f24eb2332d1aa50f7463b40da2b498b86b2724703738750841839f7db5331f\": container with ID starting with d0f24eb2332d1aa50f7463b40da2b498b86b2724703738750841839f7db5331f not found: ID does not exist" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.647475 4945 scope.go:117] "RemoveContainer" containerID="0d1ef5671ddb5189d2675429cbc87484f6c5a48c80146dafa1ea8aec8e3c622a" Jan 08 23:41:09 crc kubenswrapper[4945]: E0108 23:41:09.647815 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d1ef5671ddb5189d2675429cbc87484f6c5a48c80146dafa1ea8aec8e3c622a\": container with ID starting with 0d1ef5671ddb5189d2675429cbc87484f6c5a48c80146dafa1ea8aec8e3c622a not found: ID does not exist" containerID="0d1ef5671ddb5189d2675429cbc87484f6c5a48c80146dafa1ea8aec8e3c622a" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.647858 4945 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0d1ef5671ddb5189d2675429cbc87484f6c5a48c80146dafa1ea8aec8e3c622a"} err="failed to get container status \"0d1ef5671ddb5189d2675429cbc87484f6c5a48c80146dafa1ea8aec8e3c622a\": rpc error: code = NotFound desc = could not find container \"0d1ef5671ddb5189d2675429cbc87484f6c5a48c80146dafa1ea8aec8e3c622a\": container with ID starting with 0d1ef5671ddb5189d2675429cbc87484f6c5a48c80146dafa1ea8aec8e3c622a not found: ID does not exist" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.647897 4945 scope.go:117] "RemoveContainer" containerID="6ed198c1b7b22946ae2b673cc33d58d7200c6cf1c6bb6678c80488ccc30101dd" Jan 08 23:41:09 crc kubenswrapper[4945]: E0108 23:41:09.648412 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ed198c1b7b22946ae2b673cc33d58d7200c6cf1c6bb6678c80488ccc30101dd\": container with ID starting with 6ed198c1b7b22946ae2b673cc33d58d7200c6cf1c6bb6678c80488ccc30101dd not found: ID does not exist" containerID="6ed198c1b7b22946ae2b673cc33d58d7200c6cf1c6bb6678c80488ccc30101dd" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.648442 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ed198c1b7b22946ae2b673cc33d58d7200c6cf1c6bb6678c80488ccc30101dd"} err="failed to get container status \"6ed198c1b7b22946ae2b673cc33d58d7200c6cf1c6bb6678c80488ccc30101dd\": rpc error: code = NotFound desc = could not find container \"6ed198c1b7b22946ae2b673cc33d58d7200c6cf1c6bb6678c80488ccc30101dd\": container with ID starting with 6ed198c1b7b22946ae2b673cc33d58d7200c6cf1c6bb6678c80488ccc30101dd not found: ID does not exist" Jan 08 23:41:09 crc kubenswrapper[4945]: I0108 23:41:09.962774 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qpx88"] Jan 08 23:41:09 crc kubenswrapper[4945]: W0108 23:41:09.964368 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6dd481cc_b375_4a3a_b41c_2690888844e6.slice/crio-a5ef8eb5d3517f55d267bbe0cc5c2c62e125d3a2849b46c78f805afe64333eaa WatchSource:0}: Error finding container a5ef8eb5d3517f55d267bbe0cc5c2c62e125d3a2849b46c78f805afe64333eaa: Status 404 returned error can't find the container with id a5ef8eb5d3517f55d267bbe0cc5c2c62e125d3a2849b46c78f805afe64333eaa Jan 08 23:41:10 crc kubenswrapper[4945]: I0108 23:41:10.013157 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be1a1f4-22cc-4144-9f30-f8c902897d34" path="/var/lib/kubelet/pods/7be1a1f4-22cc-4144-9f30-f8c902897d34/volumes" Jan 08 23:41:10 crc kubenswrapper[4945]: I0108 23:41:10.014411 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aaeaaca-7e52-4492-9adb-2800f51d8159" path="/var/lib/kubelet/pods/9aaeaaca-7e52-4492-9adb-2800f51d8159/volumes" Jan 08 23:41:10 crc kubenswrapper[4945]: I0108 23:41:10.541730 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qpx88" event={"ID":"6dd481cc-b375-4a3a-b41c-2690888844e6","Type":"ContainerStarted","Data":"8082cabfd87114362cc4ea66d61572fa084a1efc7431b0bed7df7375b3fc0b20"} Jan 08 23:41:10 crc kubenswrapper[4945]: I0108 23:41:10.542197 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qpx88" event={"ID":"6dd481cc-b375-4a3a-b41c-2690888844e6","Type":"ContainerStarted","Data":"a5ef8eb5d3517f55d267bbe0cc5c2c62e125d3a2849b46c78f805afe64333eaa"} Jan 08 23:41:10 
crc kubenswrapper[4945]: I0108 23:41:10.543534 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9674718-110d-4241-a199-9663979defde","Type":"ContainerStarted","Data":"90a4f2d6b6f7481844beacbb10433c3f569585443e447333d0efb309134aacf4"} Jan 08 23:41:10 crc kubenswrapper[4945]: I0108 23:41:10.543578 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9674718-110d-4241-a199-9663979defde","Type":"ContainerStarted","Data":"587e368516c01890b1b0a081641da4fdf10374a3be177548851e1df43ac2b62d"} Jan 08 23:41:10 crc kubenswrapper[4945]: I0108 23:41:10.545731 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c","Type":"ContainerStarted","Data":"313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db"} Jan 08 23:41:10 crc kubenswrapper[4945]: I0108 23:41:10.545756 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c","Type":"ContainerStarted","Data":"56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55"} Jan 08 23:41:10 crc kubenswrapper[4945]: I0108 23:41:10.545766 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c","Type":"ContainerStarted","Data":"30bb3a019f6496b1a9d713610a881c0bdd0f883b67f0928d73107ebde0f644f3"} Jan 08 23:41:10 crc kubenswrapper[4945]: I0108 23:41:10.561455 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-qpx88" podStartSLOduration=2.561437444 podStartE2EDuration="2.561437444s" podCreationTimestamp="2026-01-08 23:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:41:10.558144815 +0000 UTC m=+1540.869303771" watchObservedRunningTime="2026-01-08 23:41:10.561437444 +0000 UTC m=+1540.872596390" Jan 08 23:41:10 crc kubenswrapper[4945]: I0108 23:41:10.581009 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5809770739999998 podStartE2EDuration="2.580977074s" podCreationTimestamp="2026-01-08 23:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:41:10.578355091 +0000 UTC m=+1540.889514037" watchObservedRunningTime="2026-01-08 23:41:10.580977074 +0000 UTC m=+1540.892136020" Jan 08 23:41:10 crc kubenswrapper[4945]: I0108 23:41:10.901188 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:41:10 crc kubenswrapper[4945]: I0108 23:41:10.990983 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-wvp9f"] Jan 08 23:41:10 crc kubenswrapper[4945]: I0108 23:41:10.991933 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-wvp9f" podUID="994dbeee-49d8-4572-975d-727360fff33c" containerName="dnsmasq-dns" containerID="cri-o://aa2cd69b1a16b6019d2c4bcbb01573e719f50d0374354c0bfe71ef53ce73f056" gracePeriod=10 Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.549948 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-wvp9f" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.572115 4945 generic.go:334] "Generic (PLEG): container finished" podID="994dbeee-49d8-4572-975d-727360fff33c" containerID="aa2cd69b1a16b6019d2c4bcbb01573e719f50d0374354c0bfe71ef53ce73f056" exitCode=0 Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.572177 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-wvp9f" event={"ID":"994dbeee-49d8-4572-975d-727360fff33c","Type":"ContainerDied","Data":"aa2cd69b1a16b6019d2c4bcbb01573e719f50d0374354c0bfe71ef53ce73f056"} Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.572202 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-wvp9f" event={"ID":"994dbeee-49d8-4572-975d-727360fff33c","Type":"ContainerDied","Data":"e0b7088116a2a54f1efb3083704913ac57efae7e4266b939b7a0fc89e02ca896"} Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.572219 4945 scope.go:117] "RemoveContainer" containerID="aa2cd69b1a16b6019d2c4bcbb01573e719f50d0374354c0bfe71ef53ce73f056" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.572321 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-wvp9f" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.591040 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9674718-110d-4241-a199-9663979defde","Type":"ContainerStarted","Data":"a2db183acfe6ab165940694a43d75dcfd08108c8d4af23e41358e09eb336e30e"} Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.599742 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-ovsdbserver-nb\") pod \"994dbeee-49d8-4572-975d-727360fff33c\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.599825 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-ovsdbserver-sb\") pod \"994dbeee-49d8-4572-975d-727360fff33c\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.599883 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fbzp\" (UniqueName: \"kubernetes.io/projected/994dbeee-49d8-4572-975d-727360fff33c-kube-api-access-9fbzp\") pod \"994dbeee-49d8-4572-975d-727360fff33c\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.599925 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-dns-svc\") pod \"994dbeee-49d8-4572-975d-727360fff33c\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.600012 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-dns-swift-storage-0\") pod \"994dbeee-49d8-4572-975d-727360fff33c\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.600078 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-config\") pod \"994dbeee-49d8-4572-975d-727360fff33c\" (UID: \"994dbeee-49d8-4572-975d-727360fff33c\") " Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.611136 4945 scope.go:117] "RemoveContainer" containerID="e9a815aaa16207e999dc17b8e2b4324be57df0fe2683ccae8712a61335fd5d4a" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.611463 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/994dbeee-49d8-4572-975d-727360fff33c-kube-api-access-9fbzp" (OuterVolumeSpecName: "kube-api-access-9fbzp") pod "994dbeee-49d8-4572-975d-727360fff33c" (UID: "994dbeee-49d8-4572-975d-727360fff33c"). InnerVolumeSpecName "kube-api-access-9fbzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.651039 4945 scope.go:117] "RemoveContainer" containerID="aa2cd69b1a16b6019d2c4bcbb01573e719f50d0374354c0bfe71ef53ce73f056" Jan 08 23:41:11 crc kubenswrapper[4945]: E0108 23:41:11.652265 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa2cd69b1a16b6019d2c4bcbb01573e719f50d0374354c0bfe71ef53ce73f056\": container with ID starting with aa2cd69b1a16b6019d2c4bcbb01573e719f50d0374354c0bfe71ef53ce73f056 not found: ID does not exist" containerID="aa2cd69b1a16b6019d2c4bcbb01573e719f50d0374354c0bfe71ef53ce73f056" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.652300 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa2cd69b1a16b6019d2c4bcbb01573e719f50d0374354c0bfe71ef53ce73f056"} err="failed to get container status \"aa2cd69b1a16b6019d2c4bcbb01573e719f50d0374354c0bfe71ef53ce73f056\": rpc error: code = NotFound desc = could not find container \"aa2cd69b1a16b6019d2c4bcbb01573e719f50d0374354c0bfe71ef53ce73f056\": container with ID starting with aa2cd69b1a16b6019d2c4bcbb01573e719f50d0374354c0bfe71ef53ce73f056 not found: ID does not exist" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.652321 4945 scope.go:117] "RemoveContainer" containerID="e9a815aaa16207e999dc17b8e2b4324be57df0fe2683ccae8712a61335fd5d4a" Jan 08 23:41:11 crc kubenswrapper[4945]: E0108 23:41:11.653180 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9a815aaa16207e999dc17b8e2b4324be57df0fe2683ccae8712a61335fd5d4a\": container with ID starting with e9a815aaa16207e999dc17b8e2b4324be57df0fe2683ccae8712a61335fd5d4a not found: ID does not exist" containerID="e9a815aaa16207e999dc17b8e2b4324be57df0fe2683ccae8712a61335fd5d4a" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.653199 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9a815aaa16207e999dc17b8e2b4324be57df0fe2683ccae8712a61335fd5d4a"} err="failed to get container status \"e9a815aaa16207e999dc17b8e2b4324be57df0fe2683ccae8712a61335fd5d4a\": rpc error: code = NotFound desc = could not find container \"e9a815aaa16207e999dc17b8e2b4324be57df0fe2683ccae8712a61335fd5d4a\": container with ID starting with e9a815aaa16207e999dc17b8e2b4324be57df0fe2683ccae8712a61335fd5d4a not found: ID does not exist" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.664404 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-dns-swift-storage-0" (OuterVolumeSpecName: 
"dns-swift-storage-0") pod "994dbeee-49d8-4572-975d-727360fff33c" (UID: "994dbeee-49d8-4572-975d-727360fff33c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.675868 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "994dbeee-49d8-4572-975d-727360fff33c" (UID: "994dbeee-49d8-4572-975d-727360fff33c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.688439 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-config" (OuterVolumeSpecName: "config") pod "994dbeee-49d8-4572-975d-727360fff33c" (UID: "994dbeee-49d8-4572-975d-727360fff33c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.696583 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "994dbeee-49d8-4572-975d-727360fff33c" (UID: "994dbeee-49d8-4572-975d-727360fff33c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.703753 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.705494 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fbzp\" (UniqueName: \"kubernetes.io/projected/994dbeee-49d8-4572-975d-727360fff33c-kube-api-access-9fbzp\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.705926 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.705943 4945 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.705953 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.716179 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "994dbeee-49d8-4572-975d-727360fff33c" (UID: "994dbeee-49d8-4572-975d-727360fff33c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.808512 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/994dbeee-49d8-4572-975d-727360fff33c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.901984 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-wvp9f"] Jan 08 23:41:11 crc kubenswrapper[4945]: I0108 23:41:11.910338 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-wvp9f"] Jan 08 23:41:12 crc kubenswrapper[4945]: I0108 23:41:12.019649 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="994dbeee-49d8-4572-975d-727360fff33c" path="/var/lib/kubelet/pods/994dbeee-49d8-4572-975d-727360fff33c/volumes" Jan 08 23:41:13 crc kubenswrapper[4945]: I0108 23:41:13.578773 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:41:13 crc kubenswrapper[4945]: I0108 23:41:13.579182 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:41:13 crc kubenswrapper[4945]: I0108 23:41:13.579241 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:41:13 crc kubenswrapper[4945]: I0108 23:41:13.580106 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 08 23:41:13 crc kubenswrapper[4945]: I0108 23:41:13.580159 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8" gracePeriod=600 Jan 08 23:41:13 crc kubenswrapper[4945]: I0108 23:41:13.610073 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9674718-110d-4241-a199-9663979defde","Type":"ContainerStarted","Data":"93c9d2ce6e3251c442de8342769f7f316659735ad8ce35d3e69e890d9d23a3e3"} Jan 08 23:41:13 crc kubenswrapper[4945]: I0108 23:41:13.610576 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 08 23:41:13 crc kubenswrapper[4945]: I0108 23:41:13.641862 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5997054779999997 podStartE2EDuration="6.641841549s" podCreationTimestamp="2026-01-08 23:41:07 +0000 UTC" firstStartedPulling="2026-01-08 23:41:08.756775771 +0000 UTC m=+1539.067934717" lastFinishedPulling="2026-01-08 
23:41:12.798911832 +0000 UTC m=+1543.110070788" observedRunningTime="2026-01-08 23:41:13.631356256 +0000 UTC m=+1543.942515202" watchObservedRunningTime="2026-01-08 23:41:13.641841549 +0000 UTC m=+1543.953000495" Jan 08 23:41:13 crc kubenswrapper[4945]: E0108 23:41:13.722818 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:41:14 crc kubenswrapper[4945]: I0108 23:41:14.625344 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8" exitCode=0 Jan 08 23:41:14 crc kubenswrapper[4945]: I0108 23:41:14.631518 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8"} Jan 08 23:41:14 crc kubenswrapper[4945]: I0108 23:41:14.631635 4945 scope.go:117] "RemoveContainer" containerID="6ea29ffcc641534adace455f20d68f37c7c8da0950e832af522e2661b455a0c2" Jan 08 23:41:14 crc kubenswrapper[4945]: I0108 23:41:14.633454 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8" Jan 08 23:41:14 crc kubenswrapper[4945]: E0108 23:41:14.650745 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:41:15 crc kubenswrapper[4945]: I0108 23:41:15.636884 4945 generic.go:334] "Generic (PLEG): container finished" podID="6dd481cc-b375-4a3a-b41c-2690888844e6" containerID="8082cabfd87114362cc4ea66d61572fa084a1efc7431b0bed7df7375b3fc0b20" exitCode=0 Jan 08 23:41:15 crc kubenswrapper[4945]: I0108 23:41:15.636921 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qpx88" event={"ID":"6dd481cc-b375-4a3a-b41c-2690888844e6","Type":"ContainerDied","Data":"8082cabfd87114362cc4ea66d61572fa084a1efc7431b0bed7df7375b3fc0b20"} Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.042335 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qpx88" Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.123270 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-combined-ca-bundle\") pod \"6dd481cc-b375-4a3a-b41c-2690888844e6\" (UID: \"6dd481cc-b375-4a3a-b41c-2690888844e6\") " Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.123751 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-config-data\") pod \"6dd481cc-b375-4a3a-b41c-2690888844e6\" (UID: \"6dd481cc-b375-4a3a-b41c-2690888844e6\") " Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.124474 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgtxx\" (UniqueName: \"kubernetes.io/projected/6dd481cc-b375-4a3a-b41c-2690888844e6-kube-api-access-dgtxx\") pod \"6dd481cc-b375-4a3a-b41c-2690888844e6\" (UID: \"6dd481cc-b375-4a3a-b41c-2690888844e6\") " Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.124677 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-scripts\") pod \"6dd481cc-b375-4a3a-b41c-2690888844e6\" (UID: \"6dd481cc-b375-4a3a-b41c-2690888844e6\") " Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.130473 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-scripts" (OuterVolumeSpecName: "scripts") pod "6dd481cc-b375-4a3a-b41c-2690888844e6" (UID: "6dd481cc-b375-4a3a-b41c-2690888844e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.131040 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd481cc-b375-4a3a-b41c-2690888844e6-kube-api-access-dgtxx" (OuterVolumeSpecName: "kube-api-access-dgtxx") pod "6dd481cc-b375-4a3a-b41c-2690888844e6" (UID: "6dd481cc-b375-4a3a-b41c-2690888844e6"). InnerVolumeSpecName "kube-api-access-dgtxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.153882 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-config-data" (OuterVolumeSpecName: "config-data") pod "6dd481cc-b375-4a3a-b41c-2690888844e6" (UID: "6dd481cc-b375-4a3a-b41c-2690888844e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.163943 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6dd481cc-b375-4a3a-b41c-2690888844e6" (UID: "6dd481cc-b375-4a3a-b41c-2690888844e6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.227751 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgtxx\" (UniqueName: \"kubernetes.io/projected/6dd481cc-b375-4a3a-b41c-2690888844e6-kube-api-access-dgtxx\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.227783 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.227794 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.227805 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dd481cc-b375-4a3a-b41c-2690888844e6-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.659698 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qpx88" event={"ID":"6dd481cc-b375-4a3a-b41c-2690888844e6","Type":"ContainerDied","Data":"a5ef8eb5d3517f55d267bbe0cc5c2c62e125d3a2849b46c78f805afe64333eaa"} Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.660271 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5ef8eb5d3517f55d267bbe0cc5c2c62e125d3a2849b46c78f805afe64333eaa" Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.659766 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qpx88" Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.859566 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.860088 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7ca4c803-d7f2-442e-9f8c-8376c62a8d2c" containerName="nova-api-log" containerID="cri-o://56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55" gracePeriod=30 Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.860956 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7ca4c803-d7f2-442e-9f8c-8376c62a8d2c" containerName="nova-api-api" containerID="cri-o://313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db" gracePeriod=30 Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.878569 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.879168 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4d92db13-d607-4a17-914d-83b7e18587f3" containerName="nova-scheduler-scheduler" containerID="cri-o://221ea1a80d7a2326ca78e7467fd6f828fc70ba0d60ee65bba335e9592cafd022" gracePeriod=30 Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.922266 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.923856 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="67c81217-0b52-461b-9eaf-cefc63fbaa10" 
containerName="nova-metadata-log" containerID="cri-o://61a2eccfd3d6db740c4277ba982127ff3df17e4a9559e25626ea6a71422bfa63" gracePeriod=30 Jan 08 23:41:17 crc kubenswrapper[4945]: I0108 23:41:17.924510 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="67c81217-0b52-461b-9eaf-cefc63fbaa10" containerName="nova-metadata-metadata" containerID="cri-o://d0c3e2259dce88d8b5b32b2c585096e8579289547788d3d21753bb0a67f20f72" gracePeriod=30 Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.583598 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.663232 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-logs\") pod \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.663520 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-config-data\") pod \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.663750 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nvml\" (UniqueName: \"kubernetes.io/projected/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-kube-api-access-4nvml\") pod \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.663881 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-public-tls-certs\") pod \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.664040 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-internal-tls-certs\") pod \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.664191 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-combined-ca-bundle\") pod \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\" (UID: \"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c\") " Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.664367 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-logs" (OuterVolumeSpecName: "logs") pod "7ca4c803-d7f2-442e-9f8c-8376c62a8d2c" (UID: "7ca4c803-d7f2-442e-9f8c-8376c62a8d2c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.664778 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.671277 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-kube-api-access-4nvml" (OuterVolumeSpecName: "kube-api-access-4nvml") pod "7ca4c803-d7f2-442e-9f8c-8376c62a8d2c" (UID: "7ca4c803-d7f2-442e-9f8c-8376c62a8d2c"). InnerVolumeSpecName "kube-api-access-4nvml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.675172 4945 generic.go:334] "Generic (PLEG): container finished" podID="67c81217-0b52-461b-9eaf-cefc63fbaa10" containerID="61a2eccfd3d6db740c4277ba982127ff3df17e4a9559e25626ea6a71422bfa63" exitCode=143 Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.675244 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67c81217-0b52-461b-9eaf-cefc63fbaa10","Type":"ContainerDied","Data":"61a2eccfd3d6db740c4277ba982127ff3df17e4a9559e25626ea6a71422bfa63"} Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.680390 4945 generic.go:334] "Generic (PLEG): container finished" podID="7ca4c803-d7f2-442e-9f8c-8376c62a8d2c" containerID="313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db" exitCode=0 Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.680487 4945 generic.go:334] "Generic (PLEG): container finished" podID="7ca4c803-d7f2-442e-9f8c-8376c62a8d2c" containerID="56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55" exitCode=143 Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.680497 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.680419 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c","Type":"ContainerDied","Data":"313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db"} Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.681094 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c","Type":"ContainerDied","Data":"56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55"} Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.681160 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7ca4c803-d7f2-442e-9f8c-8376c62a8d2c","Type":"ContainerDied","Data":"30bb3a019f6496b1a9d713610a881c0bdd0f883b67f0928d73107ebde0f644f3"} Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.681180 4945 scope.go:117] "RemoveContainer" containerID="313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.708431 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ca4c803-d7f2-442e-9f8c-8376c62a8d2c" (UID: "7ca4c803-d7f2-442e-9f8c-8376c62a8d2c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.712173 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-config-data" (OuterVolumeSpecName: "config-data") pod "7ca4c803-d7f2-442e-9f8c-8376c62a8d2c" (UID: "7ca4c803-d7f2-442e-9f8c-8376c62a8d2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.723353 4945 scope.go:117] "RemoveContainer" containerID="56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.729155 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7ca4c803-d7f2-442e-9f8c-8376c62a8d2c" (UID: "7ca4c803-d7f2-442e-9f8c-8376c62a8d2c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.732360 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7ca4c803-d7f2-442e-9f8c-8376c62a8d2c" (UID: "7ca4c803-d7f2-442e-9f8c-8376c62a8d2c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.748356 4945 scope.go:117] "RemoveContainer" containerID="313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db" Jan 08 23:41:18 crc kubenswrapper[4945]: E0108 23:41:18.755007 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db\": container with ID starting with 313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db not found: ID does not exist" containerID="313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.755047 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db"} err="failed to get container status \"313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db\": rpc error: code = NotFound desc = could not find container \"313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db\": container with ID starting with 313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db not found: ID does not exist" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.755072 4945 scope.go:117] "RemoveContainer" containerID="56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55" Jan 08 23:41:18 crc kubenswrapper[4945]: E0108 23:41:18.755489 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55\": container with ID starting with 56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55 not found: ID does not exist" containerID="56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.755514 4945 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55"} err="failed to get container status \"56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55\": rpc error: code = NotFound desc = could not find container \"56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55\": container with ID starting with 56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55 not found: ID does not exist" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.755530 4945 scope.go:117] "RemoveContainer" containerID="313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.755934 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db"} err="failed to get container status \"313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db\": rpc error: code = NotFound desc = could not find container \"313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db\": container with ID starting with 313bcb8d7caf7011d5f3e9df09f11def7c89de32f9ec59eb1fee244a4ed203db not found: ID does not exist" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.755950 4945 scope.go:117] "RemoveContainer" containerID="56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.756247 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55"} err="failed to get container status \"56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55\": rpc error: code = NotFound desc = could not find container \"56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55\": container with ID starting with 56acc80b83dcf4f2435e5f4d0ff2a813ef8e02517d1549a0e9fd7700f1b81f55 not found: ID does not exist" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.766920 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.766971 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.766984 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nvml\" (UniqueName: \"kubernetes.io/projected/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-kube-api-access-4nvml\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.767016 4945 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:18 crc kubenswrapper[4945]: I0108 23:41:18.767027 4945 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.095685 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.116556 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.133101 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.174755 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn77k\" (UniqueName: \"kubernetes.io/projected/4d92db13-d607-4a17-914d-83b7e18587f3-kube-api-access-wn77k\") pod \"4d92db13-d607-4a17-914d-83b7e18587f3\" (UID: \"4d92db13-d607-4a17-914d-83b7e18587f3\") " Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.174888 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d92db13-d607-4a17-914d-83b7e18587f3-config-data\") pod \"4d92db13-d607-4a17-914d-83b7e18587f3\" (UID: \"4d92db13-d607-4a17-914d-83b7e18587f3\") " Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.174932 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d92db13-d607-4a17-914d-83b7e18587f3-combined-ca-bundle\") pod \"4d92db13-d607-4a17-914d-83b7e18587f3\" (UID: \"4d92db13-d607-4a17-914d-83b7e18587f3\") " Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.183916 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.184351 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d92db13-d607-4a17-914d-83b7e18587f3-kube-api-access-wn77k" (OuterVolumeSpecName: "kube-api-access-wn77k") pod "4d92db13-d607-4a17-914d-83b7e18587f3" (UID: "4d92db13-d607-4a17-914d-83b7e18587f3"). InnerVolumeSpecName "kube-api-access-wn77k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:41:19 crc kubenswrapper[4945]: E0108 23:41:19.184550 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca4c803-d7f2-442e-9f8c-8376c62a8d2c" containerName="nova-api-log" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.184575 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca4c803-d7f2-442e-9f8c-8376c62a8d2c" containerName="nova-api-log" Jan 08 23:41:19 crc kubenswrapper[4945]: E0108 23:41:19.184593 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994dbeee-49d8-4572-975d-727360fff33c" containerName="init" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.184601 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="994dbeee-49d8-4572-975d-727360fff33c" containerName="init" Jan 08 23:41:19 crc kubenswrapper[4945]: E0108 23:41:19.184616 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d92db13-d607-4a17-914d-83b7e18587f3" containerName="nova-scheduler-scheduler" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.184624 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d92db13-d607-4a17-914d-83b7e18587f3" containerName="nova-scheduler-scheduler" Jan 08 23:41:19 crc kubenswrapper[4945]: E0108 23:41:19.184641 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1a1f4-22cc-4144-9f30-f8c902897d34" containerName="registry-server" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.184650 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1a1f4-22cc-4144-9f30-f8c902897d34" containerName="registry-server" Jan 08 23:41:19 crc kubenswrapper[4945]: E0108 23:41:19.184667 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994dbeee-49d8-4572-975d-727360fff33c" containerName="dnsmasq-dns" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.184675 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="994dbeee-49d8-4572-975d-727360fff33c" containerName="dnsmasq-dns" Jan 08 23:41:19 crc kubenswrapper[4945]: E0108 23:41:19.184694 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1a1f4-22cc-4144-9f30-f8c902897d34" containerName="extract-utilities" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.184702 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be1a1f4-22cc-4144-9f30-f8c902897d34" containerName="extract-utilities" Jan 08 23:41:19 crc kubenswrapper[4945]: E0108 23:41:19.184724 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca4c803-d7f2-442e-9f8c-8376c62a8d2c" containerName="nova-api-api" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.184732 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca4c803-d7f2-442e-9f8c-8376c62a8d2c" containerName="nova-api-api" Jan 08 23:41:19 crc kubenswrapper[4945]: E0108 23:41:19.184745 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dd481cc-b375-4a3a-b41c-2690888844e6" containerName="nova-manage" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.184753 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd481cc-b375-4a3a-b41c-2690888844e6" containerName="nova-manage" Jan 08 23:41:19 crc kubenswrapper[4945]: E0108 23:41:19.184771 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be1a1f4-22cc-4144-9f30-f8c902897d34" containerName="extract-content" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.184779 4945 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7be1a1f4-22cc-4144-9f30-f8c902897d34" containerName="extract-content" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.185069 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d92db13-d607-4a17-914d-83b7e18587f3" containerName="nova-scheduler-scheduler" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.185099 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ca4c803-d7f2-442e-9f8c-8376c62a8d2c" containerName="nova-api-api" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.185112 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ca4c803-d7f2-442e-9f8c-8376c62a8d2c" containerName="nova-api-log" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.185129 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd481cc-b375-4a3a-b41c-2690888844e6" containerName="nova-manage" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.185143 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="994dbeee-49d8-4572-975d-727360fff33c" containerName="dnsmasq-dns" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.185155 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be1a1f4-22cc-4144-9f30-f8c902897d34" containerName="registry-server" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.186524 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.191361 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.195229 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.195379 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.209107 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.216182 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d92db13-d607-4a17-914d-83b7e18587f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d92db13-d607-4a17-914d-83b7e18587f3" (UID: "4d92db13-d607-4a17-914d-83b7e18587f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.217179 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d92db13-d607-4a17-914d-83b7e18587f3-config-data" (OuterVolumeSpecName: "config-data") pod "4d92db13-d607-4a17-914d-83b7e18587f3" (UID: "4d92db13-d607-4a17-914d-83b7e18587f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.277617 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-public-tls-certs\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.277806 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.277855 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-internal-tls-certs\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.277887 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-config-data\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.277922 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adb334a7-9a7f-4e20-9dc8-092b9372bb10-logs\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.277956 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmq2x\" (UniqueName: \"kubernetes.io/projected/adb334a7-9a7f-4e20-9dc8-092b9372bb10-kube-api-access-zmq2x\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.278029 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn77k\" (UniqueName: \"kubernetes.io/projected/4d92db13-d607-4a17-914d-83b7e18587f3-kube-api-access-wn77k\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.278040 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d92db13-d607-4a17-914d-83b7e18587f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.278052 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d92db13-d607-4a17-914d-83b7e18587f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.379879 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.379951 4945 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-internal-tls-certs\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.379986 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-config-data\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.380035 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adb334a7-9a7f-4e20-9dc8-092b9372bb10-logs\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.380067 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmq2x\" (UniqueName: \"kubernetes.io/projected/adb334a7-9a7f-4e20-9dc8-092b9372bb10-kube-api-access-zmq2x\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.380099 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-public-tls-certs\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.380798 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adb334a7-9a7f-4e20-9dc8-092b9372bb10-logs\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.385751 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-internal-tls-certs\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.387712 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-config-data\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.388607 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.388739 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-public-tls-certs\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.411573 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmq2x\" (UniqueName: 
\"kubernetes.io/projected/adb334a7-9a7f-4e20-9dc8-092b9372bb10-kube-api-access-zmq2x\") pod \"nova-api-0\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.602020 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.693145 4945 generic.go:334] "Generic (PLEG): container finished" podID="4d92db13-d607-4a17-914d-83b7e18587f3" containerID="221ea1a80d7a2326ca78e7467fd6f828fc70ba0d60ee65bba335e9592cafd022" exitCode=0 Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.693223 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.693345 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d92db13-d607-4a17-914d-83b7e18587f3","Type":"ContainerDied","Data":"221ea1a80d7a2326ca78e7467fd6f828fc70ba0d60ee65bba335e9592cafd022"} Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.693656 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d92db13-d607-4a17-914d-83b7e18587f3","Type":"ContainerDied","Data":"1c0d584cd41551e7abc0686c1a36bae1fa80c6f1c6d559b802915354f131822f"} Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.693682 4945 scope.go:117] "RemoveContainer" containerID="221ea1a80d7a2326ca78e7467fd6f828fc70ba0d60ee65bba335e9592cafd022" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.718379 4945 scope.go:117] "RemoveContainer" containerID="221ea1a80d7a2326ca78e7467fd6f828fc70ba0d60ee65bba335e9592cafd022" Jan 08 23:41:19 crc kubenswrapper[4945]: E0108 23:41:19.719069 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"221ea1a80d7a2326ca78e7467fd6f828fc70ba0d60ee65bba335e9592cafd022\": container with ID starting with 221ea1a80d7a2326ca78e7467fd6f828fc70ba0d60ee65bba335e9592cafd022 not found: ID does not exist" containerID="221ea1a80d7a2326ca78e7467fd6f828fc70ba0d60ee65bba335e9592cafd022" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.719125 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"221ea1a80d7a2326ca78e7467fd6f828fc70ba0d60ee65bba335e9592cafd022"} err="failed to get container status \"221ea1a80d7a2326ca78e7467fd6f828fc70ba0d60ee65bba335e9592cafd022\": rpc error: code = NotFound desc = could not find container \"221ea1a80d7a2326ca78e7467fd6f828fc70ba0d60ee65bba335e9592cafd022\": container with ID starting with 221ea1a80d7a2326ca78e7467fd6f828fc70ba0d60ee65bba335e9592cafd022 not found: ID does not exist" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.773381 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.792228 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.808571 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.819839 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.822612 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.824134 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.893965 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-config-data\") pod \"nova-scheduler-0\" (UID: \"9f823122-da64-4ac4-aa14-96bc8f2f9c1c\") " pod="openstack/nova-scheduler-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.894246 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ql4g\" (UniqueName: \"kubernetes.io/projected/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-kube-api-access-4ql4g\") pod \"nova-scheduler-0\" (UID: \"9f823122-da64-4ac4-aa14-96bc8f2f9c1c\") " pod="openstack/nova-scheduler-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.894392 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9f823122-da64-4ac4-aa14-96bc8f2f9c1c\") " pod="openstack/nova-scheduler-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.997200 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-config-data\") pod \"nova-scheduler-0\" (UID: \"9f823122-da64-4ac4-aa14-96bc8f2f9c1c\") " pod="openstack/nova-scheduler-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.997273 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ql4g\" (UniqueName: \"kubernetes.io/projected/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-kube-api-access-4ql4g\") pod \"nova-scheduler-0\" (UID: \"9f823122-da64-4ac4-aa14-96bc8f2f9c1c\") " pod="openstack/nova-scheduler-0" Jan 08 23:41:19 crc kubenswrapper[4945]: I0108 23:41:19.997348 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9f823122-da64-4ac4-aa14-96bc8f2f9c1c\") " pod="openstack/nova-scheduler-0" Jan 08 23:41:20 crc kubenswrapper[4945]: I0108 23:41:20.007444 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-config-data\") pod \"nova-scheduler-0\" (UID: \"9f823122-da64-4ac4-aa14-96bc8f2f9c1c\") " pod="openstack/nova-scheduler-0" Jan 08 23:41:20 crc kubenswrapper[4945]: I0108 23:41:20.007614 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9f823122-da64-4ac4-aa14-96bc8f2f9c1c\") " pod="openstack/nova-scheduler-0" Jan 08 23:41:20 crc kubenswrapper[4945]: I0108 23:41:20.024627 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ql4g\" (UniqueName: 
\"kubernetes.io/projected/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-kube-api-access-4ql4g\") pod \"nova-scheduler-0\" (UID: \"9f823122-da64-4ac4-aa14-96bc8f2f9c1c\") " pod="openstack/nova-scheduler-0" Jan 08 23:41:20 crc kubenswrapper[4945]: I0108 23:41:20.029271 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d92db13-d607-4a17-914d-83b7e18587f3" path="/var/lib/kubelet/pods/4d92db13-d607-4a17-914d-83b7e18587f3/volumes" Jan 08 23:41:20 crc kubenswrapper[4945]: I0108 23:41:20.030937 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ca4c803-d7f2-442e-9f8c-8376c62a8d2c" path="/var/lib/kubelet/pods/7ca4c803-d7f2-442e-9f8c-8376c62a8d2c/volumes" Jan 08 23:41:20 crc kubenswrapper[4945]: I0108 23:41:20.118628 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:41:20 crc kubenswrapper[4945]: W0108 23:41:20.121094 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadb334a7_9a7f_4e20_9dc8_092b9372bb10.slice/crio-665fe41f5487878e4b55c461ebbb750341c8f9d90fab5cb6adbbe8a57d0023b7 WatchSource:0}: Error finding container 665fe41f5487878e4b55c461ebbb750341c8f9d90fab5cb6adbbe8a57d0023b7: Status 404 returned error can't find the container with id 665fe41f5487878e4b55c461ebbb750341c8f9d90fab5cb6adbbe8a57d0023b7 Jan 08 23:41:20 crc kubenswrapper[4945]: I0108 23:41:20.148751 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 08 23:41:20 crc kubenswrapper[4945]: I0108 23:41:20.630440 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 08 23:41:20 crc kubenswrapper[4945]: W0108 23:41:20.633648 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f823122_da64_4ac4_aa14_96bc8f2f9c1c.slice/crio-87e747653007d44d2f91b54c2ba7bf74fb2bf0058b8a8ad219d5c4a5a3c771ee WatchSource:0}: Error finding container 87e747653007d44d2f91b54c2ba7bf74fb2bf0058b8a8ad219d5c4a5a3c771ee: Status 404 returned error can't find the container with id 87e747653007d44d2f91b54c2ba7bf74fb2bf0058b8a8ad219d5c4a5a3c771ee Jan 08 23:41:20 crc kubenswrapper[4945]: I0108 23:41:20.712095 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"adb334a7-9a7f-4e20-9dc8-092b9372bb10","Type":"ContainerStarted","Data":"0e98c349bd4bb46a725fbf926d7b62bd3253a7708fe210e97d08c5303e0c8116"} Jan 08 23:41:20 crc kubenswrapper[4945]: I0108 23:41:20.712159 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"adb334a7-9a7f-4e20-9dc8-092b9372bb10","Type":"ContainerStarted","Data":"7ecbd9fcee16f88440831e6610cfef4e63350db971f4e6808a4543a0b4ed747b"} Jan 08 23:41:20 crc kubenswrapper[4945]: I0108 23:41:20.712170 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"adb334a7-9a7f-4e20-9dc8-092b9372bb10","Type":"ContainerStarted","Data":"665fe41f5487878e4b55c461ebbb750341c8f9d90fab5cb6adbbe8a57d0023b7"} Jan 08 23:41:20 crc kubenswrapper[4945]: I0108 23:41:20.716258 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9f823122-da64-4ac4-aa14-96bc8f2f9c1c","Type":"ContainerStarted","Data":"87e747653007d44d2f91b54c2ba7bf74fb2bf0058b8a8ad219d5c4a5a3c771ee"} Jan 08 23:41:20 crc kubenswrapper[4945]: I0108 23:41:20.772073 4945 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.772050519 podStartE2EDuration="1.772050519s" podCreationTimestamp="2026-01-08 23:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:41:20.770775328 +0000 UTC m=+1551.081934274" watchObservedRunningTime="2026-01-08 23:41:20.772050519 +0000 UTC m=+1551.083209475" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.069066 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="67c81217-0b52-461b-9eaf-cefc63fbaa10" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": read tcp 10.217.0.2:46752->10.217.0.192:8775: read: connection reset by peer" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.069343 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="67c81217-0b52-461b-9eaf-cefc63fbaa10" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": read tcp 10.217.0.2:46768->10.217.0.192:8775: read: connection reset by peer" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.578461 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.676867 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhdpm\" (UniqueName: \"kubernetes.io/projected/67c81217-0b52-461b-9eaf-cefc63fbaa10-kube-api-access-zhdpm\") pod \"67c81217-0b52-461b-9eaf-cefc63fbaa10\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.677256 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67c81217-0b52-461b-9eaf-cefc63fbaa10-logs\") pod \"67c81217-0b52-461b-9eaf-cefc63fbaa10\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.677437 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-nova-metadata-tls-certs\") pod \"67c81217-0b52-461b-9eaf-cefc63fbaa10\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.677486 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-config-data\") pod \"67c81217-0b52-461b-9eaf-cefc63fbaa10\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.677633 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-combined-ca-bundle\") pod \"67c81217-0b52-461b-9eaf-cefc63fbaa10\" (UID: \"67c81217-0b52-461b-9eaf-cefc63fbaa10\") " Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.679331 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c81217-0b52-461b-9eaf-cefc63fbaa10-logs" (OuterVolumeSpecName: "logs") pod "67c81217-0b52-461b-9eaf-cefc63fbaa10" (UID: "67c81217-0b52-461b-9eaf-cefc63fbaa10"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.684310 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c81217-0b52-461b-9eaf-cefc63fbaa10-kube-api-access-zhdpm" (OuterVolumeSpecName: "kube-api-access-zhdpm") pod "67c81217-0b52-461b-9eaf-cefc63fbaa10" (UID: "67c81217-0b52-461b-9eaf-cefc63fbaa10"). InnerVolumeSpecName "kube-api-access-zhdpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.716762 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-config-data" (OuterVolumeSpecName: "config-data") pod "67c81217-0b52-461b-9eaf-cefc63fbaa10" (UID: "67c81217-0b52-461b-9eaf-cefc63fbaa10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.718960 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67c81217-0b52-461b-9eaf-cefc63fbaa10" (UID: "67c81217-0b52-461b-9eaf-cefc63fbaa10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.738237 4945 generic.go:334] "Generic (PLEG): container finished" podID="67c81217-0b52-461b-9eaf-cefc63fbaa10" containerID="d0c3e2259dce88d8b5b32b2c585096e8579289547788d3d21753bb0a67f20f72" exitCode=0 Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.738326 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67c81217-0b52-461b-9eaf-cefc63fbaa10","Type":"ContainerDied","Data":"d0c3e2259dce88d8b5b32b2c585096e8579289547788d3d21753bb0a67f20f72"} Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.738362 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67c81217-0b52-461b-9eaf-cefc63fbaa10","Type":"ContainerDied","Data":"1cdc2807ef368081844e821abc4704c5e8a5356f18ff1e2b6206fa791f8e4cd2"} Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.738383 4945 scope.go:117] "RemoveContainer" containerID="d0c3e2259dce88d8b5b32b2c585096e8579289547788d3d21753bb0a67f20f72" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.738596 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.748071 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9f823122-da64-4ac4-aa14-96bc8f2f9c1c","Type":"ContainerStarted","Data":"d56225670b178c36792afe8b5e40b521c40f21fe6ec9c747b20bf8f26e4c3640"} Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.765545 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "67c81217-0b52-461b-9eaf-cefc63fbaa10" (UID: "67c81217-0b52-461b-9eaf-cefc63fbaa10"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.777165 4945 scope.go:117] "RemoveContainer" containerID="61a2eccfd3d6db740c4277ba982127ff3df17e4a9559e25626ea6a71422bfa63" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.779486 4945 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.779509 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.779519 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67c81217-0b52-461b-9eaf-cefc63fbaa10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.779527 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhdpm\" (UniqueName: \"kubernetes.io/projected/67c81217-0b52-461b-9eaf-cefc63fbaa10-kube-api-access-zhdpm\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.779538 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67c81217-0b52-461b-9eaf-cefc63fbaa10-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.808683 4945 scope.go:117] "RemoveContainer" containerID="d0c3e2259dce88d8b5b32b2c585096e8579289547788d3d21753bb0a67f20f72" Jan 08 23:41:21 crc kubenswrapper[4945]: E0108 23:41:21.809952 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0c3e2259dce88d8b5b32b2c585096e8579289547788d3d21753bb0a67f20f72\": container with ID starting with d0c3e2259dce88d8b5b32b2c585096e8579289547788d3d21753bb0a67f20f72 not found: ID does not exist" containerID="d0c3e2259dce88d8b5b32b2c585096e8579289547788d3d21753bb0a67f20f72" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.810022 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0c3e2259dce88d8b5b32b2c585096e8579289547788d3d21753bb0a67f20f72"} err="failed to get container status \"d0c3e2259dce88d8b5b32b2c585096e8579289547788d3d21753bb0a67f20f72\": rpc error: code = NotFound desc = could not find container \"d0c3e2259dce88d8b5b32b2c585096e8579289547788d3d21753bb0a67f20f72\": container with ID starting with d0c3e2259dce88d8b5b32b2c585096e8579289547788d3d21753bb0a67f20f72 not found: ID does not exist" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.810053 4945 scope.go:117] "RemoveContainer" containerID="61a2eccfd3d6db740c4277ba982127ff3df17e4a9559e25626ea6a71422bfa63" Jan 08 23:41:21 crc kubenswrapper[4945]: E0108 23:41:21.811010 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61a2eccfd3d6db740c4277ba982127ff3df17e4a9559e25626ea6a71422bfa63\": container with ID starting with 61a2eccfd3d6db740c4277ba982127ff3df17e4a9559e25626ea6a71422bfa63 not found: ID does not exist" containerID="61a2eccfd3d6db740c4277ba982127ff3df17e4a9559e25626ea6a71422bfa63" Jan 08 23:41:21 crc kubenswrapper[4945]: I0108 23:41:21.811079 4945 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61a2eccfd3d6db740c4277ba982127ff3df17e4a9559e25626ea6a71422bfa63"} err="failed to get container status \"61a2eccfd3d6db740c4277ba982127ff3df17e4a9559e25626ea6a71422bfa63\": rpc error: code = NotFound desc = could not find container \"61a2eccfd3d6db740c4277ba982127ff3df17e4a9559e25626ea6a71422bfa63\": container with ID starting with 61a2eccfd3d6db740c4277ba982127ff3df17e4a9559e25626ea6a71422bfa63 not found: ID does not exist" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.068068 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.068048859 podStartE2EDuration="3.068048859s" podCreationTimestamp="2026-01-08 23:41:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:41:21.776339859 +0000 UTC m=+1552.087498805" watchObservedRunningTime="2026-01-08 23:41:22.068048859 +0000 UTC m=+1552.379207805" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.076287 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.092501 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.127383 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:41:22 crc kubenswrapper[4945]: E0108 23:41:22.128335 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c81217-0b52-461b-9eaf-cefc63fbaa10" containerName="nova-metadata-metadata" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.128477 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c81217-0b52-461b-9eaf-cefc63fbaa10" containerName="nova-metadata-metadata" Jan 08 23:41:22 crc kubenswrapper[4945]: E0108 23:41:22.128644 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c81217-0b52-461b-9eaf-cefc63fbaa10" containerName="nova-metadata-log" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.128719 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c81217-0b52-461b-9eaf-cefc63fbaa10" containerName="nova-metadata-log" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.129066 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c81217-0b52-461b-9eaf-cefc63fbaa10" containerName="nova-metadata-log" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.129156 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c81217-0b52-461b-9eaf-cefc63fbaa10" containerName="nova-metadata-metadata" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.130482 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.137192 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.137389 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.143521 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.188613 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " pod="openstack/nova-metadata-0" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.188667 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-config-data\") pod \"nova-metadata-0\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " pod="openstack/nova-metadata-0" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.188743 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4pdr\" (UniqueName: \"kubernetes.io/projected/ea1eec40-294d-4749-bdb2-678289eeb815-kube-api-access-v4pdr\") pod \"nova-metadata-0\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " pod="openstack/nova-metadata-0" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.188800 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea1eec40-294d-4749-bdb2-678289eeb815-logs\") pod \"nova-metadata-0\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " pod="openstack/nova-metadata-0" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.189061 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " pod="openstack/nova-metadata-0" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.291368 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " pod="openstack/nova-metadata-0" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.291739 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " pod="openstack/nova-metadata-0" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.291855 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-config-data\") pod \"nova-metadata-0\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " 
pod="openstack/nova-metadata-0" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.292009 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4pdr\" (UniqueName: \"kubernetes.io/projected/ea1eec40-294d-4749-bdb2-678289eeb815-kube-api-access-v4pdr\") pod \"nova-metadata-0\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " pod="openstack/nova-metadata-0" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.292147 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea1eec40-294d-4749-bdb2-678289eeb815-logs\") pod \"nova-metadata-0\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " pod="openstack/nova-metadata-0" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.292870 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea1eec40-294d-4749-bdb2-678289eeb815-logs\") pod \"nova-metadata-0\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " pod="openstack/nova-metadata-0" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.297207 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " pod="openstack/nova-metadata-0" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.300463 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " pod="openstack/nova-metadata-0" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.302801 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-config-data\") pod \"nova-metadata-0\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " pod="openstack/nova-metadata-0" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.313675 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4pdr\" (UniqueName: \"kubernetes.io/projected/ea1eec40-294d-4749-bdb2-678289eeb815-kube-api-access-v4pdr\") pod \"nova-metadata-0\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " pod="openstack/nova-metadata-0" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.450523 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 08 23:41:22 crc kubenswrapper[4945]: I0108 23:41:22.995493 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:41:23 crc kubenswrapper[4945]: I0108 23:41:23.779036 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea1eec40-294d-4749-bdb2-678289eeb815","Type":"ContainerStarted","Data":"360c56ebe46377c61cd806e3349a6a48d19f706ccb293962a2db56915941150d"} Jan 08 23:41:23 crc kubenswrapper[4945]: I0108 23:41:23.779414 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea1eec40-294d-4749-bdb2-678289eeb815","Type":"ContainerStarted","Data":"18f1d04e9c438af26a72f40f614d629c0cba6569447b80e683d6b701672e3900"} Jan 08 23:41:23 crc kubenswrapper[4945]: I0108 23:41:23.779428 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea1eec40-294d-4749-bdb2-678289eeb815","Type":"ContainerStarted","Data":"d1be8831fb2d1f360911853538b8a8df8d3a293e14572192185ec4e8fd705714"} Jan 08 23:41:23 crc kubenswrapper[4945]: I0108 23:41:23.820701 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.8206868090000001 podStartE2EDuration="1.820686809s" podCreationTimestamp="2026-01-08 23:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-08 23:41:23.820145376 +0000 UTC m=+1554.131304322" watchObservedRunningTime="2026-01-08 23:41:23.820686809 +0000 UTC m=+1554.131845755" Jan 08 23:41:24 crc kubenswrapper[4945]: I0108 23:41:24.013925 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c81217-0b52-461b-9eaf-cefc63fbaa10" path="/var/lib/kubelet/pods/67c81217-0b52-461b-9eaf-cefc63fbaa10/volumes" Jan 08 23:41:25 crc kubenswrapper[4945]: I0108 23:41:25.149421 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 08 23:41:27 crc kubenswrapper[4945]: I0108 23:41:27.451121 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 08 23:41:27 crc kubenswrapper[4945]: I0108 23:41:27.451467 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 08 23:41:28 crc kubenswrapper[4945]: I0108 23:41:28.000908 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8" Jan 08 23:41:28 crc kubenswrapper[4945]: E0108 23:41:28.001603 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:41:29 crc kubenswrapper[4945]: I0108 23:41:29.603161 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 08 23:41:29 crc kubenswrapper[4945]: I0108 23:41:29.604361 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 08 23:41:30 crc kubenswrapper[4945]: I0108 23:41:30.149071 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-scheduler-0" Jan 08 23:41:30 crc kubenswrapper[4945]: I0108 23:41:30.180577 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 08 23:41:30 crc kubenswrapper[4945]: I0108 23:41:30.618176 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="adb334a7-9a7f-4e20-9dc8-092b9372bb10" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 08 23:41:30 crc kubenswrapper[4945]: I0108 23:41:30.618176 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="adb334a7-9a7f-4e20-9dc8-092b9372bb10" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 08 23:41:30 crc kubenswrapper[4945]: I0108 23:41:30.881409 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 08 23:41:32 crc kubenswrapper[4945]: I0108 23:41:32.452096 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 08 23:41:32 crc kubenswrapper[4945]: I0108 23:41:32.452887 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 08 23:41:33 crc kubenswrapper[4945]: I0108 23:41:33.470181 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ea1eec40-294d-4749-bdb2-678289eeb815" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 08 23:41:33 crc kubenswrapper[4945]: I0108 23:41:33.470181 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ea1eec40-294d-4749-bdb2-678289eeb815" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 08 23:41:37 crc kubenswrapper[4945]: I0108 23:41:37.884070 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 08 23:41:39 crc kubenswrapper[4945]: I0108 23:41:39.001932 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8" Jan 08 23:41:39 crc kubenswrapper[4945]: E0108 23:41:39.003109 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:41:39 crc kubenswrapper[4945]: I0108 23:41:39.611984 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 08 23:41:39 crc kubenswrapper[4945]: I0108 23:41:39.612837 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 08 23:41:39 crc kubenswrapper[4945]: I0108 23:41:39.613268 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 08 23:41:39 
crc kubenswrapper[4945]: I0108 23:41:39.631414 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 08 23:41:39 crc kubenswrapper[4945]: I0108 23:41:39.960064 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 08 23:41:39 crc kubenswrapper[4945]: I0108 23:41:39.968570 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 08 23:41:42 crc kubenswrapper[4945]: I0108 23:41:42.456774 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 08 23:41:42 crc kubenswrapper[4945]: I0108 23:41:42.457370 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 08 23:41:42 crc kubenswrapper[4945]: I0108 23:41:42.462597 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 08 23:41:42 crc kubenswrapper[4945]: I0108 23:41:42.463370 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 08 23:41:54 crc kubenswrapper[4945]: I0108 23:41:54.001622 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8" Jan 08 23:41:54 crc kubenswrapper[4945]: E0108 23:41:54.002452 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.083101 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3bb0-account-create-update-4s5gt"] Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.084908 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3bb0-account-create-update-4s5gt" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.095279 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.135398 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3bb0-account-create-update-4s5gt"] Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.185154 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3bb0-account-create-update-89p7q"] Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.207756 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs8zl\" (UniqueName: \"kubernetes.io/projected/179a20bc-72ca-4f86-8cd0-5c6df9211839-kube-api-access-rs8zl\") pod \"placement-3bb0-account-create-update-4s5gt\" (UID: \"179a20bc-72ca-4f86-8cd0-5c6df9211839\") " pod="openstack/placement-3bb0-account-create-update-4s5gt" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.207904 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/179a20bc-72ca-4f86-8cd0-5c6df9211839-operator-scripts\") pod \"placement-3bb0-account-create-update-4s5gt\" (UID: \"179a20bc-72ca-4f86-8cd0-5c6df9211839\") " pod="openstack/placement-3bb0-account-create-update-4s5gt" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.221090 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3bb0-account-create-update-89p7q"] Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.267564 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xp9kx"] Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.268793 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xp9kx" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.292203 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.312771 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/179a20bc-72ca-4f86-8cd0-5c6df9211839-operator-scripts\") pod \"placement-3bb0-account-create-update-4s5gt\" (UID: \"179a20bc-72ca-4f86-8cd0-5c6df9211839\") " pod="openstack/placement-3bb0-account-create-update-4s5gt" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.312878 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs8zl\" (UniqueName: \"kubernetes.io/projected/179a20bc-72ca-4f86-8cd0-5c6df9211839-kube-api-access-rs8zl\") pod \"placement-3bb0-account-create-update-4s5gt\" (UID: \"179a20bc-72ca-4f86-8cd0-5c6df9211839\") " pod="openstack/placement-3bb0-account-create-update-4s5gt" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.313796 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/179a20bc-72ca-4f86-8cd0-5c6df9211839-operator-scripts\") pod \"placement-3bb0-account-create-update-4s5gt\" (UID: \"179a20bc-72ca-4f86-8cd0-5c6df9211839\") " pod="openstack/placement-3bb0-account-create-update-4s5gt" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.324744 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xp9kx"] Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.386064 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2hqz2"] Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.404629 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs8zl\" (UniqueName: \"kubernetes.io/projected/179a20bc-72ca-4f86-8cd0-5c6df9211839-kube-api-access-rs8zl\") pod \"placement-3bb0-account-create-update-4s5gt\" (UID: \"179a20bc-72ca-4f86-8cd0-5c6df9211839\") " pod="openstack/placement-3bb0-account-create-update-4s5gt" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.418955 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3bb0-account-create-update-4s5gt" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.419700 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2hqz2"] Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.419752 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksk2n\" (UniqueName: \"kubernetes.io/projected/eda15927-fac6-455b-8615-24f8f535c80a-kube-api-access-ksk2n\") pod \"root-account-create-update-xp9kx\" (UID: \"eda15927-fac6-455b-8615-24f8f535c80a\") " pod="openstack/root-account-create-update-xp9kx" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.419915 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda15927-fac6-455b-8615-24f8f535c80a-operator-scripts\") pod \"root-account-create-update-xp9kx\" (UID: \"eda15927-fac6-455b-8615-24f8f535c80a\") " pod="openstack/root-account-create-update-xp9kx" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.453135 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.453667 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="60fef7df-b0da-45e7-8dfe-434dacea4715" containerName="openstackclient" containerID="cri-o://e62730e74d5750aea233a6665a98c38e3fab63a273b9b58230cb2e7f26de724f" gracePeriod=2 Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.496729 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.510874 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-d684-account-create-update-ww984"] Jan 08 23:42:03 crc kubenswrapper[4945]: E0108 23:42:03.511339 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60fef7df-b0da-45e7-8dfe-434dacea4715" containerName="openstackclient" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.511351 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="60fef7df-b0da-45e7-8dfe-434dacea4715" containerName="openstackclient" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.511527 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="60fef7df-b0da-45e7-8dfe-434dacea4715" containerName="openstackclient" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.512191 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d684-account-create-update-ww984" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.528323 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d684-account-create-update-ww984"] Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.528775 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.529926 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda15927-fac6-455b-8615-24f8f535c80a-operator-scripts\") pod \"root-account-create-update-xp9kx\" (UID: \"eda15927-fac6-455b-8615-24f8f535c80a\") " pod="openstack/root-account-create-update-xp9kx" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.529984 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksk2n\" (UniqueName: \"kubernetes.io/projected/eda15927-fac6-455b-8615-24f8f535c80a-kube-api-access-ksk2n\") pod \"root-account-create-update-xp9kx\" (UID: \"eda15927-fac6-455b-8615-24f8f535c80a\") " pod="openstack/root-account-create-update-xp9kx" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.534279 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda15927-fac6-455b-8615-24f8f535c80a-operator-scripts\") pod \"root-account-create-update-xp9kx\" (UID: \"eda15927-fac6-455b-8615-24f8f535c80a\") " pod="openstack/root-account-create-update-xp9kx" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.626112 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.638008 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6xmd\" (UniqueName: \"kubernetes.io/projected/a6e1114b-d949-45e8-a640-d448d52ea983-kube-api-access-f6xmd\") pod \"barbican-d684-account-create-update-ww984\" (UID: \"a6e1114b-d949-45e8-a640-d448d52ea983\") " pod="openstack/barbican-d684-account-create-update-ww984" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.638071 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6e1114b-d949-45e8-a640-d448d52ea983-operator-scripts\") pod \"barbican-d684-account-create-update-ww984\" (UID: \"a6e1114b-d949-45e8-a640-d448d52ea983\") " pod="openstack/barbican-d684-account-create-update-ww984" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.644109 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksk2n\" (UniqueName: \"kubernetes.io/projected/eda15927-fac6-455b-8615-24f8f535c80a-kube-api-access-ksk2n\") pod \"root-account-create-update-xp9kx\" (UID: \"eda15927-fac6-455b-8615-24f8f535c80a\") " pod="openstack/root-account-create-update-xp9kx" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.669168 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d684-account-create-update-ng64c"] Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.720515 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.722603 4945 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ovsdbserver-sb-0" podUID="90046452-437b-4666-83a8-e8ee09bfc932" containerName="openstack-network-exporter" containerID="cri-o://d9317ebca4ad485773a2bb68895a59989eaf7ea993a34e6617668bb4a89d52c9" gracePeriod=300 Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.746932 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6xmd\" (UniqueName: \"kubernetes.io/projected/a6e1114b-d949-45e8-a640-d448d52ea983-kube-api-access-f6xmd\") pod \"barbican-d684-account-create-update-ww984\" (UID: \"a6e1114b-d949-45e8-a640-d448d52ea983\") " pod="openstack/barbican-d684-account-create-update-ww984" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.747553 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6e1114b-d949-45e8-a640-d448d52ea983-operator-scripts\") pod \"barbican-d684-account-create-update-ww984\" (UID: \"a6e1114b-d949-45e8-a640-d448d52ea983\") " pod="openstack/barbican-d684-account-create-update-ww984" Jan 08 23:42:03 crc kubenswrapper[4945]: E0108 23:42:03.756084 4945 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 08 23:42:03 crc kubenswrapper[4945]: E0108 23:42:03.756141 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-config-data podName:e920b84a-bd1b-4649-9cc0-e3b239d6a5b9 nodeName:}" failed. No retries permitted until 2026-01-08 23:42:04.256127241 +0000 UTC m=+1594.567286187 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-config-data") pod "rabbitmq-cell1-server-0" (UID: "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9") : configmap "rabbitmq-cell1-config-data" not found Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.758133 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6e1114b-d949-45e8-a640-d448d52ea983-operator-scripts\") pod \"barbican-d684-account-create-update-ww984\" (UID: \"a6e1114b-d949-45e8-a640-d448d52ea983\") " pod="openstack/barbican-d684-account-create-update-ww984" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.796713 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d684-account-create-update-ng64c"] Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.905168 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ee25-account-create-update-wfvwd"] Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.928397 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ee25-account-create-update-wfvwd" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.930223 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.948290 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6xmd\" (UniqueName: \"kubernetes.io/projected/a6e1114b-d949-45e8-a640-d448d52ea983-kube-api-access-f6xmd\") pod \"barbican-d684-account-create-update-ww984\" (UID: \"a6e1114b-d949-45e8-a640-d448d52ea983\") " pod="openstack/barbican-d684-account-create-update-ww984" Jan 08 23:42:03 crc kubenswrapper[4945]: I0108 23:42:03.959859 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xp9kx" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.100611 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08cf1f31-7393-4381-9c13-723fe4732c95" path="/var/lib/kubelet/pods/08cf1f31-7393-4381-9c13-723fe4732c95/volumes" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.109437 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de1550e-e5d5-4cba-bdc5-e56194b3446d" path="/var/lib/kubelet/pods/4de1550e-e5d5-4cba-bdc5-e56194b3446d/volumes" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.126862 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69fe9da9-1222-42c9-aefc-b051e72f81f7" path="/var/lib/kubelet/pods/69fe9da9-1222-42c9-aefc-b051e72f81f7/volumes" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.127631 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ee25-account-create-update-wfvwd"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.127661 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-84e9-account-create-update-jm7gq"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.133824 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84e9-account-create-update-jm7gq" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.148050 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.162366 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84e9-account-create-update-jm7gq"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.175644 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.176207 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="36817bdb-e28c-495c-9e26-005e53f3cc2a" containerName="openstack-network-exporter" containerID="cri-o://29134cac38be275897153b9e527a487bd6dc85a0149bae9cb21fa2cca5dc21f1" gracePeriod=300 Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.192685 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg5h9\" (UniqueName: \"kubernetes.io/projected/da59c88c-d7a5-4fb4-8b32-4c73be685b4f-kube-api-access-lg5h9\") pod \"cinder-ee25-account-create-update-wfvwd\" (UID: \"da59c88c-d7a5-4fb4-8b32-4c73be685b4f\") " pod="openstack/cinder-ee25-account-create-update-wfvwd" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.192785 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da59c88c-d7a5-4fb4-8b32-4c73be685b4f-operator-scripts\") pod \"cinder-ee25-account-create-update-wfvwd\" (UID: \"da59c88c-d7a5-4fb4-8b32-4c73be685b4f\") " pod="openstack/cinder-ee25-account-create-update-wfvwd" Jan 08 23:42:04 crc kubenswrapper[4945]: E0108 23:42:04.193474 4945 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-scripts: configmap "ovnnorthd-scripts" not found Jan 08 23:42:04 crc kubenswrapper[4945]: E0108 23:42:04.193528 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-scripts podName:eefc7456-a6c7-4442-aa3a-370a1f9b01fa nodeName:}" failed. No retries permitted until 2026-01-08 23:42:04.693508827 +0000 UTC m=+1595.004667773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-scripts") pod "ovn-northd-0" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa") : configmap "ovnnorthd-scripts" not found Jan 08 23:42:04 crc kubenswrapper[4945]: E0108 23:42:04.194066 4945 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-config: configmap "ovnnorthd-config" not found Jan 08 23:42:04 crc kubenswrapper[4945]: E0108 23:42:04.194095 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-config podName:eefc7456-a6c7-4442-aa3a-370a1f9b01fa nodeName:}" failed. No retries permitted until 2026-01-08 23:42:04.694087051 +0000 UTC m=+1595.005245997 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-config") pod "ovn-northd-0" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa") : configmap "ovnnorthd-config" not found Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.203279 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d684-account-create-update-ww984" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.204656 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d4b0-account-create-update-k68cp"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.210232 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d4b0-account-create-update-k68cp" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.230722 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.232127 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d4b0-account-create-update-k68cp"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.270332 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ee25-account-create-update-fhpch"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.301199 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da59c88c-d7a5-4fb4-8b32-4c73be685b4f-operator-scripts\") pod \"cinder-ee25-account-create-update-wfvwd\" (UID: \"da59c88c-d7a5-4fb4-8b32-4c73be685b4f\") " pod="openstack/cinder-ee25-account-create-update-wfvwd" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.301289 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90095c9-0666-4a15-878f-4b62280c3d00-operator-scripts\") pod \"neutron-84e9-account-create-update-jm7gq\" (UID: \"d90095c9-0666-4a15-878f-4b62280c3d00\") " pod="openstack/neutron-84e9-account-create-update-jm7gq" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.301311 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grj55\" (UniqueName: \"kubernetes.io/projected/d90095c9-0666-4a15-878f-4b62280c3d00-kube-api-access-grj55\") pod \"neutron-84e9-account-create-update-jm7gq\" (UID: \"d90095c9-0666-4a15-878f-4b62280c3d00\") " pod="openstack/neutron-84e9-account-create-update-jm7gq" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.301460 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg5h9\" (UniqueName: \"kubernetes.io/projected/da59c88c-d7a5-4fb4-8b32-4c73be685b4f-kube-api-access-lg5h9\") pod \"cinder-ee25-account-create-update-wfvwd\" (UID: \"da59c88c-d7a5-4fb4-8b32-4c73be685b4f\") " pod="openstack/cinder-ee25-account-create-update-wfvwd" Jan 08 23:42:04 crc kubenswrapper[4945]: E0108 23:42:04.305038 4945 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 08 23:42:04 crc kubenswrapper[4945]: E0108 23:42:04.305113 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-config-data podName:e920b84a-bd1b-4649-9cc0-e3b239d6a5b9 nodeName:}" failed. No retries permitted until 2026-01-08 23:42:05.305090963 +0000 UTC m=+1595.616249909 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-config-data") pod "rabbitmq-cell1-server-0" (UID: "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9") : configmap "rabbitmq-cell1-config-data" not found Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.306561 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da59c88c-d7a5-4fb4-8b32-4c73be685b4f-operator-scripts\") pod \"cinder-ee25-account-create-update-wfvwd\" (UID: \"da59c88c-d7a5-4fb4-8b32-4c73be685b4f\") " pod="openstack/cinder-ee25-account-create-update-wfvwd" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.327942 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ee25-account-create-update-fhpch"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.337358 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="90046452-437b-4666-83a8-e8ee09bfc932" containerName="ovsdbserver-sb" containerID="cri-o://230e3a7208f60ae6d4c8e1d0b533c31b74f6940ce43a05272ebfb0601f8979a9" gracePeriod=300 Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.357665 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg5h9\" (UniqueName: \"kubernetes.io/projected/da59c88c-d7a5-4fb4-8b32-4c73be685b4f-kube-api-access-lg5h9\") pod \"cinder-ee25-account-create-update-wfvwd\" (UID: \"da59c88c-d7a5-4fb4-8b32-4c73be685b4f\") " pod="openstack/cinder-ee25-account-create-update-wfvwd" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.359360 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84e9-account-create-update-4ktlt"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.379247 4945 generic.go:334] "Generic (PLEG): container finished" podID="90046452-437b-4666-83a8-e8ee09bfc932" containerID="d9317ebca4ad485773a2bb68895a59989eaf7ea993a34e6617668bb4a89d52c9" exitCode=2 Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.379318 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"90046452-437b-4666-83a8-e8ee09bfc932","Type":"ContainerDied","Data":"d9317ebca4ad485773a2bb68895a59989eaf7ea993a34e6617668bb4a89d52c9"} Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.386509 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-84e9-account-create-update-4ktlt"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.405636 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90095c9-0666-4a15-878f-4b62280c3d00-operator-scripts\") pod \"neutron-84e9-account-create-update-jm7gq\" (UID: \"d90095c9-0666-4a15-878f-4b62280c3d00\") " pod="openstack/neutron-84e9-account-create-update-jm7gq" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.405695 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grj55\" (UniqueName: \"kubernetes.io/projected/d90095c9-0666-4a15-878f-4b62280c3d00-kube-api-access-grj55\") pod \"neutron-84e9-account-create-update-jm7gq\" (UID: \"d90095c9-0666-4a15-878f-4b62280c3d00\") " pod="openstack/neutron-84e9-account-create-update-jm7gq" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.405860 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74d9ee-5e3b-4205-8ac4-1729d495861b-operator-scripts\") pod \"glance-d4b0-account-create-update-k68cp\" (UID: \"0e74d9ee-5e3b-4205-8ac4-1729d495861b\") " pod="openstack/glance-d4b0-account-create-update-k68cp" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.405909 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbfbs\" (UniqueName: \"kubernetes.io/projected/0e74d9ee-5e3b-4205-8ac4-1729d495861b-kube-api-access-dbfbs\") pod \"glance-d4b0-account-create-update-k68cp\" (UID: \"0e74d9ee-5e3b-4205-8ac4-1729d495861b\") " pod="openstack/glance-d4b0-account-create-update-k68cp" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.406872 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90095c9-0666-4a15-878f-4b62280c3d00-operator-scripts\") pod \"neutron-84e9-account-create-update-jm7gq\" (UID: \"d90095c9-0666-4a15-878f-4b62280c3d00\") " pod="openstack/neutron-84e9-account-create-update-jm7gq" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.408199 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-j2txr"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.432338 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-j2txr"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.503339 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grj55\" (UniqueName: \"kubernetes.io/projected/d90095c9-0666-4a15-878f-4b62280c3d00-kube-api-access-grj55\") pod \"neutron-84e9-account-create-update-jm7gq\" (UID: \"d90095c9-0666-4a15-878f-4b62280c3d00\") " pod="openstack/neutron-84e9-account-create-update-jm7gq" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.514110 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74d9ee-5e3b-4205-8ac4-1729d495861b-operator-scripts\") pod \"glance-d4b0-account-create-update-k68cp\" (UID: \"0e74d9ee-5e3b-4205-8ac4-1729d495861b\") " pod="openstack/glance-d4b0-account-create-update-k68cp" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.514170 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbfbs\" (UniqueName: \"kubernetes.io/projected/0e74d9ee-5e3b-4205-8ac4-1729d495861b-kube-api-access-dbfbs\") pod \"glance-d4b0-account-create-update-k68cp\" (UID: \"0e74d9ee-5e3b-4205-8ac4-1729d495861b\") " pod="openstack/glance-d4b0-account-create-update-k68cp" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.514983 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74d9ee-5e3b-4205-8ac4-1729d495861b-operator-scripts\") pod \"glance-d4b0-account-create-update-k68cp\" (UID: \"0e74d9ee-5e3b-4205-8ac4-1729d495861b\") " pod="openstack/glance-d4b0-account-create-update-k68cp" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.535063 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d4b0-account-create-update-b5sj2"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.542499 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84e9-account-create-update-jm7gq" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.558634 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="36817bdb-e28c-495c-9e26-005e53f3cc2a" containerName="ovsdbserver-nb" containerID="cri-o://324e6e48dab6c7d527635ca30c759a02e9ba3563bc130abb4ddbc3f4fe3ceb54" gracePeriod=300 Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.558915 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbfbs\" (UniqueName: \"kubernetes.io/projected/0e74d9ee-5e3b-4205-8ac4-1729d495861b-kube-api-access-dbfbs\") pod \"glance-d4b0-account-create-update-k68cp\" (UID: \"0e74d9ee-5e3b-4205-8ac4-1729d495861b\") " pod="openstack/glance-d4b0-account-create-update-k68cp" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.565871 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-hfhkg"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.572273 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d4b0-account-create-update-k68cp" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.599311 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-fr87r"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.629881 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee25-account-create-update-wfvwd" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.658408 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-d4b0-account-create-update-b5sj2"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.690501 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.713478 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-nh2p7"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.713726 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-nh2p7" podUID="242587ad-03ea-45d9-be99-4deb624ce107" containerName="openstack-network-exporter" containerID="cri-o://65d1afbf60fdd5284b8bb7770cb76316e3e898a189e17fc2b6f0138e0d6d3482" gracePeriod=30 Jan 08 23:42:04 crc kubenswrapper[4945]: E0108 23:42:04.723196 4945 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-config: configmap "ovnnorthd-config" not found Jan 08 23:42:04 crc kubenswrapper[4945]: E0108 23:42:04.723208 4945 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-scripts: configmap "ovnnorthd-scripts" not found Jan 08 23:42:04 crc kubenswrapper[4945]: E0108 23:42:04.723251 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-config podName:eefc7456-a6c7-4442-aa3a-370a1f9b01fa nodeName:}" failed. No retries permitted until 2026-01-08 23:42:05.723235846 +0000 UTC m=+1596.034394792 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-config") pod "ovn-northd-0" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa") : configmap "ovnnorthd-config" not found Jan 08 23:42:04 crc kubenswrapper[4945]: E0108 23:42:04.723284 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-scripts podName:eefc7456-a6c7-4442-aa3a-370a1f9b01fa nodeName:}" failed. No retries permitted until 2026-01-08 23:42:05.723264837 +0000 UTC m=+1596.034423783 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-scripts") pod "ovn-northd-0" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa") : configmap "ovnnorthd-scripts" not found Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.804087 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2784-account-create-update-ll7qt"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.807027 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2784-account-create-update-ll7qt" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.815093 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 08 23:42:04 crc kubenswrapper[4945]: E0108 23:42:04.825181 4945 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 08 23:42:04 crc kubenswrapper[4945]: E0108 23:42:04.825267 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-config-data podName:71eb40d2-e481-445d-99ea-948b918b862d nodeName:}" failed. No retries permitted until 2026-01-08 23:42:05.325247621 +0000 UTC m=+1595.636406647 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-config-data") pod "rabbitmq-server-0" (UID: "71eb40d2-e481-445d-99ea-948b918b862d") : configmap "rabbitmq-config-data" not found Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.832773 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9ea6-account-create-update-w9nlx"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.838128 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9ea6-account-create-update-w9nlx" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.843258 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.877255 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2784-account-create-update-ll7qt"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.895954 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9ea6-account-create-update-w9nlx"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.906408 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-9vrvr"] Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.927005 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pssp8\" (UniqueName: \"kubernetes.io/projected/b55a1df4-3a27-411d-b6ff-8c72c20f4d05-kube-api-access-pssp8\") pod \"nova-api-2784-account-create-update-ll7qt\" (UID: \"b55a1df4-3a27-411d-b6ff-8c72c20f4d05\") " pod="openstack/nova-api-2784-account-create-update-ll7qt" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.927069 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b55a1df4-3a27-411d-b6ff-8c72c20f4d05-operator-scripts\") pod \"nova-api-2784-account-create-update-ll7qt\" (UID: \"b55a1df4-3a27-411d-b6ff-8c72c20f4d05\") " pod="openstack/nova-api-2784-account-create-update-ll7qt" Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.934625 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-9vrvr"] Jan 08 23:42:04 crc kubenswrapper[4945]: W0108 23:42:04.948609 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod179a20bc_72ca_4f86_8cd0_5c6df9211839.slice/crio-f34715e2f1d4a845c572bcd6537dda003609d026e4ecbea6811c93eb8b0e9c47 WatchSource:0}: Error finding container f34715e2f1d4a845c572bcd6537dda003609d026e4ecbea6811c93eb8b0e9c47: Status 404 returned error can't find the container with id f34715e2f1d4a845c572bcd6537dda003609d026e4ecbea6811c93eb8b0e9c47 Jan 08 23:42:04 crc kubenswrapper[4945]: I0108 23:42:04.993951 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2784-account-create-update-nfhmc"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.021078 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2784-account-create-update-nfhmc"] Jan 08 23:42:05 crc kubenswrapper[4945]: E0108 23:42:05.030265 4945 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 08 23:42:05 crc kubenswrapper[4945]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 08 23:42:05 crc kubenswrapper[4945]: Jan 08 23:42:05 crc kubenswrapper[4945]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 08 23:42:05 crc kubenswrapper[4945]: Jan 08 23:42:05 crc kubenswrapper[4945]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 08 23:42:05 crc kubenswrapper[4945]: Jan 08 23:42:05 crc kubenswrapper[4945]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 08 23:42:05 crc kubenswrapper[4945]: Jan 08 23:42:05 crc kubenswrapper[4945]: 
if [ -n "placement" ]; then Jan 08 23:42:05 crc kubenswrapper[4945]: GRANT_DATABASE="placement" Jan 08 23:42:05 crc kubenswrapper[4945]: else Jan 08 23:42:05 crc kubenswrapper[4945]: GRANT_DATABASE="*" Jan 08 23:42:05 crc kubenswrapper[4945]: fi Jan 08 23:42:05 crc kubenswrapper[4945]: Jan 08 23:42:05 crc kubenswrapper[4945]: # going for maximum compatibility here: Jan 08 23:42:05 crc kubenswrapper[4945]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 08 23:42:05 crc kubenswrapper[4945]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 08 23:42:05 crc kubenswrapper[4945]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 08 23:42:05 crc kubenswrapper[4945]: # support updates Jan 08 23:42:05 crc kubenswrapper[4945]: Jan 08 23:42:05 crc kubenswrapper[4945]: $MYSQL_CMD < logger="UnhandledError" Jan 08 23:42:05 crc kubenswrapper[4945]: E0108 23:42:05.032095 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-3bb0-account-create-update-4s5gt" podUID="179a20bc-72ca-4f86-8cd0-5c6df9211839" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.034308 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.053451 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="eefc7456-a6c7-4442-aa3a-370a1f9b01fa" containerName="ovn-northd" containerID="cri-o://fb3097d8a9d3e193dbb4bde56076f63b970772a4be7adc268f6f465dcf3c9975" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.045585 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e6d9797-d686-44c0-918a-f17bac13b874-operator-scripts\") pod \"nova-cell0-9ea6-account-create-update-w9nlx\" (UID: \"8e6d9797-d686-44c0-918a-f17bac13b874\") " pod="openstack/nova-cell0-9ea6-account-create-update-w9nlx" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.053724 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwvmr\" (UniqueName: \"kubernetes.io/projected/8e6d9797-d686-44c0-918a-f17bac13b874-kube-api-access-qwvmr\") pod \"nova-cell0-9ea6-account-create-update-w9nlx\" (UID: \"8e6d9797-d686-44c0-918a-f17bac13b874\") " pod="openstack/nova-cell0-9ea6-account-create-update-w9nlx" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.053778 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pssp8\" (UniqueName: \"kubernetes.io/projected/b55a1df4-3a27-411d-b6ff-8c72c20f4d05-kube-api-access-pssp8\") pod \"nova-api-2784-account-create-update-ll7qt\" (UID: \"b55a1df4-3a27-411d-b6ff-8c72c20f4d05\") " pod="openstack/nova-api-2784-account-create-update-ll7qt" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.053862 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b55a1df4-3a27-411d-b6ff-8c72c20f4d05-operator-scripts\") pod \"nova-api-2784-account-create-update-ll7qt\" (UID: \"b55a1df4-3a27-411d-b6ff-8c72c20f4d05\") " pod="openstack/nova-api-2784-account-create-update-ll7qt" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.054769 4945 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b55a1df4-3a27-411d-b6ff-8c72c20f4d05-operator-scripts\") pod \"nova-api-2784-account-create-update-ll7qt\" (UID: \"b55a1df4-3a27-411d-b6ff-8c72c20f4d05\") " pod="openstack/nova-api-2784-account-create-update-ll7qt" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.060439 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="eefc7456-a6c7-4442-aa3a-370a1f9b01fa" containerName="openstack-network-exporter" containerID="cri-o://a785ea69394c633a9012de634007aaa1ea39fa590a19423eeb91abe2b39b2bdf" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.110485 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pssp8\" (UniqueName: \"kubernetes.io/projected/b55a1df4-3a27-411d-b6ff-8c72c20f4d05-kube-api-access-pssp8\") pod \"nova-api-2784-account-create-update-ll7qt\" (UID: \"b55a1df4-3a27-411d-b6ff-8c72c20f4d05\") " pod="openstack/nova-api-2784-account-create-update-ll7qt" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.164406 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e6d9797-d686-44c0-918a-f17bac13b874-operator-scripts\") pod \"nova-cell0-9ea6-account-create-update-w9nlx\" (UID: \"8e6d9797-d686-44c0-918a-f17bac13b874\") " pod="openstack/nova-cell0-9ea6-account-create-update-w9nlx" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.164471 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwvmr\" (UniqueName: \"kubernetes.io/projected/8e6d9797-d686-44c0-918a-f17bac13b874-kube-api-access-qwvmr\") pod \"nova-cell0-9ea6-account-create-update-w9nlx\" (UID: \"8e6d9797-d686-44c0-918a-f17bac13b874\") " pod="openstack/nova-cell0-9ea6-account-create-update-w9nlx" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.168716 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e6d9797-d686-44c0-918a-f17bac13b874-operator-scripts\") pod \"nova-cell0-9ea6-account-create-update-w9nlx\" (UID: \"8e6d9797-d686-44c0-918a-f17bac13b874\") " pod="openstack/nova-cell0-9ea6-account-create-update-w9nlx" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.189632 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9ea6-account-create-update-w2dkh"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.227157 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9ea6-account-create-update-w2dkh"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.228148 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2784-account-create-update-ll7qt" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.233690 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwvmr\" (UniqueName: \"kubernetes.io/projected/8e6d9797-d686-44c0-918a-f17bac13b874-kube-api-access-qwvmr\") pod \"nova-cell0-9ea6-account-create-update-w9nlx\" (UID: \"8e6d9797-d686-44c0-918a-f17bac13b874\") " pod="openstack/nova-cell0-9ea6-account-create-update-w9nlx" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.250243 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9ea6-account-create-update-w9nlx" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.261927 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-d4r6l"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.304264 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-d4r6l"] Jan 08 23:42:05 crc kubenswrapper[4945]: E0108 23:42:05.313497 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fb3097d8a9d3e193dbb4bde56076f63b970772a4be7adc268f6f465dcf3c9975" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 08 23:42:05 crc kubenswrapper[4945]: E0108 23:42:05.324535 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fb3097d8a9d3e193dbb4bde56076f63b970772a4be7adc268f6f465dcf3c9975" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 08 23:42:05 crc kubenswrapper[4945]: E0108 23:42:05.355404 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fb3097d8a9d3e193dbb4bde56076f63b970772a4be7adc268f6f465dcf3c9975" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 08 23:42:05 crc kubenswrapper[4945]: E0108 23:42:05.355531 4945 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="eefc7456-a6c7-4442-aa3a-370a1f9b01fa" containerName="ovn-northd" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.355787 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d4586c964-cfb7b"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.358635 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-d4586c964-cfb7b" podUID="3c1913ce-ea65-4745-baf8-621191c50b55" containerName="placement-log" containerID="cri-o://b249e55df18b74f9795153d23e3a9288fae557e2d8c16aad179207bff463acb4" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.359449 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-d4586c964-cfb7b" podUID="3c1913ce-ea65-4745-baf8-621191c50b55" containerName="placement-api" containerID="cri-o://bb6df235b3481ee3764d466ecffc6041436a32a6b1b67f636a1ce3c33bbc51e5" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: E0108 23:42:05.387784 4945 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 08 23:42:05 crc kubenswrapper[4945]: E0108 23:42:05.387858 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-config-data podName:e920b84a-bd1b-4649-9cc0-e3b239d6a5b9 nodeName:}" failed. No retries permitted until 2026-01-08 23:42:07.38784259 +0000 UTC m=+1597.699001536 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-config-data") pod "rabbitmq-cell1-server-0" (UID: "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9") : configmap "rabbitmq-cell1-config-data" not found Jan 08 23:42:05 crc kubenswrapper[4945]: E0108 23:42:05.388163 4945 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 08 23:42:05 crc kubenswrapper[4945]: E0108 23:42:05.388237 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-config-data podName:71eb40d2-e481-445d-99ea-948b918b862d nodeName:}" failed. No retries permitted until 2026-01-08 23:42:06.388217839 +0000 UTC m=+1596.699376785 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-config-data") pod "rabbitmq-server-0" (UID: "71eb40d2-e481-445d-99ea-948b918b862d") : configmap "rabbitmq-config-data" not found Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.448446 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3bb0-account-create-update-4s5gt" event={"ID":"179a20bc-72ca-4f86-8cd0-5c6df9211839","Type":"ContainerStarted","Data":"f34715e2f1d4a845c572bcd6537dda003609d026e4ecbea6811c93eb8b0e9c47"} Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.457681 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ae6f-account-create-update-d4jd4"] Jan 08 23:42:05 crc kubenswrapper[4945]: E0108 23:42:05.458524 4945 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 08 23:42:05 crc kubenswrapper[4945]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 08 23:42:05 crc kubenswrapper[4945]: Jan 08 23:42:05 crc kubenswrapper[4945]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 08 23:42:05 crc kubenswrapper[4945]: Jan 08 23:42:05 crc kubenswrapper[4945]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 08 23:42:05 crc kubenswrapper[4945]: Jan 08 23:42:05 crc kubenswrapper[4945]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 08 23:42:05 crc kubenswrapper[4945]: Jan 08 23:42:05 crc kubenswrapper[4945]: if [ -n "placement" ]; then Jan 08 23:42:05 crc kubenswrapper[4945]: GRANT_DATABASE="placement" Jan 08 23:42:05 crc kubenswrapper[4945]: else Jan 08 23:42:05 crc kubenswrapper[4945]: GRANT_DATABASE="*" Jan 08 23:42:05 crc kubenswrapper[4945]: fi Jan 08 23:42:05 crc kubenswrapper[4945]: Jan 08 23:42:05 crc kubenswrapper[4945]: # going for maximum compatibility here: Jan 08 23:42:05 crc kubenswrapper[4945]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 08 23:42:05 crc kubenswrapper[4945]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 08 23:42:05 crc kubenswrapper[4945]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 08 23:42:05 crc kubenswrapper[4945]: # support updates Jan 08 23:42:05 crc kubenswrapper[4945]: Jan 08 23:42:05 crc kubenswrapper[4945]: $MYSQL_CMD < logger="UnhandledError" Jan 08 23:42:05 crc kubenswrapper[4945]: E0108 23:42:05.472774 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-3bb0-account-create-update-4s5gt" podUID="179a20bc-72ca-4f86-8cd0-5c6df9211839" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.503661 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-94g76"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.519599 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_90046452-437b-4666-83a8-e8ee09bfc932/ovsdbserver-sb/0.log" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.519649 4945 generic.go:334] "Generic (PLEG): container finished" podID="90046452-437b-4666-83a8-e8ee09bfc932" containerID="230e3a7208f60ae6d4c8e1d0b533c31b74f6940ce43a05272ebfb0601f8979a9" exitCode=143 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.519725 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"90046452-437b-4666-83a8-e8ee09bfc932","Type":"ContainerDied","Data":"230e3a7208f60ae6d4c8e1d0b533c31b74f6940ce43a05272ebfb0601f8979a9"} Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.560791 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ae6f-account-create-update-d4jd4"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.566821 4945 generic.go:334] "Generic (PLEG): container finished" podID="eefc7456-a6c7-4442-aa3a-370a1f9b01fa" containerID="a785ea69394c633a9012de634007aaa1ea39fa590a19423eeb91abe2b39b2bdf" exitCode=2 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.566910 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-94g76"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.566939 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"eefc7456-a6c7-4442-aa3a-370a1f9b01fa","Type":"ContainerDied","Data":"a785ea69394c633a9012de634007aaa1ea39fa590a19423eeb91abe2b39b2bdf"} Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.573317 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-rckwr"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.582625 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-nqm45"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.589798 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-nh2p7_242587ad-03ea-45d9-be99-4deb624ce107/openstack-network-exporter/0.log" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.589855 4945 generic.go:334] "Generic (PLEG): container finished" podID="242587ad-03ea-45d9-be99-4deb624ce107" containerID="65d1afbf60fdd5284b8bb7770cb76316e3e898a189e17fc2b6f0138e0d6d3482" exitCode=2 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.590026 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-nh2p7" 
event={"ID":"242587ad-03ea-45d9-be99-4deb624ce107","Type":"ContainerDied","Data":"65d1afbf60fdd5284b8bb7770cb76316e3e898a189e17fc2b6f0138e0d6d3482"} Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.623384 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-nqm45"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.647039 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_36817bdb-e28c-495c-9e26-005e53f3cc2a/ovsdbserver-nb/0.log" Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.647220 4945 generic.go:334] "Generic (PLEG): container finished" podID="36817bdb-e28c-495c-9e26-005e53f3cc2a" containerID="29134cac38be275897153b9e527a487bd6dc85a0149bae9cb21fa2cca5dc21f1" exitCode=2 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.647293 4945 generic.go:334] "Generic (PLEG): container finished" podID="36817bdb-e28c-495c-9e26-005e53f3cc2a" containerID="324e6e48dab6c7d527635ca30c759a02e9ba3563bc130abb4ddbc3f4fe3ceb54" exitCode=143 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.647318 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"36817bdb-e28c-495c-9e26-005e53f3cc2a","Type":"ContainerDied","Data":"29134cac38be275897153b9e527a487bd6dc85a0149bae9cb21fa2cca5dc21f1"} Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.647349 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"36817bdb-e28c-495c-9e26-005e53f3cc2a","Type":"ContainerDied","Data":"324e6e48dab6c7d527635ca30c759a02e9ba3563bc130abb4ddbc3f4fe3ceb54"} Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.655929 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-rckwr"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.712094 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-x2nr9"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.759056 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-x2nr9"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.777056 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3bb0-account-create-update-4s5gt"] Jan 08 23:42:05 crc kubenswrapper[4945]: E0108 23:42:05.824753 4945 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-scripts: configmap "ovnnorthd-scripts" not found Jan 08 23:42:05 crc kubenswrapper[4945]: E0108 23:42:05.824842 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-scripts podName:eefc7456-a6c7-4442-aa3a-370a1f9b01fa nodeName:}" failed. No retries permitted until 2026-01-08 23:42:07.824825957 +0000 UTC m=+1598.135984903 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-scripts") pod "ovn-northd-0" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa") : configmap "ovnnorthd-scripts" not found Jan 08 23:42:05 crc kubenswrapper[4945]: E0108 23:42:05.825260 4945 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-config: configmap "ovnnorthd-config" not found Jan 08 23:42:05 crc kubenswrapper[4945]: E0108 23:42:05.825301 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-config podName:eefc7456-a6c7-4442-aa3a-370a1f9b01fa nodeName:}" failed. 
No retries permitted until 2026-01-08 23:42:07.825290958 +0000 UTC m=+1598.136449904 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-config") pod "ovn-northd-0" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa") : configmap "ovnnorthd-config" not found Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.833315 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3bb0-account-create-update-4s5gt"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.850381 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-l7prv"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.850676 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" podUID="ca0aa3d3-8093-42f1-9fa6-ad3883441ab2" containerName="dnsmasq-dns" containerID="cri-o://6439494f8f4be73fcba62cb9f436505c4882f4e970c1fb7590409804556a3683" gracePeriod=10 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.862957 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.863502 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="account-server" containerID="cri-o://94d63f7570f82deb07c6532e9070f8e84e1960af5420e611a448105b6a81f23c" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.863896 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="swift-recon-cron" containerID="cri-o://b52e8edb411f92c614f8d6aa60a5cb5603fdace0b3e0a587a7550deca17aba43" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.863966 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="rsync" containerID="cri-o://186fb49fb8486d9e304c533cbcfeed9ccab02336600a6e6145f4875e589c8bcf" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.864038 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-expirer" containerID="cri-o://ab6603a3508efd8f122853e7de9d014c4096828cea6d7b8ddcd09680dfd09703" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.864103 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-updater" containerID="cri-o://9d661110649d6472ccf5654410ba504c83e6d83fdf1b9e883d95cf3cdb834150" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.864148 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-auditor" containerID="cri-o://47712dea1c8621036b0e219b143468cdb3726cb8ddcfcb6346d22152b94ca253" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.864205 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-replicator" 
containerID="cri-o://653ebb4c08001d3fb7e913750104824287889d0831d244e7578069ca36f52143" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.864280 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-server" containerID="cri-o://ebf5f6750c5f563b79b21cfb01492b314ed6c060af246db80a7f95a0bac985e5" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.864321 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="container-updater" containerID="cri-o://08d3886a5fc242d28f1801ee0870927673393d82a3aaceb6a629f6836c33bea3" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.864354 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="container-auditor" containerID="cri-o://0987f6c6262bc7794b82933b10e6814fc8b5c2349c5a58187b58990fed1c1067" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.864391 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="container-replicator" containerID="cri-o://2153a439bf6606bed6abd7d5895369e18116740cf720ca93faed1900066b52d6" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.864430 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="container-server" containerID="cri-o://d6483b681f8297d29a12085fb8f2557002353017201640c168545f0d40ae970c" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.864475 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="account-reaper" containerID="cri-o://d8b368fe8df8fd31efcb913ab13e714d53a8cb5596d52d646a4d27b08fc38a4e" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.864510 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="account-auditor" containerID="cri-o://5c6715331a0d405b0d603ba4999b7b101becbf1593c09d544be436b391b2a9fa" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.864556 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="account-replicator" containerID="cri-o://27f56df8852defe9ab1399615875fbe1e46ea8941ff605f08e9f61879ff1c6b9" gracePeriod=30 Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.876360 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-qpx88"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.898487 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-qpx88"] Jan 08 23:42:05 crc kubenswrapper[4945]: I0108 23:42:05.900278 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" podUID="ca0aa3d3-8093-42f1-9fa6-ad3883441ab2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.198:5353: connect: 
connection refused" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.001341 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8" Jan 08 23:42:06 crc kubenswrapper[4945]: E0108 23:42:06.001695 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:42:06 crc kubenswrapper[4945]: E0108 23:42:06.031777 4945 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 08 23:42:06 crc kubenswrapper[4945]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 08 23:42:06 crc kubenswrapper[4945]: + source /usr/local/bin/container-scripts/functions Jan 08 23:42:06 crc kubenswrapper[4945]: ++ OVNBridge=br-int Jan 08 23:42:06 crc kubenswrapper[4945]: ++ OVNRemote=tcp:localhost:6642 Jan 08 23:42:06 crc kubenswrapper[4945]: ++ OVNEncapType=geneve Jan 08 23:42:06 crc kubenswrapper[4945]: ++ OVNAvailabilityZones= Jan 08 23:42:06 crc kubenswrapper[4945]: ++ EnableChassisAsGateway=true Jan 08 23:42:06 crc kubenswrapper[4945]: ++ PhysicalNetworks= Jan 08 23:42:06 crc kubenswrapper[4945]: ++ OVNHostName= Jan 08 23:42:06 crc kubenswrapper[4945]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 08 23:42:06 crc kubenswrapper[4945]: ++ ovs_dir=/var/lib/openvswitch Jan 08 23:42:06 crc kubenswrapper[4945]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 08 23:42:06 crc kubenswrapper[4945]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 08 23:42:06 crc kubenswrapper[4945]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 08 23:42:06 crc kubenswrapper[4945]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 08 23:42:06 crc kubenswrapper[4945]: + sleep 0.5 Jan 08 23:42:06 crc kubenswrapper[4945]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 08 23:42:06 crc kubenswrapper[4945]: + sleep 0.5 Jan 08 23:42:06 crc kubenswrapper[4945]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 08 23:42:06 crc kubenswrapper[4945]: + cleanup_ovsdb_server_semaphore Jan 08 23:42:06 crc kubenswrapper[4945]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 08 23:42:06 crc kubenswrapper[4945]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 08 23:42:06 crc kubenswrapper[4945]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-hfhkg" message=< Jan 08 23:42:06 crc kubenswrapper[4945]: Exiting ovsdb-server (5) [ OK ] Jan 08 23:42:06 crc kubenswrapper[4945]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 08 23:42:06 crc kubenswrapper[4945]: + source /usr/local/bin/container-scripts/functions Jan 08 23:42:06 crc kubenswrapper[4945]: ++ OVNBridge=br-int Jan 08 23:42:06 crc kubenswrapper[4945]: ++ OVNRemote=tcp:localhost:6642 Jan 08 23:42:06 crc kubenswrapper[4945]: ++ OVNEncapType=geneve Jan 08 23:42:06 crc kubenswrapper[4945]: ++ OVNAvailabilityZones= Jan 08 23:42:06 crc kubenswrapper[4945]: ++ EnableChassisAsGateway=true Jan 08 23:42:06 crc kubenswrapper[4945]: ++ PhysicalNetworks= Jan 08 23:42:06 crc kubenswrapper[4945]: ++ OVNHostName= Jan 08 23:42:06 crc kubenswrapper[4945]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 08 23:42:06 crc kubenswrapper[4945]: ++ ovs_dir=/var/lib/openvswitch Jan 08 23:42:06 crc kubenswrapper[4945]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 08 23:42:06 crc kubenswrapper[4945]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 08 23:42:06 crc kubenswrapper[4945]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 08 23:42:06 crc kubenswrapper[4945]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 08 23:42:06 crc kubenswrapper[4945]: + sleep 0.5 Jan 08 23:42:06 crc kubenswrapper[4945]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 08 23:42:06 crc kubenswrapper[4945]: + sleep 0.5 Jan 08 23:42:06 crc kubenswrapper[4945]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 08 23:42:06 crc kubenswrapper[4945]: + cleanup_ovsdb_server_semaphore Jan 08 23:42:06 crc kubenswrapper[4945]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 08 23:42:06 crc kubenswrapper[4945]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 08 23:42:06 crc kubenswrapper[4945]: > Jan 08 23:42:06 crc kubenswrapper[4945]: E0108 23:42:06.031831 4945 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 08 23:42:06 crc kubenswrapper[4945]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 08 23:42:06 crc kubenswrapper[4945]: + source /usr/local/bin/container-scripts/functions Jan 08 23:42:06 crc kubenswrapper[4945]: ++ OVNBridge=br-int Jan 08 23:42:06 crc kubenswrapper[4945]: ++ OVNRemote=tcp:localhost:6642 Jan 08 23:42:06 crc kubenswrapper[4945]: ++ OVNEncapType=geneve Jan 08 23:42:06 crc kubenswrapper[4945]: ++ OVNAvailabilityZones= Jan 08 23:42:06 crc kubenswrapper[4945]: ++ EnableChassisAsGateway=true Jan 08 23:42:06 crc kubenswrapper[4945]: ++ PhysicalNetworks= Jan 08 23:42:06 crc kubenswrapper[4945]: ++ OVNHostName= Jan 08 23:42:06 crc kubenswrapper[4945]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 08 23:42:06 crc kubenswrapper[4945]: ++ ovs_dir=/var/lib/openvswitch Jan 08 23:42:06 crc kubenswrapper[4945]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 08 23:42:06 crc kubenswrapper[4945]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 08 23:42:06 crc kubenswrapper[4945]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 08 23:42:06 crc kubenswrapper[4945]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 08 23:42:06 crc kubenswrapper[4945]: + sleep 0.5 Jan 08 23:42:06 crc kubenswrapper[4945]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 08 23:42:06 crc kubenswrapper[4945]: + sleep 0.5 Jan 08 23:42:06 crc kubenswrapper[4945]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 08 23:42:06 crc kubenswrapper[4945]: + cleanup_ovsdb_server_semaphore Jan 08 23:42:06 crc kubenswrapper[4945]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 08 23:42:06 crc kubenswrapper[4945]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 08 23:42:06 crc kubenswrapper[4945]: > pod="openstack/ovn-controller-ovs-hfhkg" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovsdb-server" containerID="cri-o://9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.031867 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-hfhkg" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovsdb-server" containerID="cri-o://9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" gracePeriod=29 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.062290 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-hfhkg" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovs-vswitchd" containerID="cri-o://452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" gracePeriod=29 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.077706 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0101c256-7c32-4906-897c-112a6c686f66" path="/var/lib/kubelet/pods/0101c256-7c32-4906-897c-112a6c686f66/volumes" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.078540 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050d08ce-2edb-4748-ad2d-de4183cd0188" path="/var/lib/kubelet/pods/050d08ce-2edb-4748-ad2d-de4183cd0188/volumes" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.079096 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d8ce681-eb6b-419d-ba7e-ab78f58c08a8" path="/var/lib/kubelet/pods/0d8ce681-eb6b-419d-ba7e-ab78f58c08a8/volumes" Jan 08 23:42:06 crc kubenswrapper[4945]: E0108 23:42:06.085334 4945 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 08 23:42:06 crc kubenswrapper[4945]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: if [ -n "" ]; then Jan 08 23:42:06 crc kubenswrapper[4945]: GRANT_DATABASE="" Jan 08 23:42:06 crc kubenswrapper[4945]: else Jan 08 23:42:06 crc kubenswrapper[4945]: GRANT_DATABASE="*" Jan 08 23:42:06 crc kubenswrapper[4945]: fi Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: # going for maximum compatibility here: Jan 08 23:42:06 crc kubenswrapper[4945]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 08 23:42:06 crc kubenswrapper[4945]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 08 23:42:06 crc kubenswrapper[4945]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 08 23:42:06 crc kubenswrapper[4945]: # support updates Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: $MYSQL_CMD < logger="UnhandledError" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.085897 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1155ea44-2cab-445e-a621-fbd85a2b31a9" path="/var/lib/kubelet/pods/1155ea44-2cab-445e-a621-fbd85a2b31a9/volumes" Jan 08 23:42:06 crc kubenswrapper[4945]: E0108 23:42:06.086690 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-xp9kx" podUID="eda15927-fac6-455b-8615-24f8f535c80a" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.086812 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c46c438-5dec-4a52-b24e-110451d11489" path="/var/lib/kubelet/pods/1c46c438-5dec-4a52-b24e-110451d11489/volumes" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.088761 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e56604b-5a70-4403-9e9e-4842d685fadd" path="/var/lib/kubelet/pods/1e56604b-5a70-4403-9e9e-4842d685fadd/volumes" Jan 08 23:42:06 crc kubenswrapper[4945]: E0108 23:42:06.089536 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 230e3a7208f60ae6d4c8e1d0b533c31b74f6940ce43a05272ebfb0601f8979a9 is running failed: container process not found" containerID="230e3a7208f60ae6d4c8e1d0b533c31b74f6940ce43a05272ebfb0601f8979a9" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.090857 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30241cbb-d52a-4c7f-9d0c-2d44522952f7" path="/var/lib/kubelet/pods/30241cbb-d52a-4c7f-9d0c-2d44522952f7/volumes" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.091503 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0" path="/var/lib/kubelet/pods/3a21fdf3-fec0-40d2-b6ea-95b2808fe1e0/volumes" Jan 08 23:42:06 crc kubenswrapper[4945]: E0108 23:42:06.093403 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 230e3a7208f60ae6d4c8e1d0b533c31b74f6940ce43a05272ebfb0601f8979a9 is running failed: container process not found" containerID="230e3a7208f60ae6d4c8e1d0b533c31b74f6940ce43a05272ebfb0601f8979a9" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.097041 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4442a04f-05ad-4da6-8312-74cbce0ed2f1" path="/var/lib/kubelet/pods/4442a04f-05ad-4da6-8312-74cbce0ed2f1/volumes" Jan 08 23:42:06 crc kubenswrapper[4945]: E0108 23:42:06.099024 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 230e3a7208f60ae6d4c8e1d0b533c31b74f6940ce43a05272ebfb0601f8979a9 is running failed: container process not found" containerID="230e3a7208f60ae6d4c8e1d0b533c31b74f6940ce43a05272ebfb0601f8979a9" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 08 23:42:06 crc kubenswrapper[4945]: E0108 23:42:06.099100 4945 
prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 230e3a7208f60ae6d4c8e1d0b533c31b74f6940ce43a05272ebfb0601f8979a9 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="90046452-437b-4666-83a8-e8ee09bfc932" containerName="ovsdbserver-sb" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.102435 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64fc29fe-804c-4553-811c-014595972fbd" path="/var/lib/kubelet/pods/64fc29fe-804c-4553-811c-014595972fbd/volumes" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.103135 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dd481cc-b375-4a3a-b41c-2690888844e6" path="/var/lib/kubelet/pods/6dd481cc-b375-4a3a-b41c-2690888844e6/volumes" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.103926 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91d2a120-b7c1-44e5-a3c0-6720acab34a7" path="/var/lib/kubelet/pods/91d2a120-b7c1-44e5-a3c0-6720acab34a7/volumes" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.107617 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d" path="/var/lib/kubelet/pods/ecf1ba9a-68ca-46ac-b5f3-5ec6f7acdf5d/volumes" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.108290 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc565fd6-de46-476f-9dc4-8e53aad38fdd" path="/var/lib/kubelet/pods/fc565fd6-de46-476f-9dc4-8e53aad38fdd/volumes" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.108888 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-lrvhj"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.108919 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-lrvhj"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.108934 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.108952 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.108967 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6794547bf7-wqlnm"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.108979 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.109233 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6794547bf7-wqlnm" podUID="3b682d87-6d87-4d38-b1c5-a5e4c3664472" containerName="neutron-api" containerID="cri-o://d5e19d3d92fe8055cf6b5088d170ddda70b5ab7d24cd3f1d890303c9f017d30d" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.109370 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6794547bf7-wqlnm" podUID="3b682d87-6d87-4d38-b1c5-a5e4c3664472" containerName="neutron-httpd" containerID="cri-o://40e878309fb2714dc92ffc1ca85d0a0b40ba57da80d5ce071bad31bc2db4462c" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.109720 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2eb23b1e-c7b1-465a-a91c-6042942e604a" containerName="glance-log" 
containerID="cri-o://654dfd0dae13b6eca5059e86d0a2d97564f19b2b01579e9aca119bd08b290b5c" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.109424 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a" containerName="cinder-scheduler" containerID="cri-o://cbf93118fa8eb1c82045354d9a9fe146f41f2d5df8fc45d489954e2e3b4a6725" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.110818 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a" containerName="probe" containerID="cri-o://a1ea98a5191a9c7ea8ad3313abd10e08aea6b734652be5308b7a4b1889a1edf1" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.110925 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2eb23b1e-c7b1-465a-a91c-6042942e604a" containerName="glance-httpd" containerID="cri-o://f67a08265ea88bae6d39299224d2a2604f867f86d99af833fa2c5deefc166ff7" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.121585 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="04a2b873-3034-4b9f-9daf-81db6749d45f" containerName="cinder-api-log" containerID="cri-o://617e103fd47ab70027896060185afd85b04295da494b0f4b35c58a7ba8a8d5e8" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.121768 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="04a2b873-3034-4b9f-9daf-81db6749d45f" containerName="cinder-api" containerID="cri-o://ebe090f7ada13f633224e0bbcee404b72c09adfb8c09163bb99a6a8d5ca17ea4" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.174621 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-f2zr9"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.174647 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-nh2p7_242587ad-03ea-45d9-be99-4deb624ce107/openstack-network-exporter/0.log" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.175072 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.286958 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-f2zr9"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.301289 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.335032 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ee25-account-create-update-wfvwd"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.335374 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_36817bdb-e28c-495c-9e26-005e53f3cc2a/ovsdbserver-nb/0.log" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.335457 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.343127 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.343480 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" containerName="glance-log" containerID="cri-o://15641b43f05f79e74699bfe52baf19315b239ba529af80999ae5807b2745e479" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.344405 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" containerName="glance-httpd" containerID="cri-o://a3b7d465ce7932bc7a2c58b3a8d58a6d40a84dc47fe41a15fffd4d2e75e42570" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.374839 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242587ad-03ea-45d9-be99-4deb624ce107-combined-ca-bundle\") pod \"242587ad-03ea-45d9-be99-4deb624ce107\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.375077 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36817bdb-e28c-495c-9e26-005e53f3cc2a-scripts\") pod \"36817bdb-e28c-495c-9e26-005e53f3cc2a\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.375183 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/242587ad-03ea-45d9-be99-4deb624ce107-metrics-certs-tls-certs\") pod \"242587ad-03ea-45d9-be99-4deb624ce107\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.375247 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-combined-ca-bundle\") pod \"36817bdb-e28c-495c-9e26-005e53f3cc2a\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.375359 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-metrics-certs-tls-certs\") pod \"36817bdb-e28c-495c-9e26-005e53f3cc2a\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.375441 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-ovsdbserver-nb-tls-certs\") pod \"36817bdb-e28c-495c-9e26-005e53f3cc2a\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.375539 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"36817bdb-e28c-495c-9e26-005e53f3cc2a\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.375610 4945 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/36817bdb-e28c-495c-9e26-005e53f3cc2a-ovsdb-rundir\") pod \"36817bdb-e28c-495c-9e26-005e53f3cc2a\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.375669 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/242587ad-03ea-45d9-be99-4deb624ce107-ovs-rundir\") pod \"242587ad-03ea-45d9-be99-4deb624ce107\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.375895 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36817bdb-e28c-495c-9e26-005e53f3cc2a-config\") pod \"36817bdb-e28c-495c-9e26-005e53f3cc2a\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.376145 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq6mb\" (UniqueName: \"kubernetes.io/projected/36817bdb-e28c-495c-9e26-005e53f3cc2a-kube-api-access-gq6mb\") pod \"36817bdb-e28c-495c-9e26-005e53f3cc2a\" (UID: \"36817bdb-e28c-495c-9e26-005e53f3cc2a\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.376232 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvlph\" (UniqueName: \"kubernetes.io/projected/242587ad-03ea-45d9-be99-4deb624ce107-kube-api-access-kvlph\") pod \"242587ad-03ea-45d9-be99-4deb624ce107\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.376320 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/242587ad-03ea-45d9-be99-4deb624ce107-config\") pod \"242587ad-03ea-45d9-be99-4deb624ce107\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.376391 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/242587ad-03ea-45d9-be99-4deb624ce107-ovn-rundir\") pod \"242587ad-03ea-45d9-be99-4deb624ce107\" (UID: \"242587ad-03ea-45d9-be99-4deb624ce107\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.376789 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/242587ad-03ea-45d9-be99-4deb624ce107-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "242587ad-03ea-45d9-be99-4deb624ce107" (UID: "242587ad-03ea-45d9-be99-4deb624ce107"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.379666 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36817bdb-e28c-495c-9e26-005e53f3cc2a-scripts" (OuterVolumeSpecName: "scripts") pod "36817bdb-e28c-495c-9e26-005e53f3cc2a" (UID: "36817bdb-e28c-495c-9e26-005e53f3cc2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.379852 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36817bdb-e28c-495c-9e26-005e53f3cc2a-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "36817bdb-e28c-495c-9e26-005e53f3cc2a" (UID: "36817bdb-e28c-495c-9e26-005e53f3cc2a"). 
InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.380069 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/242587ad-03ea-45d9-be99-4deb624ce107-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "242587ad-03ea-45d9-be99-4deb624ce107" (UID: "242587ad-03ea-45d9-be99-4deb624ce107"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.380453 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36817bdb-e28c-495c-9e26-005e53f3cc2a-config" (OuterVolumeSpecName: "config") pod "36817bdb-e28c-495c-9e26-005e53f3cc2a" (UID: "36817bdb-e28c-495c-9e26-005e53f3cc2a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.389277 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84e9-account-create-update-jm7gq"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.398082 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/242587ad-03ea-45d9-be99-4deb624ce107-config" (OuterVolumeSpecName: "config") pod "242587ad-03ea-45d9-be99-4deb624ce107" (UID: "242587ad-03ea-45d9-be99-4deb624ce107"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.398424 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36817bdb-e28c-495c-9e26-005e53f3cc2a-kube-api-access-gq6mb" (OuterVolumeSpecName: "kube-api-access-gq6mb") pod "36817bdb-e28c-495c-9e26-005e53f3cc2a" (UID: "36817bdb-e28c-495c-9e26-005e53f3cc2a"). InnerVolumeSpecName "kube-api-access-gq6mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.399842 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-ckkb7"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.401814 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/242587ad-03ea-45d9-be99-4deb624ce107-kube-api-access-kvlph" (OuterVolumeSpecName: "kube-api-access-kvlph") pod "242587ad-03ea-45d9-be99-4deb624ce107" (UID: "242587ad-03ea-45d9-be99-4deb624ce107"). InnerVolumeSpecName "kube-api-access-kvlph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.409804 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-ckkb7"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.422341 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" containerName="rabbitmq" containerID="cri-o://0e4014df7512e89b5d332f842e50840d513c77310ddfa321933cdc5b307230c9" gracePeriod=604800 Jan 08 23:42:06 crc kubenswrapper[4945]: E0108 23:42:06.451242 4945 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 08 23:42:06 crc kubenswrapper[4945]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: if [ -n "barbican" ]; then Jan 08 23:42:06 crc kubenswrapper[4945]: GRANT_DATABASE="barbican" Jan 08 23:42:06 crc kubenswrapper[4945]: else Jan 08 23:42:06 crc kubenswrapper[4945]: GRANT_DATABASE="*" Jan 08 23:42:06 crc kubenswrapper[4945]: fi Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: # going for maximum compatibility here: Jan 08 23:42:06 crc kubenswrapper[4945]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 08 23:42:06 crc kubenswrapper[4945]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 08 23:42:06 crc kubenswrapper[4945]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 08 23:42:06 crc kubenswrapper[4945]: # support updates Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: $MYSQL_CMD < logger="UnhandledError" Jan 08 23:42:06 crc kubenswrapper[4945]: E0108 23:42:06.454244 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-d684-account-create-update-ww984" podUID="a6e1114b-d949-45e8-a640-d448d52ea983" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.466505 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "36817bdb-e28c-495c-9e26-005e53f3cc2a" (UID: "36817bdb-e28c-495c-9e26-005e53f3cc2a"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.479775 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36817bdb-e28c-495c-9e26-005e53f3cc2a-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.479824 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq6mb\" (UniqueName: \"kubernetes.io/projected/36817bdb-e28c-495c-9e26-005e53f3cc2a-kube-api-access-gq6mb\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.479836 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvlph\" (UniqueName: \"kubernetes.io/projected/242587ad-03ea-45d9-be99-4deb624ce107-kube-api-access-kvlph\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.479847 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/242587ad-03ea-45d9-be99-4deb624ce107-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.479857 4945 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/242587ad-03ea-45d9-be99-4deb624ce107-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.479869 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36817bdb-e28c-495c-9e26-005e53f3cc2a-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.479895 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.479905 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/36817bdb-e28c-495c-9e26-005e53f3cc2a-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.479914 4945 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/242587ad-03ea-45d9-be99-4deb624ce107-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:06 crc kubenswrapper[4945]: E0108 23:42:06.480161 4945 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 08 23:42:06 crc kubenswrapper[4945]: E0108 23:42:06.480267 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-config-data podName:71eb40d2-e481-445d-99ea-948b918b862d nodeName:}" failed. No retries permitted until 2026-01-08 23:42:08.480236791 +0000 UTC m=+1598.791395727 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-config-data") pod "rabbitmq-server-0" (UID: "71eb40d2-e481-445d-99ea-948b918b862d") : configmap "rabbitmq-config-data" not found Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.490434 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.515763 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-fw668"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.518287 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/242587ad-03ea-45d9-be99-4deb624ce107-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "242587ad-03ea-45d9-be99-4deb624ce107" (UID: "242587ad-03ea-45d9-be99-4deb624ce107"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.534977 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-fw668"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.535061 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.535299 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9f823122-da64-4ac4-aa14-96bc8f2f9c1c" containerName="nova-scheduler-scheduler" containerID="cri-o://d56225670b178c36792afe8b5e40b521c40f21fe6ec9c747b20bf8f26e4c3640" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.543379 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d4b0-account-create-update-k68cp"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.556795 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-5nhlb"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.566118 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-5nhlb"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.574198 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.574486 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="adb334a7-9a7f-4e20-9dc8-092b9372bb10" containerName="nova-api-log" containerID="cri-o://7ecbd9fcee16f88440831e6610cfef4e63350db971f4e6808a4543a0b4ed747b" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.575038 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="adb334a7-9a7f-4e20-9dc8-092b9372bb10" containerName="nova-api-api" containerID="cri-o://0e98c349bd4bb46a725fbf926d7b62bd3253a7708fe210e97d08c5303e0c8116" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.590898 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.602369 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d684-account-create-update-ww984"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.602843 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="ea1eec40-294d-4749-bdb2-678289eeb815" containerName="nova-metadata-log" containerID="cri-o://18f1d04e9c438af26a72f40f614d629c0cba6569447b80e683d6b701672e3900" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.603377 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ea1eec40-294d-4749-bdb2-678289eeb815" containerName="nova-metadata-metadata" containerID="cri-o://360c56ebe46377c61cd806e3349a6a48d19f706ccb293962a2db56915941150d" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.613305 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-rzphm"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.623913 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242587ad-03ea-45d9-be99-4deb624ce107-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.623977 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-rzphm"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.638152 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-654744c45f-2rmcg"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.638473 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-654744c45f-2rmcg" podUID="046bb87c-2b1c-46eb-9db3-78270701ec34" containerName="barbican-worker-log" containerID="cri-o://bef7fb091982995ae682a74e3650d7a2edfd9a14a59f6cae0ab48178e3d0612d" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.642499 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-654744c45f-2rmcg" podUID="046bb87c-2b1c-46eb-9db3-78270701ec34" containerName="barbican-worker" containerID="cri-o://d25ef862eebe28203473e7e0b4d587e0913d21309620357f26a462668c27fa9d" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.650974 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2784-account-create-update-ll7qt"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.651151 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36817bdb-e28c-495c-9e26-005e53f3cc2a" (UID: "36817bdb-e28c-495c-9e26-005e53f3cc2a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.666803 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-qxmf7"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.685333 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-75cbb987cb-dt6t6"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.685632 4945 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.685670 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75cbb987cb-dt6t6" podUID="227e0b3d-d5ba-4265-a7b9-0419deb61603" containerName="barbican-api-log" containerID="cri-o://08ca0607ce584cf8045417f996764ffc392d3261d41883a5078094c48ae1c950" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.685806 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75cbb987cb-dt6t6" podUID="227e0b3d-d5ba-4265-a7b9-0419deb61603" containerName="barbican-api" containerID="cri-o://79c3e5ad5b8d05cf65c473b2c9291f7836e5f788b3ab861c6aaa651a1b04f94d" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.694299 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-qxmf7"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.720314 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9ea6-account-create-update-w9nlx"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.722214 4945 generic.go:334] "Generic (PLEG): container finished" podID="60fef7df-b0da-45e7-8dfe-434dacea4715" containerID="e62730e74d5750aea233a6665a98c38e3fab63a273b9b58230cb2e7f26de724f" exitCode=137 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.730031 4945 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.730076 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.742686 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7n26t"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.748194 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7n26t"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.757265 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-f5458d448-xj5lz"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.757771 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" podUID="dbe6e840-6658-49dd-b547-c58c4bc1479a" containerName="barbican-keystone-listener-log" containerID="cri-o://cbeb77761327d37cd9d6f433a669d8d50075b45d4a2017862cf0fb4848999998" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.758404 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" 
podUID="dbe6e840-6658-49dd-b547-c58c4bc1479a" containerName="barbican-keystone-listener" containerID="cri-o://f4d36b471e22f23698a2643337f5b4f19fbe9b1e28ec3042b9fac4a9d84d2ae4" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.769420 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "36817bdb-e28c-495c-9e26-005e53f3cc2a" (UID: "36817bdb-e28c-495c-9e26-005e53f3cc2a"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.769531 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xp9kx"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.785685 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.786117 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="5ede73cf-0521-442e-8f01-b63d8d9b4725" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://aab0536ef7d9d6c8e9048d5c7063401c083ca8fef6235e9b02f49cd7abccc975" gracePeriod=30 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.801695 4945 generic.go:334] "Generic (PLEG): container finished" podID="2eb23b1e-c7b1-465a-a91c-6042942e604a" containerID="654dfd0dae13b6eca5059e86d0a2d97564f19b2b01579e9aca119bd08b290b5c" exitCode=143 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.801782 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2eb23b1e-c7b1-465a-a91c-6042942e604a","Type":"ContainerDied","Data":"654dfd0dae13b6eca5059e86d0a2d97564f19b2b01579e9aca119bd08b290b5c"} Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.802200 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "36817bdb-e28c-495c-9e26-005e53f3cc2a" (UID: "36817bdb-e28c-495c-9e26-005e53f3cc2a"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.812368 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/242587ad-03ea-45d9-be99-4deb624ce107-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "242587ad-03ea-45d9-be99-4deb624ce107" (UID: "242587ad-03ea-45d9-be99-4deb624ce107"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.824521 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 08 23:42:06 crc kubenswrapper[4945]: W0108 23:42:06.833879 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd90095c9_0666_4a15_878f_4b62280c3d00.slice/crio-f6a7e698ee325e87cef1c63af91160534db4796156325b9f373eefee80deec0a WatchSource:0}: Error finding container f6a7e698ee325e87cef1c63af91160534db4796156325b9f373eefee80deec0a: Status 404 returned error can't find the container with id f6a7e698ee325e87cef1c63af91160534db4796156325b9f373eefee80deec0a Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.840564 4945 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/242587ad-03ea-45d9-be99-4deb624ce107-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.840577 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xp9kx"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.840946 4945 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.840967 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/36817bdb-e28c-495c-9e26-005e53f3cc2a-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:06 crc kubenswrapper[4945]: E0108 23:42:06.845464 4945 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 08 23:42:06 crc kubenswrapper[4945]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: if [ -n "neutron" ]; then Jan 08 23:42:06 crc kubenswrapper[4945]: GRANT_DATABASE="neutron" Jan 08 23:42:06 crc kubenswrapper[4945]: else Jan 08 23:42:06 crc kubenswrapper[4945]: GRANT_DATABASE="*" Jan 08 23:42:06 crc kubenswrapper[4945]: fi Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: # going for maximum compatibility here: Jan 08 23:42:06 crc kubenswrapper[4945]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 08 23:42:06 crc kubenswrapper[4945]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 08 23:42:06 crc kubenswrapper[4945]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 08 23:42:06 crc kubenswrapper[4945]: # support updates Jan 08 23:42:06 crc kubenswrapper[4945]: Jan 08 23:42:06 crc kubenswrapper[4945]: $MYSQL_CMD < logger="UnhandledError" Jan 08 23:42:06 crc kubenswrapper[4945]: E0108 23:42:06.847122 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-84e9-account-create-update-jm7gq" podUID="d90095c9-0666-4a15-878f-4b62280c3d00" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.866552 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d684-account-create-update-ww984"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.867317 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_90046452-437b-4666-83a8-e8ee09bfc932/ovsdbserver-sb/0.log" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.867868 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.905368 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="71eb40d2-e481-445d-99ea-948b918b862d" containerName="rabbitmq" containerID="cri-o://3c8e62ad7bb3a5c1b692e76747e535c86452a618975faa4a7349a1cd8e6445b4" gracePeriod=604800 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.912654 4945 generic.go:334] "Generic (PLEG): container finished" podID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerID="186fb49fb8486d9e304c533cbcfeed9ccab02336600a6e6145f4875e589c8bcf" exitCode=0 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.912705 4945 generic.go:334] "Generic (PLEG): container finished" podID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerID="ab6603a3508efd8f122853e7de9d014c4096828cea6d7b8ddcd09680dfd09703" exitCode=0 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.912721 4945 generic.go:334] "Generic (PLEG): container finished" podID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerID="9d661110649d6472ccf5654410ba504c83e6d83fdf1b9e883d95cf3cdb834150" exitCode=0 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.912745 4945 generic.go:334] "Generic (PLEG): container finished" podID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerID="47712dea1c8621036b0e219b143468cdb3726cb8ddcfcb6346d22152b94ca253" exitCode=0 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.912754 4945 generic.go:334] "Generic (PLEG): container finished" podID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerID="653ebb4c08001d3fb7e913750104824287889d0831d244e7578069ca36f52143" exitCode=0 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.912767 4945 generic.go:334] "Generic (PLEG): container finished" podID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerID="08d3886a5fc242d28f1801ee0870927673393d82a3aaceb6a629f6836c33bea3" exitCode=0 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.912775 4945 generic.go:334] "Generic (PLEG): container finished" podID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerID="0987f6c6262bc7794b82933b10e6814fc8b5c2349c5a58187b58990fed1c1067" exitCode=0 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.912787 4945 generic.go:334] "Generic (PLEG): container finished" podID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" 
containerID="2153a439bf6606bed6abd7d5895369e18116740cf720ca93faed1900066b52d6" exitCode=0 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.912796 4945 generic.go:334] "Generic (PLEG): container finished" podID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerID="d6483b681f8297d29a12085fb8f2557002353017201640c168545f0d40ae970c" exitCode=0 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.912804 4945 generic.go:334] "Generic (PLEG): container finished" podID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerID="d8b368fe8df8fd31efcb913ab13e714d53a8cb5596d52d646a4d27b08fc38a4e" exitCode=0 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.912815 4945 generic.go:334] "Generic (PLEG): container finished" podID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerID="5c6715331a0d405b0d603ba4999b7b101becbf1593c09d544be436b391b2a9fa" exitCode=0 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.912825 4945 generic.go:334] "Generic (PLEG): container finished" podID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerID="27f56df8852defe9ab1399615875fbe1e46ea8941ff605f08e9f61879ff1c6b9" exitCode=0 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.912928 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerDied","Data":"186fb49fb8486d9e304c533cbcfeed9ccab02336600a6e6145f4875e589c8bcf"} Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.912969 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerDied","Data":"ab6603a3508efd8f122853e7de9d014c4096828cea6d7b8ddcd09680dfd09703"} Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.913021 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerDied","Data":"9d661110649d6472ccf5654410ba504c83e6d83fdf1b9e883d95cf3cdb834150"} Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.913035 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerDied","Data":"47712dea1c8621036b0e219b143468cdb3726cb8ddcfcb6346d22152b94ca253"} Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.913048 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerDied","Data":"653ebb4c08001d3fb7e913750104824287889d0831d244e7578069ca36f52143"} Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.913061 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerDied","Data":"08d3886a5fc242d28f1801ee0870927673393d82a3aaceb6a629f6836c33bea3"} Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.913074 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerDied","Data":"0987f6c6262bc7794b82933b10e6814fc8b5c2349c5a58187b58990fed1c1067"} Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.913085 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerDied","Data":"2153a439bf6606bed6abd7d5895369e18116740cf720ca93faed1900066b52d6"} Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 
23:42:06.913103 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerDied","Data":"d6483b681f8297d29a12085fb8f2557002353017201640c168545f0d40ae970c"} Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.913115 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerDied","Data":"d8b368fe8df8fd31efcb913ab13e714d53a8cb5596d52d646a4d27b08fc38a4e"} Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.913126 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerDied","Data":"5c6715331a0d405b0d603ba4999b7b101becbf1593c09d544be436b391b2a9fa"} Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.913140 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerDied","Data":"27f56df8852defe9ab1399615875fbe1e46ea8941ff605f08e9f61879ff1c6b9"} Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.916347 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.920286 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rpppg"] Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.921594 4945 generic.go:334] "Generic (PLEG): container finished" podID="bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" containerID="15641b43f05f79e74699bfe52baf19315b239ba529af80999ae5807b2745e479" exitCode=143 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.921751 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59","Type":"ContainerDied","Data":"15641b43f05f79e74699bfe52baf19315b239ba529af80999ae5807b2745e479"} Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.934900 4945 generic.go:334] "Generic (PLEG): container finished" podID="ca0aa3d3-8093-42f1-9fa6-ad3883441ab2" containerID="6439494f8f4be73fcba62cb9f436505c4882f4e970c1fb7590409804556a3683" exitCode=0 Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.935157 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.935339 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-l7prv" event={"ID":"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2","Type":"ContainerDied","Data":"6439494f8f4be73fcba62cb9f436505c4882f4e970c1fb7590409804556a3683"} Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.935415 4945 scope.go:117] "RemoveContainer" containerID="6439494f8f4be73fcba62cb9f436505c4882f4e970c1fb7590409804556a3683" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.946650 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57mq2\" (UniqueName: \"kubernetes.io/projected/90046452-437b-4666-83a8-e8ee09bfc932-kube-api-access-57mq2\") pod \"90046452-437b-4666-83a8-e8ee09bfc932\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.946722 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90046452-437b-4666-83a8-e8ee09bfc932-scripts\") pod \"90046452-437b-4666-83a8-e8ee09bfc932\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.946766 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-ovsdbserver-sb\") pod \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.946807 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-dns-swift-storage-0\") pod \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.946846 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ftr5\" (UniqueName: \"kubernetes.io/projected/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-kube-api-access-5ftr5\") pod \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.946884 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90046452-437b-4666-83a8-e8ee09bfc932-config\") pod \"90046452-437b-4666-83a8-e8ee09bfc932\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.950618 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90046452-437b-4666-83a8-e8ee09bfc932-scripts" (OuterVolumeSpecName: "scripts") pod "90046452-437b-4666-83a8-e8ee09bfc932" (UID: "90046452-437b-4666-83a8-e8ee09bfc932"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:42:06 crc kubenswrapper[4945]: I0108 23:42:06.953099 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90046452-437b-4666-83a8-e8ee09bfc932-config" (OuterVolumeSpecName: "config") pod "90046452-437b-4666-83a8-e8ee09bfc932" (UID: "90046452-437b-4666-83a8-e8ee09bfc932"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:06.997817 4945 scope.go:117] "RemoveContainer" containerID="43e8473c2dc71816f08dca8bcc5dc8eb33185a2f8bc495c5a216898f114c816d" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:06.998096 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="d4fe48ad-f532-4c60-b88b-95a894d0b519" containerName="galera" containerID="cri-o://6864af6cbe8f647a9f0c28948b92d59f1746e14059ad5a2b80f2933cb34799cc" gracePeriod=30 Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:06.998401 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90046452-437b-4666-83a8-e8ee09bfc932-kube-api-access-57mq2" (OuterVolumeSpecName: "kube-api-access-57mq2") pod "90046452-437b-4666-83a8-e8ee09bfc932" (UID: "90046452-437b-4666-83a8-e8ee09bfc932"). InnerVolumeSpecName "kube-api-access-57mq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:06.998513 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-kube-api-access-5ftr5" (OuterVolumeSpecName: "kube-api-access-5ftr5") pod "ca0aa3d3-8093-42f1-9fa6-ad3883441ab2" (UID: "ca0aa3d3-8093-42f1-9fa6-ad3883441ab2"). InnerVolumeSpecName "kube-api-access-5ftr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.000924 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.001742 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="7b8f132e-3fda-4a38-8416-1055a62e7552" containerName="nova-cell1-conductor-conductor" containerID="cri-o://8a7142f351f49dd941a579a875be8ea9073637a6c87204db987a41fea1ed6c9f" gracePeriod=30 Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.005458 4945 generic.go:334] "Generic (PLEG): container finished" podID="3c1913ce-ea65-4745-baf8-621191c50b55" containerID="b249e55df18b74f9795153d23e3a9288fae557e2d8c16aad179207bff463acb4" exitCode=143 Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.005671 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d4586c964-cfb7b" event={"ID":"3c1913ce-ea65-4745-baf8-621191c50b55","Type":"ContainerDied","Data":"b249e55df18b74f9795153d23e3a9288fae557e2d8c16aad179207bff463acb4"} Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.029114 4945 generic.go:334] "Generic (PLEG): container finished" podID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" exitCode=0 Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.029433 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hfhkg" event={"ID":"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f","Type":"ContainerDied","Data":"9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8"} Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.036646 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rpppg"] Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.039398 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ca0aa3d3-8093-42f1-9fa6-ad3883441ab2" (UID: "ca0aa3d3-8093-42f1-9fa6-ad3883441ab2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.039662 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-nh2p7_242587ad-03ea-45d9-be99-4deb624ce107/openstack-network-exporter/0.log" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.039827 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-nh2p7" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.040046 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-nh2p7" event={"ID":"242587ad-03ea-45d9-be99-4deb624ce107","Type":"ContainerDied","Data":"e065557af6edc7dc9e97c82bd1cb799d5f82d15ee58699fc15aa8b6baa8d5ceb"} Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.045726 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_36817bdb-e28c-495c-9e26-005e53f3cc2a/ovsdbserver-nb/0.log" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.046056 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.046148 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"36817bdb-e28c-495c-9e26-005e53f3cc2a","Type":"ContainerDied","Data":"1d2de1945f394afc7f8c64a13b440db07aa5c4c42b42236eadf3d5356331d1ad"} Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.047636 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xp9kx" event={"ID":"eda15927-fac6-455b-8615-24f8f535c80a","Type":"ContainerStarted","Data":"34016ce4644f1818736731b4ae3db44588e4cd1bd35c17ca2b8137980fa726f2"} Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.047691 4945 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-xp9kx" secret="" err="secret \"galera-openstack-cell1-dockercfg-2vgjz\" not found" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.049584 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-ovsdbserver-sb-tls-certs\") pod \"90046452-437b-4666-83a8-e8ee09bfc932\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.050793 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"90046452-437b-4666-83a8-e8ee09bfc932\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.052651 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-config\") pod \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.058418 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-combined-ca-bundle\") pod \"90046452-437b-4666-83a8-e8ee09bfc932\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.059070 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/90046452-437b-4666-83a8-e8ee09bfc932-ovsdb-rundir\") pod \"90046452-437b-4666-83a8-e8ee09bfc932\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.059280 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-metrics-certs-tls-certs\") pod \"90046452-437b-4666-83a8-e8ee09bfc932\" (UID: \"90046452-437b-4666-83a8-e8ee09bfc932\") " Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.059946 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-ovsdbserver-nb\") pod \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.060232 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-dns-svc\") pod \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\" (UID: \"ca0aa3d3-8093-42f1-9fa6-ad3883441ab2\") " Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.062611 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.062966 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "90046452-437b-4666-83a8-e8ee09bfc932" (UID: "90046452-437b-4666-83a8-e8ee09bfc932"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.069232 4945 generic.go:334] "Generic (PLEG): container finished" podID="3b682d87-6d87-4d38-b1c5-a5e4c3664472" containerID="40e878309fb2714dc92ffc1ca85d0a0b40ba57da80d5ce071bad31bc2db4462c" exitCode=0 Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.069421 4945 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.069484 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eda15927-fac6-455b-8615-24f8f535c80a-operator-scripts podName:eda15927-fac6-455b-8615-24f8f535c80a nodeName:}" failed. No retries permitted until 2026-01-08 23:42:07.569463222 +0000 UTC m=+1597.880622168 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/eda15927-fac6-455b-8615-24f8f535c80a-operator-scripts") pod "root-account-create-update-xp9kx" (UID: "eda15927-fac6-455b-8615-24f8f535c80a") : configmap "openstack-cell1-scripts" not found Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.070530 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90046452-437b-4666-83a8-e8ee09bfc932-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "90046452-437b-4666-83a8-e8ee09bfc932" (UID: "90046452-437b-4666-83a8-e8ee09bfc932"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.089146 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6794547bf7-wqlnm" event={"ID":"3b682d87-6d87-4d38-b1c5-a5e4c3664472","Type":"ContainerDied","Data":"40e878309fb2714dc92ffc1ca85d0a0b40ba57da80d5ce071bad31bc2db4462c"} Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.089351 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.089369 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57mq2\" (UniqueName: \"kubernetes.io/projected/90046452-437b-4666-83a8-e8ee09bfc932-kube-api-access-57mq2\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.089476 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/90046452-437b-4666-83a8-e8ee09bfc932-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.089490 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.089501 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ftr5\" (UniqueName: \"kubernetes.io/projected/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-kube-api-access-5ftr5\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.089512 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90046452-437b-4666-83a8-e8ee09bfc932-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:07 crc kubenswrapper[4945]: 
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.089522 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/90046452-437b-4666-83a8-e8ee09bfc932-ovsdb-rundir\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.101928 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="37125f43-8fb6-4625-a260-8d43cdbe167a" containerName="nova-cell0-conductor-conductor" containerID="cri-o://a5a4dac963c906e9eb3223e2b1f096d4341464484bbc31fe53f0dde93c50bf12" gracePeriod=30
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.103628 4945 scope.go:117] "RemoveContainer" containerID="65d1afbf60fdd5284b8bb7770cb76316e3e898a189e17fc2b6f0138e0d6d3482"
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.110337 4945 generic.go:334] "Generic (PLEG): container finished" podID="04a2b873-3034-4b9f-9daf-81db6749d45f" containerID="617e103fd47ab70027896060185afd85b04295da494b0f4b35c58a7ba8a8d5e8" exitCode=143
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.110443 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"04a2b873-3034-4b9f-9daf-81db6749d45f","Type":"ContainerDied","Data":"617e103fd47ab70027896060185afd85b04295da494b0f4b35c58a7ba8a8d5e8"}
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.119492 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d684-account-create-update-ww984" event={"ID":"a6e1114b-d949-45e8-a640-d448d52ea983","Type":"ContainerStarted","Data":"6e8322015f6a0a99328fa09314da7c008da8b6ba4f119d57da79ed1388682f3f"}
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.138939 4945 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 08 23:42:07 crc kubenswrapper[4945]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: if [ -n "placement" ]; then
Jan 08 23:42:07 crc kubenswrapper[4945]: GRANT_DATABASE="placement"
Jan 08 23:42:07 crc kubenswrapper[4945]: else
Jan 08 23:42:07 crc kubenswrapper[4945]: GRANT_DATABASE="*"
Jan 08 23:42:07 crc kubenswrapper[4945]: fi
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: # going for maximum compatibility here:
Jan 08 23:42:07 crc kubenswrapper[4945]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 08 23:42:07 crc kubenswrapper[4945]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 08 23:42:07 crc kubenswrapper[4945]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Jan 08 23:42:07 crc kubenswrapper[4945]: # support updates
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: $MYSQL_CMD < logger="UnhandledError"
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.139206 4945 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 08 23:42:07 crc kubenswrapper[4945]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: if [ -n "barbican" ]; then
Jan 08 23:42:07 crc kubenswrapper[4945]: GRANT_DATABASE="barbican"
Jan 08 23:42:07 crc kubenswrapper[4945]: else
Jan 08 23:42:07 crc kubenswrapper[4945]: GRANT_DATABASE="*"
Jan 08 23:42:07 crc kubenswrapper[4945]: fi
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: # going for maximum compatibility here:
Jan 08 23:42:07 crc kubenswrapper[4945]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 08 23:42:07 crc kubenswrapper[4945]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 08 23:42:07 crc kubenswrapper[4945]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Jan 08 23:42:07 crc kubenswrapper[4945]: # support updates
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: $MYSQL_CMD < logger="UnhandledError"
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.142330 4945 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 08 23:42:07 crc kubenswrapper[4945]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: if [ -n "" ]; then
Jan 08 23:42:07 crc kubenswrapper[4945]: GRANT_DATABASE=""
Jan 08 23:42:07 crc kubenswrapper[4945]: else
Jan 08 23:42:07 crc kubenswrapper[4945]: GRANT_DATABASE="*"
Jan 08 23:42:07 crc kubenswrapper[4945]: fi
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: # going for maximum compatibility here:
Jan 08 23:42:07 crc kubenswrapper[4945]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 08 23:42:07 crc kubenswrapper[4945]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 08 23:42:07 crc kubenswrapper[4945]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Jan 08 23:42:07 crc kubenswrapper[4945]: # support updates
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: $MYSQL_CMD < logger="UnhandledError"
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.157511 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-3bb0-account-create-update-4s5gt" podUID="179a20bc-72ca-4f86-8cd0-5c6df9211839"
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.158588 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xsz4z"]
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.162481 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-xp9kx" podUID="eda15927-fac6-455b-8615-24f8f535c80a"
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.162561 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-d684-account-create-update-ww984" podUID="a6e1114b-d949-45e8-a640-d448d52ea983"
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.185574 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-config" (OuterVolumeSpecName: "config") pod "ca0aa3d3-8093-42f1-9fa6-ad3883441ab2" (UID: "ca0aa3d3-8093-42f1-9fa6-ad3883441ab2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.195871 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-config\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.221421 4945 scope.go:117] "RemoveContainer" containerID="29134cac38be275897153b9e527a487bd6dc85a0149bae9cb21fa2cca5dc21f1"
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.235443 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ca0aa3d3-8093-42f1-9fa6-ad3883441ab2" (UID: "ca0aa3d3-8093-42f1-9fa6-ad3883441ab2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.254423 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-xsz4z"] Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.281052 4945 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.323574 4945 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.323609 4945 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.325682 4945 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 08 23:42:07 crc kubenswrapper[4945]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 08 23:42:07 crc kubenswrapper[4945]: Jan 08 23:42:07 crc kubenswrapper[4945]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 08 23:42:07 crc kubenswrapper[4945]: Jan 08 23:42:07 crc kubenswrapper[4945]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 08 23:42:07 crc kubenswrapper[4945]: Jan 08 23:42:07 crc kubenswrapper[4945]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 08 23:42:07 crc kubenswrapper[4945]: Jan 08 23:42:07 crc kubenswrapper[4945]: if [ -n "cinder" ]; then Jan 08 23:42:07 crc kubenswrapper[4945]: GRANT_DATABASE="cinder" Jan 08 23:42:07 crc kubenswrapper[4945]: else Jan 08 23:42:07 crc kubenswrapper[4945]: GRANT_DATABASE="*" Jan 08 23:42:07 crc kubenswrapper[4945]: fi Jan 08 23:42:07 crc kubenswrapper[4945]: Jan 08 23:42:07 crc kubenswrapper[4945]: # going for maximum compatibility here: Jan 08 23:42:07 crc kubenswrapper[4945]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 08 23:42:07 crc kubenswrapper[4945]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 08 23:42:07 crc kubenswrapper[4945]: # 3. 
Jan 08 23:42:07 crc kubenswrapper[4945]: # support updates
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: $MYSQL_CMD < logger="UnhandledError"
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.326021 4945 scope.go:117] "RemoveContainer" containerID="324e6e48dab6c7d527635ca30c759a02e9ba3563bc130abb4ddbc3f4fe3ceb54"
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.327218 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-ee25-account-create-update-wfvwd" podUID="da59c88c-d7a5-4fb4-8b32-4c73be685b4f"
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.340534 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84e9-account-create-update-jm7gq"]
Jan 08 23:42:07 crc kubenswrapper[4945]: W0108 23:42:07.357174 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e74d9ee_5e3b_4205_8ac4_1729d495861b.slice/crio-5d3adb5d0637936558186af86a2ffea6ae1e4b280e4032115f715139b8c50b50 WatchSource:0}: Error finding container 5d3adb5d0637936558186af86a2ffea6ae1e4b280e4032115f715139b8c50b50: Status 404 returned error can't find the container with id 5d3adb5d0637936558186af86a2ffea6ae1e4b280e4032115f715139b8c50b50
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.363300 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90046452-437b-4666-83a8-e8ee09bfc932" (UID: "90046452-437b-4666-83a8-e8ee09bfc932"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.366564 4945 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 08 23:42:07 crc kubenswrapper[4945]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: if [ -n "glance" ]; then
Jan 08 23:42:07 crc kubenswrapper[4945]: GRANT_DATABASE="glance"
Jan 08 23:42:07 crc kubenswrapper[4945]: else
Jan 08 23:42:07 crc kubenswrapper[4945]: GRANT_DATABASE="*"
Jan 08 23:42:07 crc kubenswrapper[4945]: fi
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: # going for maximum compatibility here:
Jan 08 23:42:07 crc kubenswrapper[4945]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 08 23:42:07 crc kubenswrapper[4945]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 08 23:42:07 crc kubenswrapper[4945]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Jan 08 23:42:07 crc kubenswrapper[4945]: # support updates
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: $MYSQL_CMD < logger="UnhandledError"
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.367583 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.367861 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-d4b0-account-create-update-k68cp" podUID="0e74d9ee-5e3b-4205-8ac4-1729d495861b"
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.385596 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.389411 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.389815 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ca0aa3d3-8093-42f1-9fa6-ad3883441ab2" (UID: "ca0aa3d3-8093-42f1-9fa6-ad3883441ab2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.399754 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ee25-account-create-update-wfvwd"]
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.427036 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.427070 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.427145 4945 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.427197 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-config-data podName:e920b84a-bd1b-4649-9cc0-e3b239d6a5b9 nodeName:}" failed. No retries permitted until 2026-01-08 23:42:11.427182001 +0000 UTC m=+1601.738340947 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-config-data") pod "rabbitmq-cell1-server-0" (UID: "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9") : configmap "rabbitmq-cell1-config-data" not found
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.434437 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "90046452-437b-4666-83a8-e8ee09bfc932" (UID: "90046452-437b-4666-83a8-e8ee09bfc932"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
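[editor's note] The mariadb-account-create-update containers above all run the same script template (only the database name changes), and the heredoc body that does the real work is cut off at "$MYSQL_CMD <" in every copy of the error, so it is not recoverable from this log. The surviving comments do spell out the intended pattern: CREATE the account first, push the password (and any TLS options) through ALTER so a rerun acts as an update, then GRANT. A minimal sketch of that pattern, with a hypothetical host name filled in (the logged MYSQL_CMD has an empty -h because MYSQL_REMOTE_HOST was unset); this is an illustration, not the operator's actual heredoc:

  #!/bin/bash
  # Sketch only: "mariadb.openstack.svc" is a stand-in host; account name from the log.
  DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
  MYSQL_CMD="mysql -h mariadb.openstack.svc -u root -P 3306"

  $MYSQL_CMD <<EOSQL
  -- CREATE first: MySQL 8 no longer creates users implicitly on GRANT, and
  -- IF NOT EXISTS is the portable spelling (MariaDB's CREATE OR REPLACE is not).
  CREATE USER IF NOT EXISTS 'placement'@'%';
  -- Password and TLS requirements go through ALTER so reruns update in place.
  ALTER USER 'placement'@'%' IDENTIFIED BY '${DatabasePassword}';
  GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%';
  EOSQL

CREATE USER IF NOT EXISTS plus ALTER USER is accepted by both MariaDB 10.1+ and MySQL 5.7+, which is exactly the cross-compatibility the script comments are after.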
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.452435 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ca0aa3d3-8093-42f1-9fa6-ad3883441ab2" (UID: "ca0aa3d3-8093-42f1-9fa6-ad3883441ab2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.458392 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.458634 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.458890 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.458919 4945 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hfhkg" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovsdb-server" Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.461493 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.465817 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-96f5cc787-k4zdr"] Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.466087 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-96f5cc787-k4zdr" podUID="6dc39aab-86d6-45f6-b565-3da5375a1983" containerName="proxy-httpd" containerID="cri-o://5f876eb0a75e5d6e0f574148126dfbb45b4fa70619ddd67c99cc1342b57a8d1f" gracePeriod=30 Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.466443 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-96f5cc787-k4zdr" podUID="6dc39aab-86d6-45f6-b565-3da5375a1983" containerName="proxy-server" 
containerID="cri-o://d385c8c468d076b34c28f8eb6f3dfb6aeddb5e1ac200b92a6cb0997c736e762e" gracePeriod=30 Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.476186 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.480451 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.480512 4945 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hfhkg" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovs-vswitchd" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.487962 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d4b0-account-create-update-k68cp"] Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.492274 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-nh2p7"] Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.503180 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "90046452-437b-4666-83a8-e8ee09bfc932" (UID: "90046452-437b-4666-83a8-e8ee09bfc932"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.504838 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-nh2p7"] Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.528285 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fef7df-b0da-45e7-8dfe-434dacea4715-combined-ca-bundle\") pod \"60fef7df-b0da-45e7-8dfe-434dacea4715\" (UID: \"60fef7df-b0da-45e7-8dfe-434dacea4715\") " Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.528475 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7mxd\" (UniqueName: \"kubernetes.io/projected/60fef7df-b0da-45e7-8dfe-434dacea4715-kube-api-access-w7mxd\") pod \"60fef7df-b0da-45e7-8dfe-434dacea4715\" (UID: \"60fef7df-b0da-45e7-8dfe-434dacea4715\") " Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.528654 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60fef7df-b0da-45e7-8dfe-434dacea4715-openstack-config-secret\") pod \"60fef7df-b0da-45e7-8dfe-434dacea4715\" (UID: \"60fef7df-b0da-45e7-8dfe-434dacea4715\") " Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.528699 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60fef7df-b0da-45e7-8dfe-434dacea4715-openstack-config\") pod \"60fef7df-b0da-45e7-8dfe-434dacea4715\" (UID: \"60fef7df-b0da-45e7-8dfe-434dacea4715\") " Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.529346 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.529375 4945 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/90046452-437b-4666-83a8-e8ee09bfc932-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.529385 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.536266 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60fef7df-b0da-45e7-8dfe-434dacea4715-kube-api-access-w7mxd" (OuterVolumeSpecName: "kube-api-access-w7mxd") pod "60fef7df-b0da-45e7-8dfe-434dacea4715" (UID: "60fef7df-b0da-45e7-8dfe-434dacea4715"). InnerVolumeSpecName "kube-api-access-w7mxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.569973 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60fef7df-b0da-45e7-8dfe-434dacea4715-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "60fef7df-b0da-45e7-8dfe-434dacea4715" (UID: "60fef7df-b0da-45e7-8dfe-434dacea4715"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.579177 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2784-account-create-update-ll7qt"] Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.592229 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fef7df-b0da-45e7-8dfe-434dacea4715-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60fef7df-b0da-45e7-8dfe-434dacea4715" (UID: "60fef7df-b0da-45e7-8dfe-434dacea4715"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.626925 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9ea6-account-create-update-w9nlx"] Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.631163 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7mxd\" (UniqueName: \"kubernetes.io/projected/60fef7df-b0da-45e7-8dfe-434dacea4715-kube-api-access-w7mxd\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.631189 4945 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/60fef7df-b0da-45e7-8dfe-434dacea4715-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.631199 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fef7df-b0da-45e7-8dfe-434dacea4715-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.631259 4945 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.631311 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eda15927-fac6-455b-8615-24f8f535c80a-operator-scripts podName:eda15927-fac6-455b-8615-24f8f535c80a nodeName:}" failed. No retries permitted until 2026-01-08 23:42:08.631295823 +0000 UTC m=+1598.942454769 (durationBeforeRetry 1s). 
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.647137 4945 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 08 23:42:07 crc kubenswrapper[4945]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: if [ -n "nova_api" ]; then
Jan 08 23:42:07 crc kubenswrapper[4945]: GRANT_DATABASE="nova_api"
Jan 08 23:42:07 crc kubenswrapper[4945]: else
Jan 08 23:42:07 crc kubenswrapper[4945]: GRANT_DATABASE="*"
Jan 08 23:42:07 crc kubenswrapper[4945]: fi
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: # going for maximum compatibility here:
Jan 08 23:42:07 crc kubenswrapper[4945]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 08 23:42:07 crc kubenswrapper[4945]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 08 23:42:07 crc kubenswrapper[4945]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Jan 08 23:42:07 crc kubenswrapper[4945]: # support updates
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: $MYSQL_CMD < logger="UnhandledError"
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.649256 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-2784-account-create-update-ll7qt" podUID="b55a1df4-3a27-411d-b6ff-8c72c20f4d05"
Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.675953 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fef7df-b0da-45e7-8dfe-434dacea4715-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "60fef7df-b0da-45e7-8dfe-434dacea4715" (UID: "60fef7df-b0da-45e7-8dfe-434dacea4715"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
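[editor's note] Every CreateContainerConfigError in this stretch points at a Secret or ConfigMap that does not exist yet; the pods themselves are fine, and kubelet keeps retrying on its own, as the backoff entries show. Assuming kubectl access to the cluster, the missing objects can be confirmed directly (all names below are copied from the "not found" errors above):

  kubectl -n openstack get secret placement-db-secret barbican-db-secret \
    cinder-db-secret glance-db-secret nova-api-db-secret \
    openstack-cell1-mariadb-root-db-secret
  kubectl -n openstack get configmap openstack-cell1-scripts rabbitmq-cell1-config-data

Once the operator recreates those objects, the StartContainer and MountVolume retries succeed without any manual pod restart.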
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.730152 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-96f5cc787-k4zdr" podUID="6dc39aab-86d6-45f6-b565-3da5375a1983" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.166:8080/healthcheck\": dial tcp 10.217.0.166:8080: connect: connection refused" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.730702 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-96f5cc787-k4zdr" podUID="6dc39aab-86d6-45f6-b565-3da5375a1983" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.166:8080/healthcheck\": dial tcp 10.217.0.166:8080: connect: connection refused" Jan 08 23:42:07 crc kubenswrapper[4945]: I0108 23:42:07.733701 4945 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/60fef7df-b0da-45e7-8dfe-434dacea4715-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.763357 4945 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 08 23:42:07 crc kubenswrapper[4945]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 08 23:42:07 crc kubenswrapper[4945]: Jan 08 23:42:07 crc kubenswrapper[4945]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 08 23:42:07 crc kubenswrapper[4945]: Jan 08 23:42:07 crc kubenswrapper[4945]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 08 23:42:07 crc kubenswrapper[4945]: Jan 08 23:42:07 crc kubenswrapper[4945]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 08 23:42:07 crc kubenswrapper[4945]: Jan 08 23:42:07 crc kubenswrapper[4945]: if [ -n "nova_cell0" ]; then Jan 08 23:42:07 crc kubenswrapper[4945]: GRANT_DATABASE="nova_cell0" Jan 08 23:42:07 crc kubenswrapper[4945]: else Jan 08 23:42:07 crc kubenswrapper[4945]: GRANT_DATABASE="*" Jan 08 23:42:07 crc kubenswrapper[4945]: fi Jan 08 23:42:07 crc kubenswrapper[4945]: Jan 08 23:42:07 crc kubenswrapper[4945]: # going for maximum compatibility here: Jan 08 23:42:07 crc kubenswrapper[4945]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 08 23:42:07 crc kubenswrapper[4945]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 08 23:42:07 crc kubenswrapper[4945]: # 3. 
Jan 08 23:42:07 crc kubenswrapper[4945]: # support updates
Jan 08 23:42:07 crc kubenswrapper[4945]:
Jan 08 23:42:07 crc kubenswrapper[4945]: $MYSQL_CMD < logger="UnhandledError"
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.764501 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-9ea6-account-create-update-w9nlx" podUID="8e6d9797-d686-44c0-918a-f17bac13b874"
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.836295 4945 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-scripts: configmap "ovnnorthd-scripts" not found
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.836383 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-scripts podName:eefc7456-a6c7-4442-aa3a-370a1f9b01fa nodeName:}" failed. No retries permitted until 2026-01-08 23:42:11.836364189 +0000 UTC m=+1602.147523135 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-scripts") pod "ovn-northd-0" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa") : configmap "ovnnorthd-scripts" not found
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.836426 4945 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-config: configmap "ovnnorthd-config" not found
Jan 08 23:42:07 crc kubenswrapper[4945]: E0108 23:42:07.836447 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-config podName:eefc7456-a6c7-4442-aa3a-370a1f9b01fa nodeName:}" failed. No retries permitted until 2026-01-08 23:42:11.836441181 +0000 UTC m=+1602.147600127 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-config") pod "ovn-northd-0" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa") : configmap "ovnnorthd-config" not found
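[editor's note] The durationBeforeRetry values come from kubelet's per-operation exponential backoff: for the operator-scripts volume of root-account-create-update-xp9kx this log shows 500ms, then 1s, then 2s, while the rabbitmq and ovn-northd mounts, further along in their own sequences, are already at 4s. As a rough model of that cadence (not kubelet source code):

  # Doubling retry delay for one failing MountVolume.SetUp operation.
  delay_ms=500
  for attempt in 1 2 3 4; do
    echo "attempt ${attempt}: retry in ${delay_ms}ms"
    delay_ms=$((delay_ms * 2))
  done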
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-config") pod "ovn-northd-0" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa") : configmap "ovnnorthd-config" not found Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.027420 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="008a988d-6834-4bbe-b9ec-333cfe1c534c" path="/var/lib/kubelet/pods/008a988d-6834-4bbe-b9ec-333cfe1c534c/volumes" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.029586 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="242587ad-03ea-45d9-be99-4deb624ce107" path="/var/lib/kubelet/pods/242587ad-03ea-45d9-be99-4deb624ce107/volumes" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.031885 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3459aa58-67f7-4d0c-a3ae-3a53bf5404b3" path="/var/lib/kubelet/pods/3459aa58-67f7-4d0c-a3ae-3a53bf5404b3/volumes" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.033631 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36817bdb-e28c-495c-9e26-005e53f3cc2a" path="/var/lib/kubelet/pods/36817bdb-e28c-495c-9e26-005e53f3cc2a/volumes" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.035326 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51598250-e998-4bd9-8846-179741f8c0b9" path="/var/lib/kubelet/pods/51598250-e998-4bd9-8846-179741f8c0b9/volumes" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.036725 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58e9f018-13fb-40ef-bc38-0453684d5e6c" path="/var/lib/kubelet/pods/58e9f018-13fb-40ef-bc38-0453684d5e6c/volumes" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.038208 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e9d1095-f2ce-463c-9f99-f4d8a10b834b" path="/var/lib/kubelet/pods/5e9d1095-f2ce-463c-9f99-f4d8a10b834b/volumes" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.039040 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60fef7df-b0da-45e7-8dfe-434dacea4715" path="/var/lib/kubelet/pods/60fef7df-b0da-45e7-8dfe-434dacea4715/volumes" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.039688 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="688e13f9-5653-41da-ba2a-541ffaa8cec9" path="/var/lib/kubelet/pods/688e13f9-5653-41da-ba2a-541ffaa8cec9/volumes" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.040623 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="739bfff7-d0fc-41c9-a590-ae8dae65a02c" path="/var/lib/kubelet/pods/739bfff7-d0fc-41c9-a590-ae8dae65a02c/volumes" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.042210 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4205b5b-eedb-4e63-9535-452815f376f6" path="/var/lib/kubelet/pods/c4205b5b-eedb-4e63-9535-452815f376f6/volumes" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.042806 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db3e91f4-16b7-4a04-a23d-f3299d6781e5" path="/var/lib/kubelet/pods/db3e91f4-16b7-4a04-a23d-f3299d6781e5/volumes" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.046620 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df214629-0f2c-4a4c-af2a-0f69e06a0899" path="/var/lib/kubelet/pods/df214629-0f2c-4a4c-af2a-0f69e06a0899/volumes" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.082967 
4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-l7prv"] Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.101597 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-l7prv"] Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.149901 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-bp9dq"] Jan 08 23:42:08 crc kubenswrapper[4945]: E0108 23:42:08.150480 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36817bdb-e28c-495c-9e26-005e53f3cc2a" containerName="ovsdbserver-nb" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.150500 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="36817bdb-e28c-495c-9e26-005e53f3cc2a" containerName="ovsdbserver-nb" Jan 08 23:42:08 crc kubenswrapper[4945]: E0108 23:42:08.150521 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90046452-437b-4666-83a8-e8ee09bfc932" containerName="ovsdbserver-sb" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.150528 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="90046452-437b-4666-83a8-e8ee09bfc932" containerName="ovsdbserver-sb" Jan 08 23:42:08 crc kubenswrapper[4945]: E0108 23:42:08.150540 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36817bdb-e28c-495c-9e26-005e53f3cc2a" containerName="openstack-network-exporter" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.150546 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="36817bdb-e28c-495c-9e26-005e53f3cc2a" containerName="openstack-network-exporter" Jan 08 23:42:08 crc kubenswrapper[4945]: E0108 23:42:08.150569 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca0aa3d3-8093-42f1-9fa6-ad3883441ab2" containerName="init" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.150577 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0aa3d3-8093-42f1-9fa6-ad3883441ab2" containerName="init" Jan 08 23:42:08 crc kubenswrapper[4945]: E0108 23:42:08.150592 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca0aa3d3-8093-42f1-9fa6-ad3883441ab2" containerName="dnsmasq-dns" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.150598 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0aa3d3-8093-42f1-9fa6-ad3883441ab2" containerName="dnsmasq-dns" Jan 08 23:42:08 crc kubenswrapper[4945]: E0108 23:42:08.150607 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90046452-437b-4666-83a8-e8ee09bfc932" containerName="openstack-network-exporter" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.150613 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="90046452-437b-4666-83a8-e8ee09bfc932" containerName="openstack-network-exporter" Jan 08 23:42:08 crc kubenswrapper[4945]: E0108 23:42:08.150626 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="242587ad-03ea-45d9-be99-4deb624ce107" containerName="openstack-network-exporter" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.150632 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="242587ad-03ea-45d9-be99-4deb624ce107" containerName="openstack-network-exporter" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.150829 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="36817bdb-e28c-495c-9e26-005e53f3cc2a" containerName="ovsdbserver-nb" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.150854 4945 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="90046452-437b-4666-83a8-e8ee09bfc932" containerName="openstack-network-exporter" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.150868 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="90046452-437b-4666-83a8-e8ee09bfc932" containerName="ovsdbserver-sb" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.150878 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="242587ad-03ea-45d9-be99-4deb624ce107" containerName="openstack-network-exporter" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.150887 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca0aa3d3-8093-42f1-9fa6-ad3883441ab2" containerName="dnsmasq-dns" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.150896 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="36817bdb-e28c-495c-9e26-005e53f3cc2a" containerName="openstack-network-exporter" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.151667 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bp9dq" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.167505 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.171470 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9ea6-account-create-update-w9nlx" event={"ID":"8e6d9797-d686-44c0-918a-f17bac13b874","Type":"ContainerStarted","Data":"7fada45949056355c390abae84920c2de26411f1ba68881c7a048906d1ac1f6d"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.179970 4945 generic.go:334] "Generic (PLEG): container finished" podID="adb334a7-9a7f-4e20-9dc8-092b9372bb10" containerID="7ecbd9fcee16f88440831e6610cfef4e63350db971f4e6808a4543a0b4ed747b" exitCode=143 Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.180140 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"adb334a7-9a7f-4e20-9dc8-092b9372bb10","Type":"ContainerDied","Data":"7ecbd9fcee16f88440831e6610cfef4e63350db971f4e6808a4543a0b4ed747b"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.190153 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bp9dq"] Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.194217 4945 generic.go:334] "Generic (PLEG): container finished" podID="5ede73cf-0521-442e-8f01-b63d8d9b4725" containerID="aab0536ef7d9d6c8e9048d5c7063401c083ca8fef6235e9b02f49cd7abccc975" exitCode=0 Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.194330 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5ede73cf-0521-442e-8f01-b63d8d9b4725","Type":"ContainerDied","Data":"aab0536ef7d9d6c8e9048d5c7063401c083ca8fef6235e9b02f49cd7abccc975"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.203114 4945 generic.go:334] "Generic (PLEG): container finished" podID="6dc39aab-86d6-45f6-b565-3da5375a1983" containerID="d385c8c468d076b34c28f8eb6f3dfb6aeddb5e1ac200b92a6cb0997c736e762e" exitCode=0 Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.203146 4945 generic.go:334] "Generic (PLEG): container finished" podID="6dc39aab-86d6-45f6-b565-3da5375a1983" containerID="5f876eb0a75e5d6e0f574148126dfbb45b4fa70619ddd67c99cc1342b57a8d1f" exitCode=0 Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.203216 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-proxy-96f5cc787-k4zdr" event={"ID":"6dc39aab-86d6-45f6-b565-3da5375a1983","Type":"ContainerDied","Data":"d385c8c468d076b34c28f8eb6f3dfb6aeddb5e1ac200b92a6cb0997c736e762e"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.203249 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-96f5cc787-k4zdr" event={"ID":"6dc39aab-86d6-45f6-b565-3da5375a1983","Type":"ContainerDied","Data":"5f876eb0a75e5d6e0f574148126dfbb45b4fa70619ddd67c99cc1342b57a8d1f"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.204218 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2784-account-create-update-ll7qt" event={"ID":"b55a1df4-3a27-411d-b6ff-8c72c20f4d05","Type":"ContainerStarted","Data":"0011e3b612feffbcd9b4e10d59cbad583724c5fe9e985a2e0f207a49dfe2cfbe"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.221869 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84e9-account-create-update-jm7gq" event={"ID":"d90095c9-0666-4a15-878f-4b62280c3d00","Type":"ContainerStarted","Data":"f6a7e698ee325e87cef1c63af91160534db4796156325b9f373eefee80deec0a"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.236386 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee25-account-create-update-wfvwd" event={"ID":"da59c88c-d7a5-4fb4-8b32-4c73be685b4f","Type":"ContainerStarted","Data":"ac153fe98a05ae48706179d49832c779b511da28d0ca4b3e23d77335faf159e7"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.249840 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14229729-9655-4484-a4d2-eabe80fc5abb-operator-scripts\") pod \"root-account-create-update-bp9dq\" (UID: \"14229729-9655-4484-a4d2-eabe80fc5abb\") " pod="openstack/root-account-create-update-bp9dq" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.251065 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzpcj\" (UniqueName: \"kubernetes.io/projected/14229729-9655-4484-a4d2-eabe80fc5abb-kube-api-access-rzpcj\") pod \"root-account-create-update-bp9dq\" (UID: \"14229729-9655-4484-a4d2-eabe80fc5abb\") " pod="openstack/root-account-create-update-bp9dq" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.309255 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.319764 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d4b0-account-create-update-k68cp" event={"ID":"0e74d9ee-5e3b-4205-8ac4-1729d495861b","Type":"ContainerStarted","Data":"5d3adb5d0637936558186af86a2ffea6ae1e4b280e4032115f715139b8c50b50"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.356193 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14229729-9655-4484-a4d2-eabe80fc5abb-operator-scripts\") pod \"root-account-create-update-bp9dq\" (UID: \"14229729-9655-4484-a4d2-eabe80fc5abb\") " pod="openstack/root-account-create-update-bp9dq" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.356315 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzpcj\" (UniqueName: \"kubernetes.io/projected/14229729-9655-4484-a4d2-eabe80fc5abb-kube-api-access-rzpcj\") pod \"root-account-create-update-bp9dq\" (UID: \"14229729-9655-4484-a4d2-eabe80fc5abb\") " pod="openstack/root-account-create-update-bp9dq" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.358400 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14229729-9655-4484-a4d2-eabe80fc5abb-operator-scripts\") pod \"root-account-create-update-bp9dq\" (UID: \"14229729-9655-4484-a4d2-eabe80fc5abb\") " pod="openstack/root-account-create-update-bp9dq" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.380198 4945 generic.go:334] "Generic (PLEG): container finished" podID="046bb87c-2b1c-46eb-9db3-78270701ec34" containerID="d25ef862eebe28203473e7e0b4d587e0913d21309620357f26a462668c27fa9d" exitCode=0 Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.380289 4945 generic.go:334] "Generic (PLEG): container finished" podID="046bb87c-2b1c-46eb-9db3-78270701ec34" containerID="bef7fb091982995ae682a74e3650d7a2edfd9a14a59f6cae0ab48178e3d0612d" exitCode=143 Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.380333 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-654744c45f-2rmcg" event={"ID":"046bb87c-2b1c-46eb-9db3-78270701ec34","Type":"ContainerDied","Data":"d25ef862eebe28203473e7e0b4d587e0913d21309620357f26a462668c27fa9d"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.380369 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-654744c45f-2rmcg" event={"ID":"046bb87c-2b1c-46eb-9db3-78270701ec34","Type":"ContainerDied","Data":"bef7fb091982995ae682a74e3650d7a2edfd9a14a59f6cae0ab48178e3d0612d"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.386846 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzpcj\" (UniqueName: \"kubernetes.io/projected/14229729-9655-4484-a4d2-eabe80fc5abb-kube-api-access-rzpcj\") pod \"root-account-create-update-bp9dq\" (UID: \"14229729-9655-4484-a4d2-eabe80fc5abb\") " pod="openstack/root-account-create-update-bp9dq" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.454477 4945 generic.go:334] "Generic (PLEG): container finished" podID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerID="ebf5f6750c5f563b79b21cfb01492b314ed6c060af246db80a7f95a0bac985e5" exitCode=0 Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.456332 4945 generic.go:334] "Generic (PLEG): container finished" 
podID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerID="94d63f7570f82deb07c6532e9070f8e84e1960af5420e611a448105b6a81f23c" exitCode=0 Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.456402 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerDied","Data":"ebf5f6750c5f563b79b21cfb01492b314ed6c060af246db80a7f95a0bac985e5"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.456432 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerDied","Data":"94d63f7570f82deb07c6532e9070f8e84e1960af5420e611a448105b6a81f23c"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.459131 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-combined-ca-bundle\") pod \"5ede73cf-0521-442e-8f01-b63d8d9b4725\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.459433 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-config-data\") pod \"5ede73cf-0521-442e-8f01-b63d8d9b4725\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.459605 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-vencrypt-tls-certs\") pod \"5ede73cf-0521-442e-8f01-b63d8d9b4725\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.459680 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbxzg\" (UniqueName: \"kubernetes.io/projected/5ede73cf-0521-442e-8f01-b63d8d9b4725-kube-api-access-nbxzg\") pod \"5ede73cf-0521-442e-8f01-b63d8d9b4725\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.459762 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-nova-novncproxy-tls-certs\") pod \"5ede73cf-0521-442e-8f01-b63d8d9b4725\" (UID: \"5ede73cf-0521-442e-8f01-b63d8d9b4725\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.480844 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ede73cf-0521-442e-8f01-b63d8d9b4725-kube-api-access-nbxzg" (OuterVolumeSpecName: "kube-api-access-nbxzg") pod "5ede73cf-0521-442e-8f01-b63d8d9b4725" (UID: "5ede73cf-0521-442e-8f01-b63d8d9b4725"). InnerVolumeSpecName "kube-api-access-nbxzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.483807 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_90046452-437b-4666-83a8-e8ee09bfc932/ovsdbserver-sb/0.log" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.483966 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"90046452-437b-4666-83a8-e8ee09bfc932","Type":"ContainerDied","Data":"7d43672c12a5f4688955b254c86537cba58ced58f27461f80fabfeed77024de5"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.484037 4945 scope.go:117] "RemoveContainer" containerID="d9317ebca4ad485773a2bb68895a59989eaf7ea993a34e6617668bb4a89d52c9" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.484205 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.498564 4945 generic.go:334] "Generic (PLEG): container finished" podID="f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a" containerID="a1ea98a5191a9c7ea8ad3313abd10e08aea6b734652be5308b7a4b1889a1edf1" exitCode=0 Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.498641 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a","Type":"ContainerDied","Data":"a1ea98a5191a9c7ea8ad3313abd10e08aea6b734652be5308b7a4b1889a1edf1"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.503080 4945 generic.go:334] "Generic (PLEG): container finished" podID="dbe6e840-6658-49dd-b547-c58c4bc1479a" containerID="f4d36b471e22f23698a2643337f5b4f19fbe9b1e28ec3042b9fac4a9d84d2ae4" exitCode=0 Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.503104 4945 generic.go:334] "Generic (PLEG): container finished" podID="dbe6e840-6658-49dd-b547-c58c4bc1479a" containerID="cbeb77761327d37cd9d6f433a669d8d50075b45d4a2017862cf0fb4848999998" exitCode=143 Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.503136 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" event={"ID":"dbe6e840-6658-49dd-b547-c58c4bc1479a","Type":"ContainerDied","Data":"f4d36b471e22f23698a2643337f5b4f19fbe9b1e28ec3042b9fac4a9d84d2ae4"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.503156 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" event={"ID":"dbe6e840-6658-49dd-b547-c58c4bc1479a","Type":"ContainerDied","Data":"cbeb77761327d37cd9d6f433a669d8d50075b45d4a2017862cf0fb4848999998"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.506587 4945 generic.go:334] "Generic (PLEG): container finished" podID="227e0b3d-d5ba-4265-a7b9-0419deb61603" containerID="08ca0607ce584cf8045417f996764ffc392d3261d41883a5078094c48ae1c950" exitCode=143 Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.506623 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75cbb987cb-dt6t6" event={"ID":"227e0b3d-d5ba-4265-a7b9-0419deb61603","Type":"ContainerDied","Data":"08ca0607ce584cf8045417f996764ffc392d3261d41883a5078094c48ae1c950"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.508039 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.510837 4945 generic.go:334] "Generic (PLEG): container finished" podID="ea1eec40-294d-4749-bdb2-678289eeb815" containerID="18f1d04e9c438af26a72f40f614d629c0cba6569447b80e683d6b701672e3900" exitCode=143 Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.511081 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea1eec40-294d-4749-bdb2-678289eeb815","Type":"ContainerDied","Data":"18f1d04e9c438af26a72f40f614d629c0cba6569447b80e683d6b701672e3900"} Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.516520 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bp9dq" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.537487 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-config-data" (OuterVolumeSpecName: "config-data") pod "5ede73cf-0521-442e-8f01-b63d8d9b4725" (UID: "5ede73cf-0521-442e-8f01-b63d8d9b4725"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.567178 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.567495 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbxzg\" (UniqueName: \"kubernetes.io/projected/5ede73cf-0521-442e-8f01-b63d8d9b4725-kube-api-access-nbxzg\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:08 crc kubenswrapper[4945]: E0108 23:42:08.567571 4945 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 08 23:42:08 crc kubenswrapper[4945]: E0108 23:42:08.567625 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-config-data podName:71eb40d2-e481-445d-99ea-948b918b862d nodeName:}" failed. No retries permitted until 2026-01-08 23:42:12.567607867 +0000 UTC m=+1602.878766813 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-config-data") pod "rabbitmq-server-0" (UID: "71eb40d2-e481-445d-99ea-948b918b862d") : configmap "rabbitmq-config-data" not found Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.592833 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ede73cf-0521-442e-8f01-b63d8d9b4725" (UID: "5ede73cf-0521-442e-8f01-b63d8d9b4725"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.650726 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "5ede73cf-0521-442e-8f01-b63d8d9b4725" (UID: "5ede73cf-0521-442e-8f01-b63d8d9b4725"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.669291 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.669330 4945 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:08 crc kubenswrapper[4945]: E0108 23:42:08.669395 4945 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 08 23:42:08 crc kubenswrapper[4945]: E0108 23:42:08.669446 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eda15927-fac6-455b-8615-24f8f535c80a-operator-scripts podName:eda15927-fac6-455b-8615-24f8f535c80a nodeName:}" failed. No retries permitted until 2026-01-08 23:42:10.669431367 +0000 UTC m=+1600.980590313 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/eda15927-fac6-455b-8615-24f8f535c80a-operator-scripts") pod "root-account-create-update-xp9kx" (UID: "eda15927-fac6-455b-8615-24f8f535c80a") : configmap "openstack-cell1-scripts" not found Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.688648 4945 scope.go:117] "RemoveContainer" containerID="230e3a7208f60ae6d4c8e1d0b533c31b74f6940ce43a05272ebfb0601f8979a9" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.689154 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "5ede73cf-0521-442e-8f01-b63d8d9b4725" (UID: "5ede73cf-0521-442e-8f01-b63d8d9b4725"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.752040 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-654744c45f-2rmcg" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.758779 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.776388 4945 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ede73cf-0521-442e-8f01-b63d8d9b4725-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.805524 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.823370 4945 scope.go:117] "RemoveContainer" containerID="e62730e74d5750aea233a6665a98c38e3fab63a273b9b58230cb2e7f26de724f" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.858195 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.860922 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84e9-account-create-update-jm7gq" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.880437 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-config-data-custom\") pod \"046bb87c-2b1c-46eb-9db3-78270701ec34\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.880481 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/046bb87c-2b1c-46eb-9db3-78270701ec34-logs\") pod \"046bb87c-2b1c-46eb-9db3-78270701ec34\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.880669 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-config-data\") pod \"046bb87c-2b1c-46eb-9db3-78270701ec34\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.880770 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csjpd\" (UniqueName: \"kubernetes.io/projected/046bb87c-2b1c-46eb-9db3-78270701ec34-kube-api-access-csjpd\") pod \"046bb87c-2b1c-46eb-9db3-78270701ec34\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.880808 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-combined-ca-bundle\") pod \"046bb87c-2b1c-46eb-9db3-78270701ec34\" (UID: \"046bb87c-2b1c-46eb-9db3-78270701ec34\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.882027 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/046bb87c-2b1c-46eb-9db3-78270701ec34-logs" (OuterVolumeSpecName: "logs") pod "046bb87c-2b1c-46eb-9db3-78270701ec34" (UID: "046bb87c-2b1c-46eb-9db3-78270701ec34"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.886916 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "046bb87c-2b1c-46eb-9db3-78270701ec34" (UID: "046bb87c-2b1c-46eb-9db3-78270701ec34"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.892834 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/046bb87c-2b1c-46eb-9db3-78270701ec34-kube-api-access-csjpd" (OuterVolumeSpecName: "kube-api-access-csjpd") pod "046bb87c-2b1c-46eb-9db3-78270701ec34" (UID: "046bb87c-2b1c-46eb-9db3-78270701ec34"). InnerVolumeSpecName "kube-api-access-csjpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.895502 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.929693 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "046bb87c-2b1c-46eb-9db3-78270701ec34" (UID: "046bb87c-2b1c-46eb-9db3-78270701ec34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.965251 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-config-data" (OuterVolumeSpecName: "config-data") pod "046bb87c-2b1c-46eb-9db3-78270701ec34" (UID: "046bb87c-2b1c-46eb-9db3-78270701ec34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:08 crc kubenswrapper[4945]: E0108 23:42:08.967623 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a5a4dac963c906e9eb3223e2b1f096d4341464484bbc31fe53f0dde93c50bf12 is running failed: container process not found" containerID="a5a4dac963c906e9eb3223e2b1f096d4341464484bbc31fe53f0dde93c50bf12" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 08 23:42:08 crc kubenswrapper[4945]: E0108 23:42:08.968042 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a5a4dac963c906e9eb3223e2b1f096d4341464484bbc31fe53f0dde93c50bf12 is running failed: container process not found" containerID="a5a4dac963c906e9eb3223e2b1f096d4341464484bbc31fe53f0dde93c50bf12" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 08 23:42:08 crc kubenswrapper[4945]: E0108 23:42:08.968428 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a5a4dac963c906e9eb3223e2b1f096d4341464484bbc31fe53f0dde93c50bf12 is running failed: container process not found" containerID="a5a4dac963c906e9eb3223e2b1f096d4341464484bbc31fe53f0dde93c50bf12" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 08 23:42:08 crc kubenswrapper[4945]: E0108 23:42:08.968471 4945 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a5a4dac963c906e9eb3223e2b1f096d4341464484bbc31fe53f0dde93c50bf12 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="37125f43-8fb6-4625-a260-8d43cdbe167a" containerName="nova-cell0-conductor-conductor" Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.983130 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vhwj\" (UniqueName: \"kubernetes.io/projected/6dc39aab-86d6-45f6-b565-3da5375a1983-kube-api-access-9vhwj\") pod \"6dc39aab-86d6-45f6-b565-3da5375a1983\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.983192 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbe6e840-6658-49dd-b547-c58c4bc1479a-logs\") pod \"dbe6e840-6658-49dd-b547-c58c4bc1479a\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.983220 
4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-public-tls-certs\") pod \"6dc39aab-86d6-45f6-b565-3da5375a1983\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.983250 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90095c9-0666-4a15-878f-4b62280c3d00-operator-scripts\") pod \"d90095c9-0666-4a15-878f-4b62280c3d00\" (UID: \"d90095c9-0666-4a15-878f-4b62280c3d00\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.983349 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dc39aab-86d6-45f6-b565-3da5375a1983-run-httpd\") pod \"6dc39aab-86d6-45f6-b565-3da5375a1983\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.983395 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dc39aab-86d6-45f6-b565-3da5375a1983-log-httpd\") pod \"6dc39aab-86d6-45f6-b565-3da5375a1983\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.983438 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-config-data\") pod \"6dc39aab-86d6-45f6-b565-3da5375a1983\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.983459 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pcrq\" (UniqueName: \"kubernetes.io/projected/dbe6e840-6658-49dd-b547-c58c4bc1479a-kube-api-access-9pcrq\") pod \"dbe6e840-6658-49dd-b547-c58c4bc1479a\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.983485 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-internal-tls-certs\") pod \"6dc39aab-86d6-45f6-b565-3da5375a1983\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.983502 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6dc39aab-86d6-45f6-b565-3da5375a1983-etc-swift\") pod \"6dc39aab-86d6-45f6-b565-3da5375a1983\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.983561 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-combined-ca-bundle\") pod \"6dc39aab-86d6-45f6-b565-3da5375a1983\" (UID: \"6dc39aab-86d6-45f6-b565-3da5375a1983\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.983579 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbe6e840-6658-49dd-b547-c58c4bc1479a-config-data-custom\") pod \"dbe6e840-6658-49dd-b547-c58c4bc1479a\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") " Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.983617 4945 
Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.983676 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grj55\" (UniqueName: \"kubernetes.io/projected/d90095c9-0666-4a15-878f-4b62280c3d00-kube-api-access-grj55\") pod \"d90095c9-0666-4a15-878f-4b62280c3d00\" (UID: \"d90095c9-0666-4a15-878f-4b62280c3d00\") "
Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.983698 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe6e840-6658-49dd-b547-c58c4bc1479a-combined-ca-bundle\") pod \"dbe6e840-6658-49dd-b547-c58c4bc1479a\" (UID: \"dbe6e840-6658-49dd-b547-c58c4bc1479a\") "
Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.984144 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csjpd\" (UniqueName: \"kubernetes.io/projected/046bb87c-2b1c-46eb-9db3-78270701ec34-kube-api-access-csjpd\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.984162 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.984174 4945 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.984183 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/046bb87c-2b1c-46eb-9db3-78270701ec34-logs\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.984193 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046bb87c-2b1c-46eb-9db3-78270701ec34-config-data\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.984793 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dc39aab-86d6-45f6-b565-3da5375a1983-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6dc39aab-86d6-45f6-b565-3da5375a1983" (UID: "6dc39aab-86d6-45f6-b565-3da5375a1983"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.984763 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d90095c9-0666-4a15-878f-4b62280c3d00-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d90095c9-0666-4a15-878f-4b62280c3d00" (UID: "d90095c9-0666-4a15-878f-4b62280c3d00"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.985105 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbe6e840-6658-49dd-b547-c58c4bc1479a-logs" (OuterVolumeSpecName: "logs") pod "dbe6e840-6658-49dd-b547-c58c4bc1479a" (UID: "dbe6e840-6658-49dd-b547-c58c4bc1479a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.986782 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dc39aab-86d6-45f6-b565-3da5375a1983-kube-api-access-9vhwj" (OuterVolumeSpecName: "kube-api-access-9vhwj") pod "6dc39aab-86d6-45f6-b565-3da5375a1983" (UID: "6dc39aab-86d6-45f6-b565-3da5375a1983"). InnerVolumeSpecName "kube-api-access-9vhwj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.993072 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dc39aab-86d6-45f6-b565-3da5375a1983-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6dc39aab-86d6-45f6-b565-3da5375a1983" (UID: "6dc39aab-86d6-45f6-b565-3da5375a1983"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:42:08 crc kubenswrapper[4945]: I0108 23:42:08.998164 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d90095c9-0666-4a15-878f-4b62280c3d00-kube-api-access-grj55" (OuterVolumeSpecName: "kube-api-access-grj55") pod "d90095c9-0666-4a15-878f-4b62280c3d00" (UID: "d90095c9-0666-4a15-878f-4b62280c3d00"). InnerVolumeSpecName "kube-api-access-grj55". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.002298 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe6e840-6658-49dd-b547-c58c4bc1479a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dbe6e840-6658-49dd-b547-c58c4bc1479a" (UID: "dbe6e840-6658-49dd-b547-c58c4bc1479a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.002753 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dc39aab-86d6-45f6-b565-3da5375a1983-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6dc39aab-86d6-45f6-b565-3da5375a1983" (UID: "6dc39aab-86d6-45f6-b565-3da5375a1983"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.017924 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe6e840-6658-49dd-b547-c58c4bc1479a-kube-api-access-9pcrq" (OuterVolumeSpecName: "kube-api-access-9pcrq") pod "dbe6e840-6658-49dd-b547-c58c4bc1479a" (UID: "dbe6e840-6658-49dd-b547-c58c4bc1479a"). InnerVolumeSpecName "kube-api-access-9pcrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.031436 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe6e840-6658-49dd-b547-c58c4bc1479a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbe6e840-6658-49dd-b547-c58c4bc1479a" (UID: "dbe6e840-6658-49dd-b547-c58c4bc1479a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.048062 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-config-data" (OuterVolumeSpecName: "config-data") pod "6dc39aab-86d6-45f6-b565-3da5375a1983" (UID: "6dc39aab-86d6-45f6-b565-3da5375a1983"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.050402 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe6e840-6658-49dd-b547-c58c4bc1479a-config-data" (OuterVolumeSpecName: "config-data") pod "dbe6e840-6658-49dd-b547-c58c4bc1479a" (UID: "dbe6e840-6658-49dd-b547-c58c4bc1479a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.060033 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6dc39aab-86d6-45f6-b565-3da5375a1983" (UID: "6dc39aab-86d6-45f6-b565-3da5375a1983"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: E0108 23:42:09.063484 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37125f43_8fb6_4625_a260_8d43cdbe167a.slice/crio-a5a4dac963c906e9eb3223e2b1f096d4341464484bbc31fe53f0dde93c50bf12.scope\": RecentStats: unable to find data in memory cache]"
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.075762 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6dc39aab-86d6-45f6-b565-3da5375a1983" (UID: "6dc39aab-86d6-45f6-b565-3da5375a1983"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.077563 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6dc39aab-86d6-45f6-b565-3da5375a1983" (UID: "6dc39aab-86d6-45f6-b565-3da5375a1983"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.087507 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.088036 4945 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbe6e840-6658-49dd-b547-c58c4bc1479a-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.088366 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe6e840-6658-49dd-b547-c58c4bc1479a-config-data\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.088388 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grj55\" (UniqueName: \"kubernetes.io/projected/d90095c9-0666-4a15-878f-4b62280c3d00-kube-api-access-grj55\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.088404 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe6e840-6658-49dd-b547-c58c4bc1479a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.088416 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vhwj\" (UniqueName: \"kubernetes.io/projected/6dc39aab-86d6-45f6-b565-3da5375a1983-kube-api-access-9vhwj\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.088428 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbe6e840-6658-49dd-b547-c58c4bc1479a-logs\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.088439 4945 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.088450 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90095c9-0666-4a15-878f-4b62280c3d00-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.088463 4945 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dc39aab-86d6-45f6-b565-3da5375a1983-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.088472 4945 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6dc39aab-86d6-45f6-b565-3da5375a1983-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.088482 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-config-data\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.088492 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pcrq\" (UniqueName: \"kubernetes.io/projected/dbe6e840-6658-49dd-b547-c58c4bc1479a-kube-api-access-9pcrq\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.088503 4945 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6dc39aab-86d6-45f6-b565-3da5375a1983-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.088515 4945 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6dc39aab-86d6-45f6-b565-3da5375a1983-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.199295 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee25-account-create-update-wfvwd"
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.207810 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3bb0-account-create-update-4s5gt"
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.220576 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d684-account-create-update-ww984"
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.247116 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d4b0-account-create-update-k68cp"
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.255677 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2784-account-create-update-ll7qt"
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.280350 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9ea6-account-create-update-w9nlx"
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.293534 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg5h9\" (UniqueName: \"kubernetes.io/projected/da59c88c-d7a5-4fb4-8b32-4c73be685b4f-kube-api-access-lg5h9\") pod \"da59c88c-d7a5-4fb4-8b32-4c73be685b4f\" (UID: \"da59c88c-d7a5-4fb4-8b32-4c73be685b4f\") "
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.293593 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da59c88c-d7a5-4fb4-8b32-4c73be685b4f-operator-scripts\") pod \"da59c88c-d7a5-4fb4-8b32-4c73be685b4f\" (UID: \"da59c88c-d7a5-4fb4-8b32-4c73be685b4f\") "
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.296259 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da59c88c-d7a5-4fb4-8b32-4c73be685b4f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da59c88c-d7a5-4fb4-8b32-4c73be685b4f" (UID: "da59c88c-d7a5-4fb4-8b32-4c73be685b4f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.303469 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da59c88c-d7a5-4fb4-8b32-4c73be685b4f-kube-api-access-lg5h9" (OuterVolumeSpecName: "kube-api-access-lg5h9") pod "da59c88c-d7a5-4fb4-8b32-4c73be685b4f" (UID: "da59c88c-d7a5-4fb4-8b32-4c73be685b4f"). InnerVolumeSpecName "kube-api-access-lg5h9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.393687 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="04a2b873-3034-4b9f-9daf-81db6749d45f" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.163:8776/healthcheck\": read tcp 10.217.0.2:39222->10.217.0.163:8776: read: connection reset by peer"
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.397569 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwvmr\" (UniqueName: \"kubernetes.io/projected/8e6d9797-d686-44c0-918a-f17bac13b874-kube-api-access-qwvmr\") pod \"8e6d9797-d686-44c0-918a-f17bac13b874\" (UID: \"8e6d9797-d686-44c0-918a-f17bac13b874\") "
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.397643 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e6d9797-d686-44c0-918a-f17bac13b874-operator-scripts\") pod \"8e6d9797-d686-44c0-918a-f17bac13b874\" (UID: \"8e6d9797-d686-44c0-918a-f17bac13b874\") "
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.397680 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b55a1df4-3a27-411d-b6ff-8c72c20f4d05-operator-scripts\") pod \"b55a1df4-3a27-411d-b6ff-8c72c20f4d05\" (UID: \"b55a1df4-3a27-411d-b6ff-8c72c20f4d05\") "
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.397699 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pssp8\" (UniqueName: \"kubernetes.io/projected/b55a1df4-3a27-411d-b6ff-8c72c20f4d05-kube-api-access-pssp8\") pod \"b55a1df4-3a27-411d-b6ff-8c72c20f4d05\" (UID: \"b55a1df4-3a27-411d-b6ff-8c72c20f4d05\") "
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.397760 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs8zl\" (UniqueName: \"kubernetes.io/projected/179a20bc-72ca-4f86-8cd0-5c6df9211839-kube-api-access-rs8zl\") pod \"179a20bc-72ca-4f86-8cd0-5c6df9211839\" (UID: \"179a20bc-72ca-4f86-8cd0-5c6df9211839\") "
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.397893 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74d9ee-5e3b-4205-8ac4-1729d495861b-operator-scripts\") pod \"0e74d9ee-5e3b-4205-8ac4-1729d495861b\" (UID: \"0e74d9ee-5e3b-4205-8ac4-1729d495861b\") "
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.397927 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/179a20bc-72ca-4f86-8cd0-5c6df9211839-operator-scripts\") pod \"179a20bc-72ca-4f86-8cd0-5c6df9211839\" (UID: \"179a20bc-72ca-4f86-8cd0-5c6df9211839\") "
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.397955 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6xmd\" (UniqueName: \"kubernetes.io/projected/a6e1114b-d949-45e8-a640-d448d52ea983-kube-api-access-f6xmd\") pod \"a6e1114b-d949-45e8-a640-d448d52ea983\" (UID: \"a6e1114b-d949-45e8-a640-d448d52ea983\") "
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.397976 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6e1114b-d949-45e8-a640-d448d52ea983-operator-scripts\") pod \"a6e1114b-d949-45e8-a640-d448d52ea983\" (UID: \"a6e1114b-d949-45e8-a640-d448d52ea983\") "
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.398009 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbfbs\" (UniqueName: \"kubernetes.io/projected/0e74d9ee-5e3b-4205-8ac4-1729d495861b-kube-api-access-dbfbs\") pod \"0e74d9ee-5e3b-4205-8ac4-1729d495861b\" (UID: \"0e74d9ee-5e3b-4205-8ac4-1729d495861b\") "
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.398289 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b55a1df4-3a27-411d-b6ff-8c72c20f4d05-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b55a1df4-3a27-411d-b6ff-8c72c20f4d05" (UID: "b55a1df4-3a27-411d-b6ff-8c72c20f4d05"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.398353 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e6d9797-d686-44c0-918a-f17bac13b874-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e6d9797-d686-44c0-918a-f17bac13b874" (UID: "8e6d9797-d686-44c0-918a-f17bac13b874"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.398882 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6e1114b-d949-45e8-a640-d448d52ea983-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6e1114b-d949-45e8-a640-d448d52ea983" (UID: "a6e1114b-d949-45e8-a640-d448d52ea983"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.399008 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b55a1df4-3a27-411d-b6ff-8c72c20f4d05-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.399027 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg5h9\" (UniqueName: \"kubernetes.io/projected/da59c88c-d7a5-4fb4-8b32-4c73be685b4f-kube-api-access-lg5h9\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.399039 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da59c88c-d7a5-4fb4-8b32-4c73be685b4f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.399048 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e6d9797-d686-44c0-918a-f17bac13b874-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.399355 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e74d9ee-5e3b-4205-8ac4-1729d495861b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e74d9ee-5e3b-4205-8ac4-1729d495861b" (UID: "0e74d9ee-5e3b-4205-8ac4-1729d495861b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.399636 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/179a20bc-72ca-4f86-8cd0-5c6df9211839-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "179a20bc-72ca-4f86-8cd0-5c6df9211839" (UID: "179a20bc-72ca-4f86-8cd0-5c6df9211839"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.403962 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6d9797-d686-44c0-918a-f17bac13b874-kube-api-access-qwvmr" (OuterVolumeSpecName: "kube-api-access-qwvmr") pod "8e6d9797-d686-44c0-918a-f17bac13b874" (UID: "8e6d9797-d686-44c0-918a-f17bac13b874"). InnerVolumeSpecName "kube-api-access-qwvmr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.404339 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b55a1df4-3a27-411d-b6ff-8c72c20f4d05-kube-api-access-pssp8" (OuterVolumeSpecName: "kube-api-access-pssp8") pod "b55a1df4-3a27-411d-b6ff-8c72c20f4d05" (UID: "b55a1df4-3a27-411d-b6ff-8c72c20f4d05"). InnerVolumeSpecName "kube-api-access-pssp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.405929 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6e1114b-d949-45e8-a640-d448d52ea983-kube-api-access-f6xmd" (OuterVolumeSpecName: "kube-api-access-f6xmd") pod "a6e1114b-d949-45e8-a640-d448d52ea983" (UID: "a6e1114b-d949-45e8-a640-d448d52ea983"). InnerVolumeSpecName "kube-api-access-f6xmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.406261 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/179a20bc-72ca-4f86-8cd0-5c6df9211839-kube-api-access-rs8zl" (OuterVolumeSpecName: "kube-api-access-rs8zl") pod "179a20bc-72ca-4f86-8cd0-5c6df9211839" (UID: "179a20bc-72ca-4f86-8cd0-5c6df9211839"). InnerVolumeSpecName "kube-api-access-rs8zl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.408083 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e74d9ee-5e3b-4205-8ac4-1729d495861b-kube-api-access-dbfbs" (OuterVolumeSpecName: "kube-api-access-dbfbs") pod "0e74d9ee-5e3b-4205-8ac4-1729d495861b" (UID: "0e74d9ee-5e3b-4205-8ac4-1729d495861b"). InnerVolumeSpecName "kube-api-access-dbfbs". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.511041 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e74d9ee-5e3b-4205-8ac4-1729d495861b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.511077 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/179a20bc-72ca-4f86-8cd0-5c6df9211839-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.511087 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6xmd\" (UniqueName: \"kubernetes.io/projected/a6e1114b-d949-45e8-a640-d448d52ea983-kube-api-access-f6xmd\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.511097 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbfbs\" (UniqueName: \"kubernetes.io/projected/0e74d9ee-5e3b-4205-8ac4-1729d495861b-kube-api-access-dbfbs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.511107 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6e1114b-d949-45e8-a640-d448d52ea983-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.511117 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwvmr\" (UniqueName: \"kubernetes.io/projected/8e6d9797-d686-44c0-918a-f17bac13b874-kube-api-access-qwvmr\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.511126 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pssp8\" (UniqueName: \"kubernetes.io/projected/b55a1df4-3a27-411d-b6ff-8c72c20f4d05-kube-api-access-pssp8\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.511135 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs8zl\" (UniqueName: \"kubernetes.io/projected/179a20bc-72ca-4f86-8cd0-5c6df9211839-kube-api-access-rs8zl\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.568268 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84e9-account-create-update-jm7gq" event={"ID":"d90095c9-0666-4a15-878f-4b62280c3d00","Type":"ContainerDied","Data":"f6a7e698ee325e87cef1c63af91160534db4796156325b9f373eefee80deec0a"} Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.568392 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84e9-account-create-update-jm7gq" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.577941 4945 generic.go:334] "Generic (PLEG): container finished" podID="04a2b873-3034-4b9f-9daf-81db6749d45f" containerID="ebe090f7ada13f633224e0bbcee404b72c09adfb8c09163bb99a6a8d5ca17ea4" exitCode=0 Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.578095 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"04a2b873-3034-4b9f-9daf-81db6749d45f","Type":"ContainerDied","Data":"ebe090f7ada13f633224e0bbcee404b72c09adfb8c09163bb99a6a8d5ca17ea4"} Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.595815 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9ea6-account-create-update-w9nlx" event={"ID":"8e6d9797-d686-44c0-918a-f17bac13b874","Type":"ContainerDied","Data":"7fada45949056355c390abae84920c2de26411f1ba68881c7a048906d1ac1f6d"} Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.595839 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9ea6-account-create-update-w9nlx" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.615498 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xp9kx" event={"ID":"eda15927-fac6-455b-8615-24f8f535c80a","Type":"ContainerDied","Data":"34016ce4644f1818736731b4ae3db44588e4cd1bd35c17ca2b8137980fa726f2"} Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.615549 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34016ce4644f1818736731b4ae3db44588e4cd1bd35c17ca2b8137980fa726f2" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.618807 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5ede73cf-0521-442e-8f01-b63d8d9b4725","Type":"ContainerDied","Data":"2c6e8cda26c18989fb1201112dda148e360fdcb1f3022385c3fe2f5a5422b309"} Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.618820 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.618840 4945 scope.go:117] "RemoveContainer" containerID="aab0536ef7d9d6c8e9048d5c7063401c083ca8fef6235e9b02f49cd7abccc975" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.639497 4945 generic.go:334] "Generic (PLEG): container finished" podID="2eb23b1e-c7b1-465a-a91c-6042942e604a" containerID="f67a08265ea88bae6d39299224d2a2604f867f86d99af833fa2c5deefc166ff7" exitCode=0 Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.639570 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2eb23b1e-c7b1-465a-a91c-6042942e604a","Type":"ContainerDied","Data":"f67a08265ea88bae6d39299224d2a2604f867f86d99af833fa2c5deefc166ff7"} Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.647346 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2784-account-create-update-ll7qt" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.649833 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d684-account-create-update-ww984" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.653692 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d4b0-account-create-update-k68cp" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.647311 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2784-account-create-update-ll7qt" event={"ID":"b55a1df4-3a27-411d-b6ff-8c72c20f4d05","Type":"ContainerDied","Data":"0011e3b612feffbcd9b4e10d59cbad583724c5fe9e985a2e0f207a49dfe2cfbe"} Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.680308 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d684-account-create-update-ww984" event={"ID":"a6e1114b-d949-45e8-a640-d448d52ea983","Type":"ContainerDied","Data":"6e8322015f6a0a99328fa09314da7c008da8b6ba4f119d57da79ed1388682f3f"} Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.680338 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d4b0-account-create-update-k68cp" event={"ID":"0e74d9ee-5e3b-4205-8ac4-1729d495861b","Type":"ContainerDied","Data":"5d3adb5d0637936558186af86a2ffea6ae1e4b280e4032115f715139b8c50b50"} Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.680913 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.681195 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9674718-110d-4241-a199-9663979defde" containerName="ceilometer-central-agent" containerID="cri-o://587e368516c01890b1b0a081641da4fdf10374a3be177548851e1df43ac2b62d" gracePeriod=30 Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.685004 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9674718-110d-4241-a199-9663979defde" containerName="sg-core" containerID="cri-o://a2db183acfe6ab165940694a43d75dcfd08108c8d4af23e41358e09eb336e30e" gracePeriod=30 Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.685133 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9674718-110d-4241-a199-9663979defde" containerName="proxy-httpd" containerID="cri-o://93c9d2ce6e3251c442de8342769f7f316659735ad8ce35d3e69e890d9d23a3e3" gracePeriod=30 Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.685201 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9674718-110d-4241-a199-9663979defde" containerName="ceilometer-notification-agent" containerID="cri-o://90a4f2d6b6f7481844beacbb10433c3f569585443e447333d0efb309134aacf4" gracePeriod=30 Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.800921 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" event={"ID":"dbe6e840-6658-49dd-b547-c58c4bc1479a","Type":"ContainerDied","Data":"d50dc51ad1c2502ea175e1218552265a69045edc7e39e9e62e65174f82d15ec9"} Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.801134 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-f5458d448-xj5lz" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.826008 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.826254 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="3b4a4044-c9b6-49c9-98ed-446af4a3fe1f" containerName="kube-state-metrics" containerID="cri-o://a01a9cdaf1e6dcbf50fb7c8fdd46ee8450d6a7d635fd11e06d3b74c301d9e2af" gracePeriod=30 Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.880075 4945 generic.go:334] "Generic (PLEG): container finished" podID="37125f43-8fb6-4625-a260-8d43cdbe167a" containerID="a5a4dac963c906e9eb3223e2b1f096d4341464484bbc31fe53f0dde93c50bf12" exitCode=0 Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.880170 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"37125f43-8fb6-4625-a260-8d43cdbe167a","Type":"ContainerDied","Data":"a5a4dac963c906e9eb3223e2b1f096d4341464484bbc31fe53f0dde93c50bf12"} Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.905566 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4ee0-account-create-update-pmzqj"] Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.942006 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4ee0-account-create-update-pmzqj"] Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.945911 4945 generic.go:334] "Generic (PLEG): container finished" podID="d4fe48ad-f532-4c60-b88b-95a894d0b519" containerID="6864af6cbe8f647a9f0c28948b92d59f1746e14059ad5a2b80f2933cb34799cc" exitCode=0 Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.946008 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4fe48ad-f532-4c60-b88b-95a894d0b519","Type":"ContainerDied","Data":"6864af6cbe8f647a9f0c28948b92d59f1746e14059ad5a2b80f2933cb34799cc"} Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.946033 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4fe48ad-f532-4c60-b88b-95a894d0b519","Type":"ContainerDied","Data":"52028ca7c043619fc7c81c5925741eabcc44472143fdc093e2303c4af094b997"} Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.946064 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52028ca7c043619fc7c81c5925741eabcc44472143fdc093e2303c4af094b997" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.965191 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.965502 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="d9574582-49aa-48ec-8b43-bc55ed78a3d1" containerName="memcached" containerID="cri-o://01e6f6d29fbc37e10b337041462136b759e49d0ec4a3d75774556109c7c74797" gracePeriod=30 Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.970551 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.973479 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c1913ce-ea65-4745-baf8-621191c50b55-logs\") pod \"3c1913ce-ea65-4745-baf8-621191c50b55\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.973579 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-combined-ca-bundle\") pod \"3c1913ce-ea65-4745-baf8-621191c50b55\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.973610 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-internal-tls-certs\") pod \"3c1913ce-ea65-4745-baf8-621191c50b55\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.973678 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-scripts\") pod \"3c1913ce-ea65-4745-baf8-621191c50b55\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.973736 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-config-data\") pod \"3c1913ce-ea65-4745-baf8-621191c50b55\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.974747 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c1913ce-ea65-4745-baf8-621191c50b55-logs" (OuterVolumeSpecName: "logs") pod "3c1913ce-ea65-4745-baf8-621191c50b55" (UID: "3c1913ce-ea65-4745-baf8-621191c50b55"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.975313 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c1913ce-ea65-4745-baf8-621191c50b55-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.990329 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xp9kx" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.990550 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-654744c45f-2rmcg" event={"ID":"046bb87c-2b1c-46eb-9db3-78270701ec34","Type":"ContainerDied","Data":"fc4a65f0c3dad5723294cedeb1f3ce72310aa1704e670c29260684deb0f7c9c3"} Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.990624 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-654744c45f-2rmcg" Jan 08 23:42:09 crc kubenswrapper[4945]: I0108 23:42:09.991205 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-scripts" (OuterVolumeSpecName: "scripts") pod "3c1913ce-ea65-4745-baf8-621191c50b55" (UID: "3c1913ce-ea65-4745-baf8-621191c50b55"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.017189 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-96f5cc787-k4zdr" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.050523 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ee25-account-create-update-wfvwd" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.076493 4945 scope.go:117] "RemoveContainer" containerID="f4d36b471e22f23698a2643337f5b4f19fbe9b1e28ec3042b9fac4a9d84d2ae4" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.078515 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90046452-437b-4666-83a8-e8ee09bfc932" path="/var/lib/kubelet/pods/90046452-437b-4666-83a8-e8ee09bfc932/volumes" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.079135 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf98b083-95c0-4e25-b0c2-e5064ebde5fd" path="/var/lib/kubelet/pods/bf98b083-95c0-4e25-b0c2-e5064ebde5fd/volumes" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.080491 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksk2n\" (UniqueName: \"kubernetes.io/projected/eda15927-fac6-455b-8615-24f8f535c80a-kube-api-access-ksk2n\") pod \"eda15927-fac6-455b-8615-24f8f535c80a\" (UID: \"eda15927-fac6-455b-8615-24f8f535c80a\") " Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.080622 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda15927-fac6-455b-8615-24f8f535c80a-operator-scripts\") pod \"eda15927-fac6-455b-8615-24f8f535c80a\" (UID: \"eda15927-fac6-455b-8615-24f8f535c80a\") " Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.080849 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-public-tls-certs\") pod \"3c1913ce-ea65-4745-baf8-621191c50b55\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.080933 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfz5z\" (UniqueName: \"kubernetes.io/projected/3c1913ce-ea65-4745-baf8-621191c50b55-kube-api-access-tfz5z\") pod \"3c1913ce-ea65-4745-baf8-621191c50b55\" (UID: \"3c1913ce-ea65-4745-baf8-621191c50b55\") " Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.081393 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.085362 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eda15927-fac6-455b-8615-24f8f535c80a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eda15927-fac6-455b-8615-24f8f535c80a" (UID: "eda15927-fac6-455b-8615-24f8f535c80a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.086202 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c1913ce-ea65-4745-baf8-621191c50b55" (UID: "3c1913ce-ea65-4745-baf8-621191c50b55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.087570 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda15927-fac6-455b-8615-24f8f535c80a-kube-api-access-ksk2n" (OuterVolumeSpecName: "kube-api-access-ksk2n") pod "eda15927-fac6-455b-8615-24f8f535c80a" (UID: "eda15927-fac6-455b-8615-24f8f535c80a"). InnerVolumeSpecName "kube-api-access-ksk2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.091632 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c1913ce-ea65-4745-baf8-621191c50b55-kube-api-access-tfz5z" (OuterVolumeSpecName: "kube-api-access-tfz5z") pod "3c1913ce-ea65-4745-baf8-621191c50b55" (UID: "3c1913ce-ea65-4745-baf8-621191c50b55"). InnerVolumeSpecName "kube-api-access-tfz5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.098912 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca0aa3d3-8093-42f1-9fa6-ad3883441ab2" path="/var/lib/kubelet/pods/ca0aa3d3-8093-42f1-9fa6-ad3883441ab2/volumes" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.100612 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3bb0-account-create-update-4s5gt" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.125961 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.134286 4945 generic.go:334] "Generic (PLEG): container finished" podID="3c1913ce-ea65-4745-baf8-621191c50b55" containerID="bb6df235b3481ee3764d466ecffc6041436a32a6b1b67f636a1ce3c33bbc51e5" exitCode=0 Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.134376 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d4586c964-cfb7b" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.147855 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-config-data" (OuterVolumeSpecName: "config-data") pod "3c1913ce-ea65-4745-baf8-621191c50b55" (UID: "3c1913ce-ea65-4745-baf8-621191c50b55"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.159100 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ea1eec40-294d-4749-bdb2-678289eeb815" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": read tcp 10.217.0.2:53398->10.217.0.204:8775: read: connection reset by peer" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.159518 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ea1eec40-294d-4749-bdb2-678289eeb815" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": read tcp 10.217.0.2:53392->10.217.0.204:8775: read: connection reset by peer" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.182982 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksk2n\" (UniqueName: \"kubernetes.io/projected/eda15927-fac6-455b-8615-24f8f535c80a-kube-api-access-ksk2n\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.183024 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eda15927-fac6-455b-8615-24f8f535c80a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.183033 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.183045 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.183058 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfz5z\" (UniqueName: \"kubernetes.io/projected/3c1913ce-ea65-4745-baf8-621191c50b55-kube-api-access-tfz5z\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:10 crc kubenswrapper[4945]: E0108 23:42:10.190141 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d56225670b178c36792afe8b5e40b521c40f21fe6ec9c747b20bf8f26e4c3640" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 08 23:42:10 crc kubenswrapper[4945]: E0108 23:42:10.214489 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d56225670b178c36792afe8b5e40b521c40f21fe6ec9c747b20bf8f26e4c3640" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.223259 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3c1913ce-ea65-4745-baf8-621191c50b55" (UID: "3c1913ce-ea65-4745-baf8-621191c50b55"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:10 crc kubenswrapper[4945]: E0108 23:42:10.224508 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d56225670b178c36792afe8b5e40b521c40f21fe6ec9c747b20bf8f26e4c3640" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 08 23:42:10 crc kubenswrapper[4945]: E0108 23:42:10.224572 4945 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9f823122-da64-4ac4-aa14-96bc8f2f9c1c" containerName="nova-scheduler-scheduler" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.283898 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-config-data-default\") pod \"d4fe48ad-f532-4c60-b88b-95a894d0b519\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.283941 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4fe48ad-f532-4c60-b88b-95a894d0b519-galera-tls-certs\") pod \"d4fe48ad-f532-4c60-b88b-95a894d0b519\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.284089 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"d4fe48ad-f532-4c60-b88b-95a894d0b519\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.284147 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4fe48ad-f532-4c60-b88b-95a894d0b519-combined-ca-bundle\") pod \"d4fe48ad-f532-4c60-b88b-95a894d0b519\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.284187 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4fe48ad-f532-4c60-b88b-95a894d0b519-config-data-generated\") pod \"d4fe48ad-f532-4c60-b88b-95a894d0b519\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.284334 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-kolla-config\") pod \"d4fe48ad-f532-4c60-b88b-95a894d0b519\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.284633 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86rx5\" (UniqueName: \"kubernetes.io/projected/d4fe48ad-f532-4c60-b88b-95a894d0b519-kube-api-access-86rx5\") pod \"d4fe48ad-f532-4c60-b88b-95a894d0b519\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.284727 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-operator-scripts\") pod \"d4fe48ad-f532-4c60-b88b-95a894d0b519\" (UID: \"d4fe48ad-f532-4c60-b88b-95a894d0b519\") " Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.285455 4945 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.286766 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4fe48ad-f532-4c60-b88b-95a894d0b519" (UID: "d4fe48ad-f532-4c60-b88b-95a894d0b519"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.287138 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "d4fe48ad-f532-4c60-b88b-95a894d0b519" (UID: "d4fe48ad-f532-4c60-b88b-95a894d0b519"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.286865 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4fe48ad-f532-4c60-b88b-95a894d0b519-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "d4fe48ad-f532-4c60-b88b-95a894d0b519" (UID: "d4fe48ad-f532-4c60-b88b-95a894d0b519"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.287208 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "d4fe48ad-f532-4c60-b88b-95a894d0b519" (UID: "d4fe48ad-f532-4c60-b88b-95a894d0b519"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.298426 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4fe48ad-f532-4c60-b88b-95a894d0b519-kube-api-access-86rx5" (OuterVolumeSpecName: "kube-api-access-86rx5") pod "d4fe48ad-f532-4c60-b88b-95a894d0b519" (UID: "d4fe48ad-f532-4c60-b88b-95a894d0b519"). InnerVolumeSpecName "kube-api-access-86rx5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:10 crc kubenswrapper[4945]: E0108 23:42:10.325825 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fb3097d8a9d3e193dbb4bde56076f63b970772a4be7adc268f6f465dcf3c9975" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 08 23:42:10 crc kubenswrapper[4945]: E0108 23:42:10.333214 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fb3097d8a9d3e193dbb4bde56076f63b970772a4be7adc268f6f465dcf3c9975" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 08 23:42:10 crc kubenswrapper[4945]: E0108 23:42:10.335242 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fb3097d8a9d3e193dbb4bde56076f63b970772a4be7adc268f6f465dcf3c9975" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 08 23:42:10 crc kubenswrapper[4945]: E0108 23:42:10.335328 4945 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="eefc7456-a6c7-4442-aa3a-370a1f9b01fa" containerName="ovn-northd" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.341378 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "mysql-db") pod "d4fe48ad-f532-4c60-b88b-95a894d0b519" (UID: "d4fe48ad-f532-4c60-b88b-95a894d0b519"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.390634 4945 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.390663 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86rx5\" (UniqueName: \"kubernetes.io/projected/d4fe48ad-f532-4c60-b88b-95a894d0b519-kube-api-access-86rx5\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.390673 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.390681 4945 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4fe48ad-f532-4c60-b88b-95a894d0b519-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.390702 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.390711 4945 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4fe48ad-f532-4c60-b88b-95a894d0b519-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.398118 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4fe48ad-f532-4c60-b88b-95a894d0b519-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4fe48ad-f532-4c60-b88b-95a894d0b519" (UID: "d4fe48ad-f532-4c60-b88b-95a894d0b519"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.433169 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75cbb987cb-dt6t6" podUID="227e0b3d-d5ba-4265-a7b9-0419deb61603" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.158:9311/healthcheck\": dial tcp 10.217.0.158:9311: connect: connection refused" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.436764 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75cbb987cb-dt6t6" podUID="227e0b3d-d5ba-4265-a7b9-0419deb61603" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.158:9311/healthcheck\": dial tcp 10.217.0.158:9311: connect: connection refused" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.449973 4945 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.483612 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3c1913ce-ea65-4745-baf8-621191c50b55" (UID: "3c1913ce-ea65-4745-baf8-621191c50b55"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.495390 4945 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c1913ce-ea65-4745-baf8-621191c50b55-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.495445 4945 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.495460 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4fe48ad-f532-4c60-b88b-95a894d0b519-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:10 crc kubenswrapper[4945]: E0108 23:42:10.513467 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8a7142f351f49dd941a579a875be8ea9073637a6c87204db987a41fea1ed6c9f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 08 23:42:10 crc kubenswrapper[4945]: E0108 23:42:10.527382 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8a7142f351f49dd941a579a875be8ea9073637a6c87204db987a41fea1ed6c9f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 08 23:42:10 crc kubenswrapper[4945]: E0108 23:42:10.542303 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8a7142f351f49dd941a579a875be8ea9073637a6c87204db987a41fea1ed6c9f" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 08 23:42:10 crc kubenswrapper[4945]: E0108 23:42:10.542379 4945 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="7b8f132e-3fda-4a38-8416-1055a62e7552" containerName="nova-cell1-conductor-conductor" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.598801 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4fe48ad-f532-4c60-b88b-95a894d0b519-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "d4fe48ad-f532-4c60-b88b-95a894d0b519" (UID: "d4fe48ad-f532-4c60-b88b-95a894d0b519"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:10 crc kubenswrapper[4945]: I0108 23:42:10.707229 4945 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4fe48ad-f532-4c60-b88b-95a894d0b519-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:10 crc kubenswrapper[4945]: E0108 23:42:10.754044 4945 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 08 23:42:10 crc kubenswrapper[4945]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 08 23:42:10 crc kubenswrapper[4945]: Jan 08 23:42:10 crc kubenswrapper[4945]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 08 23:42:10 crc kubenswrapper[4945]: Jan 08 23:42:10 crc kubenswrapper[4945]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 08 23:42:10 crc kubenswrapper[4945]: Jan 08 23:42:10 crc kubenswrapper[4945]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 08 23:42:10 crc kubenswrapper[4945]: Jan 08 23:42:10 crc kubenswrapper[4945]: if [ -n "" ]; then Jan 08 23:42:10 crc kubenswrapper[4945]: GRANT_DATABASE="" Jan 08 23:42:10 crc kubenswrapper[4945]: else Jan 08 23:42:10 crc kubenswrapper[4945]: GRANT_DATABASE="*" Jan 08 23:42:10 crc kubenswrapper[4945]: fi Jan 08 23:42:10 crc kubenswrapper[4945]: Jan 08 23:42:10 crc kubenswrapper[4945]: # going for maximum compatibility here: Jan 08 23:42:10 crc kubenswrapper[4945]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 08 23:42:10 crc kubenswrapper[4945]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 08 23:42:10 crc kubenswrapper[4945]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 08 23:42:10 crc kubenswrapper[4945]: # support updates Jan 08 23:42:10 crc kubenswrapper[4945]: Jan 08 23:42:10 crc kubenswrapper[4945]: $MYSQL_CMD < logger="UnhandledError" Jan 08 23:42:10 crc kubenswrapper[4945]: E0108 23:42:10.755141 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-bp9dq" podUID="14229729-9655-4484-a4d2-eabe80fc5abb" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.149186 4945 generic.go:334] "Generic (PLEG): container finished" podID="3b4a4044-c9b6-49c9-98ed-446af4a3fe1f" containerID="a01a9cdaf1e6dcbf50fb7c8fdd46ee8450d6a7d635fd11e06d3b74c301d9e2af" exitCode=2 Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.152264 4945 generic.go:334] "Generic (PLEG): container finished" podID="7b8f132e-3fda-4a38-8416-1055a62e7552" containerID="8a7142f351f49dd941a579a875be8ea9073637a6c87204db987a41fea1ed6c9f" exitCode=0 Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.155460 4945 generic.go:334] "Generic (PLEG): container finished" podID="bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" containerID="a3b7d465ce7932bc7a2c58b3a8d58a6d40a84dc47fe41a15fffd4d2e75e42570" exitCode=0 Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.162087 4945 generic.go:334] "Generic (PLEG): container finished" podID="9f823122-da64-4ac4-aa14-96bc8f2f9c1c" containerID="d56225670b178c36792afe8b5e40b521c40f21fe6ec9c747b20bf8f26e4c3640" exitCode=0 Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.167275 4945 generic.go:334] "Generic (PLEG): container finished" podID="227e0b3d-d5ba-4265-a7b9-0419deb61603" containerID="79c3e5ad5b8d05cf65c473b2c9291f7836e5f788b3ab861c6aaa651a1b04f94d" exitCode=0 Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.178377 4945 generic.go:334] "Generic (PLEG): container finished" podID="adb334a7-9a7f-4e20-9dc8-092b9372bb10" containerID="0e98c349bd4bb46a725fbf926d7b62bd3253a7708fe210e97d08c5303e0c8116" exitCode=0 Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.180667 4945 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-bp9dq" secret="" err="secret \"galera-openstack-dockercfg-4tbhl\" not found" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.182874 4945 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 08 23:42:11 crc kubenswrapper[4945]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 08 23:42:11 crc kubenswrapper[4945]: Jan 08 23:42:11 crc kubenswrapper[4945]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 08 23:42:11 crc kubenswrapper[4945]: Jan 08 23:42:11 crc kubenswrapper[4945]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 08 23:42:11 crc kubenswrapper[4945]: Jan 08 23:42:11 crc kubenswrapper[4945]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 08 23:42:11 crc kubenswrapper[4945]: Jan 08 23:42:11 crc kubenswrapper[4945]: if [ -n "" ]; then Jan 08 23:42:11 crc kubenswrapper[4945]: GRANT_DATABASE="" Jan 08 23:42:11 crc kubenswrapper[4945]: else Jan 08 23:42:11 crc kubenswrapper[4945]: GRANT_DATABASE="*" Jan 08 23:42:11 crc kubenswrapper[4945]: fi Jan 08 23:42:11 crc kubenswrapper[4945]: Jan 08 23:42:11 crc kubenswrapper[4945]: # going for maximum compatibility here: Jan 08 23:42:11 crc kubenswrapper[4945]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 08 23:42:11 crc kubenswrapper[4945]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 08 23:42:11 crc kubenswrapper[4945]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 08 23:42:11 crc kubenswrapper[4945]: # support updates Jan 08 23:42:11 crc kubenswrapper[4945]: Jan 08 23:42:11 crc kubenswrapper[4945]: $MYSQL_CMD < logger="UnhandledError" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.184185 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-bp9dq" podUID="14229729-9655-4484-a4d2-eabe80fc5abb" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.203966 4945 generic.go:334] "Generic (PLEG): container finished" podID="c9674718-110d-4241-a199-9663979defde" containerID="93c9d2ce6e3251c442de8342769f7f316659735ad8ce35d3e69e890d9d23a3e3" exitCode=0 Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.204013 4945 generic.go:334] "Generic (PLEG): container finished" podID="c9674718-110d-4241-a199-9663979defde" containerID="a2db183acfe6ab165940694a43d75dcfd08108c8d4af23e41358e09eb336e30e" exitCode=2 Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.204021 4945 generic.go:334] "Generic (PLEG): container finished" podID="c9674718-110d-4241-a199-9663979defde" containerID="90a4f2d6b6f7481844beacbb10433c3f569585443e447333d0efb309134aacf4" exitCode=0 Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.204030 4945 generic.go:334] "Generic (PLEG): container finished" podID="c9674718-110d-4241-a199-9663979defde" containerID="587e368516c01890b1b0a081641da4fdf10374a3be177548851e1df43ac2b62d" exitCode=0 Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.209721 4945 generic.go:334] "Generic (PLEG): container finished" podID="ea1eec40-294d-4749-bdb2-678289eeb815" containerID="360c56ebe46377c61cd806e3349a6a48d19f706ccb293962a2db56915941150d" exitCode=0 Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.209838 
4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-xp9kx" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.218737 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.228749 4945 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.228839 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14229729-9655-4484-a4d2-eabe80fc5abb-operator-scripts podName:14229729-9655-4484-a4d2-eabe80fc5abb nodeName:}" failed. No retries permitted until 2026-01-08 23:42:11.728817784 +0000 UTC m=+1602.039976730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/14229729-9655-4484-a4d2-eabe80fc5abb-operator-scripts") pod "root-account-create-update-bp9dq" (UID: "14229729-9655-4484-a4d2-eabe80fc5abb") : configmap "openstack-scripts" not found Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.229896 4945 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.229s" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.230013 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4ee0-account-create-update-8b2hj"] Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.230745 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe6e840-6658-49dd-b547-c58c4bc1479a" containerName="barbican-keystone-listener" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.230873 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe6e840-6658-49dd-b547-c58c4bc1479a" containerName="barbican-keystone-listener" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.231046 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fe48ad-f532-4c60-b88b-95a894d0b519" containerName="galera" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.231146 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fe48ad-f532-4c60-b88b-95a894d0b519" containerName="galera" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.231282 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046bb87c-2b1c-46eb-9db3-78270701ec34" containerName="barbican-worker" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.231349 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="046bb87c-2b1c-46eb-9db3-78270701ec34" containerName="barbican-worker" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.231423 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fe48ad-f532-4c60-b88b-95a894d0b519" containerName="mysql-bootstrap" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.231566 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fe48ad-f532-4c60-b88b-95a894d0b519" containerName="mysql-bootstrap" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.231636 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dc39aab-86d6-45f6-b565-3da5375a1983" containerName="proxy-httpd" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.231687 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dc39aab-86d6-45f6-b565-3da5375a1983" containerName="proxy-httpd" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 
23:42:11.231758 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c1913ce-ea65-4745-baf8-621191c50b55" containerName="placement-log" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.231809 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c1913ce-ea65-4745-baf8-621191c50b55" containerName="placement-log" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.231907 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046bb87c-2b1c-46eb-9db3-78270701ec34" containerName="barbican-worker-log" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.232071 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="046bb87c-2b1c-46eb-9db3-78270701ec34" containerName="barbican-worker-log" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.232144 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe6e840-6658-49dd-b547-c58c4bc1479a" containerName="barbican-keystone-listener-log" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.232196 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe6e840-6658-49dd-b547-c58c4bc1479a" containerName="barbican-keystone-listener-log" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.232267 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dc39aab-86d6-45f6-b565-3da5375a1983" containerName="proxy-server" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.232320 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dc39aab-86d6-45f6-b565-3da5375a1983" containerName="proxy-server" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.232372 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ede73cf-0521-442e-8f01-b63d8d9b4725" containerName="nova-cell1-novncproxy-novncproxy" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.232428 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ede73cf-0521-442e-8f01-b63d8d9b4725" containerName="nova-cell1-novncproxy-novncproxy" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.232483 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c1913ce-ea65-4745-baf8-621191c50b55" containerName="placement-api" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.232542 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c1913ce-ea65-4745-baf8-621191c50b55" containerName="placement-api" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.232818 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="046bb87c-2b1c-46eb-9db3-78270701ec34" containerName="barbican-worker-log" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.232897 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dc39aab-86d6-45f6-b565-3da5375a1983" containerName="proxy-httpd" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.232959 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c1913ce-ea65-4745-baf8-621191c50b55" containerName="placement-log" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.233041 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fe48ad-f532-4c60-b88b-95a894d0b519" containerName="galera" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.233096 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c1913ce-ea65-4745-baf8-621191c50b55" containerName="placement-api" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.233215 4945 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5ede73cf-0521-442e-8f01-b63d8d9b4725" containerName="nova-cell1-novncproxy-novncproxy" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.233276 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dc39aab-86d6-45f6-b565-3da5375a1983" containerName="proxy-server" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.233375 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbe6e840-6658-49dd-b547-c58c4bc1479a" containerName="barbican-keystone-listener" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.233485 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbe6e840-6658-49dd-b547-c58c4bc1479a" containerName="barbican-keystone-listener-log" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.233659 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="046bb87c-2b1c-46eb-9db3-78270701ec34" containerName="barbican-worker" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.234562 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-96f5cc787-k4zdr" event={"ID":"6dc39aab-86d6-45f6-b565-3da5375a1983","Type":"ContainerDied","Data":"f3190a7601d598152a37af396c7904b88f6f42e507bb39ea42ef20cf86513a75"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.234781 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4ee0-account-create-update-8b2hj" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.234795 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ee25-account-create-update-wfvwd" event={"ID":"da59c88c-d7a5-4fb4-8b32-4c73be685b4f","Type":"ContainerDied","Data":"ac153fe98a05ae48706179d49832c779b511da28d0ca4b3e23d77335faf159e7"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.235036 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4ee0-account-create-update-8b2hj"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.235218 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-bzgn7"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.235386 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3bb0-account-create-update-4s5gt" event={"ID":"179a20bc-72ca-4f86-8cd0-5c6df9211839","Type":"ContainerDied","Data":"f34715e2f1d4a845c572bcd6537dda003609d026e4ecbea6811c93eb8b0e9c47"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.235593 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ntq4n"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.236812 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ntq4n"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.236849 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d4586c964-cfb7b" event={"ID":"3c1913ce-ea65-4745-baf8-621191c50b55","Type":"ContainerDied","Data":"bb6df235b3481ee3764d466ecffc6041436a32a6b1b67f636a1ce3c33bbc51e5"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.236884 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-bzgn7"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.236908 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-76b55d6f4b-r5hn5"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.236927 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d4586c964-cfb7b" 
event={"ID":"3c1913ce-ea65-4745-baf8-621191c50b55","Type":"ContainerDied","Data":"0dd038e71808305ae5cd0a96a285dd6bff4b3665e314c0a22f1ac19bea822116"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.236941 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f","Type":"ContainerDied","Data":"a01a9cdaf1e6dcbf50fb7c8fdd46ee8450d6a7d635fd11e06d3b74c301d9e2af"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.236956 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.236973 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4ee0-account-create-update-8b2hj"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237001 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f","Type":"ContainerDied","Data":"636798b16a7f8c3cac6ee9996c87f0b424866c21bc5ca5bf4e07577e3c9209b7"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237015 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="636798b16a7f8c3cac6ee9996c87f0b424866c21bc5ca5bf4e07577e3c9209b7" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237027 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7b8f132e-3fda-4a38-8416-1055a62e7552","Type":"ContainerDied","Data":"8a7142f351f49dd941a579a875be8ea9073637a6c87204db987a41fea1ed6c9f"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237041 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59","Type":"ContainerDied","Data":"a3b7d465ce7932bc7a2c58b3a8d58a6d40a84dc47fe41a15fffd4d2e75e42570"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237057 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59","Type":"ContainerDied","Data":"c4b86bccde7fdb0dfc9ae6da6e63a7341e8e8c06f867a7589561600f2c94a74b"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237067 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4b86bccde7fdb0dfc9ae6da6e63a7341e8e8c06f867a7589561600f2c94a74b" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237076 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-77g9h"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237089 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"04a2b873-3034-4b9f-9daf-81db6749d45f","Type":"ContainerDied","Data":"987a8193c41368802ddc04d8a7e7ea647df691a013531235be3b2292760ae5a8"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237102 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="987a8193c41368802ddc04d8a7e7ea647df691a013531235be3b2292760ae5a8" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237116 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9f823122-da64-4ac4-aa14-96bc8f2f9c1c","Type":"ContainerDied","Data":"d56225670b178c36792afe8b5e40b521c40f21fe6ec9c747b20bf8f26e4c3640"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237133 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-db-create-77g9h"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237147 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75cbb987cb-dt6t6" event={"ID":"227e0b3d-d5ba-4265-a7b9-0419deb61603","Type":"ContainerDied","Data":"79c3e5ad5b8d05cf65c473b2c9291f7836e5f788b3ab861c6aaa651a1b04f94d"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237164 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bp9dq"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237183 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75cbb987cb-dt6t6" event={"ID":"227e0b3d-d5ba-4265-a7b9-0419deb61603","Type":"ContainerDied","Data":"2da37058507d7a534cf16b3f5044db2537edd629add3c64225613b95da29412a"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237212 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2da37058507d7a534cf16b3f5044db2537edd629add3c64225613b95da29412a" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237222 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"37125f43-8fb6-4625-a260-8d43cdbe167a","Type":"ContainerDied","Data":"38f6edcfeff92d06f3873899b55e6209e9e1cfe81f7f395e7ccbe1da1f8cea93"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237236 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38f6edcfeff92d06f3873899b55e6209e9e1cfe81f7f395e7ccbe1da1f8cea93" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237263 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"adb334a7-9a7f-4e20-9dc8-092b9372bb10","Type":"ContainerDied","Data":"0e98c349bd4bb46a725fbf926d7b62bd3253a7708fe210e97d08c5303e0c8116"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237282 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84e9-account-create-update-jm7gq"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237296 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"adb334a7-9a7f-4e20-9dc8-092b9372bb10","Type":"ContainerDied","Data":"665fe41f5487878e4b55c461ebbb750341c8f9d90fab5cb6adbbe8a57d0023b7"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237306 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="665fe41f5487878e4b55c461ebbb750341c8f9d90fab5cb6adbbe8a57d0023b7" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237319 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-84e9-account-create-update-jm7gq"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237333 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bp9dq" event={"ID":"14229729-9655-4484-a4d2-eabe80fc5abb","Type":"ContainerStarted","Data":"f2c65691782a398633edbb1461463cea7e96760216f29019a06addb1a77c9527"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237375 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9674718-110d-4241-a199-9663979defde","Type":"ContainerDied","Data":"93c9d2ce6e3251c442de8342769f7f316659735ad8ce35d3e69e890d9d23a3e3"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237401 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c9674718-110d-4241-a199-9663979defde","Type":"ContainerDied","Data":"a2db183acfe6ab165940694a43d75dcfd08108c8d4af23e41358e09eb336e30e"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237415 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237432 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9674718-110d-4241-a199-9663979defde","Type":"ContainerDied","Data":"90a4f2d6b6f7481844beacbb10433c3f569585443e447333d0efb309134aacf4"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237443 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9674718-110d-4241-a199-9663979defde","Type":"ContainerDied","Data":"587e368516c01890b1b0a081641da4fdf10374a3be177548851e1df43ac2b62d"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237455 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237487 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2784-account-create-update-ll7qt"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237482 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-76b55d6f4b-r5hn5" podUID="842a2e91-c7e4-4435-aa81-c1a888cf6a51" containerName="keystone-api" containerID="cri-o://1c96e5de79d52bdd02863443aa5dfe69974633d8662e47791a47e4f419e2a78c" gracePeriod=30 Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237500 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2eb23b1e-c7b1-465a-a91c-6042942e604a","Type":"ContainerDied","Data":"f21a8fdb04987cf84a6d8456e54051fb39ff64db73069ab1690dc9716b7d0579"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237637 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f21a8fdb04987cf84a6d8456e54051fb39ff64db73069ab1690dc9716b7d0579" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237661 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2784-account-create-update-ll7qt"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237690 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea1eec40-294d-4749-bdb2-678289eeb815","Type":"ContainerDied","Data":"360c56ebe46377c61cd806e3349a6a48d19f706ccb293962a2db56915941150d"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237708 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea1eec40-294d-4749-bdb2-678289eeb815","Type":"ContainerDied","Data":"d1be8831fb2d1f360911853538b8a8df8d3a293e14572192185ec4e8fd705714"} Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237722 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1be8831fb2d1f360911853538b8a8df8d3a293e14572192185ec4e8fd705714" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237733 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d4b0-account-create-update-k68cp"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237745 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-d4b0-account-create-update-k68cp"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237760 4945 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9ea6-account-create-update-w9nlx"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237770 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-9ea6-account-create-update-w9nlx"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237809 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-f5458d448-xj5lz"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237823 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-f5458d448-xj5lz"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237836 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d684-account-create-update-ww984"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237845 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d684-account-create-update-ww984"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.237880 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bp9dq"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.239343 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.245598 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.265803 4945 scope.go:117] "RemoveContainer" containerID="cbeb77761327d37cd9d6f433a669d8d50075b45d4a2017862cf0fb4848999998" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.312949 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.333478 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-654744c45f-2rmcg"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.333784 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-654744c45f-2rmcg"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.345397 4945 scope.go:117] "RemoveContainer" containerID="d25ef862eebe28203473e7e0b4d587e0913d21309620357f26a462668c27fa9d" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.351546 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.409933 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.424038 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d4586c964-cfb7b"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.434761 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.437556 4945 scope.go:117] "RemoveContainer" containerID="bef7fb091982995ae682a74e3650d7a2edfd9a14a59f6cae0ab48178e3d0612d" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.438689 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d4586c964-cfb7b"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.452602 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-internal-tls-certs\") pod \"2eb23b1e-c7b1-465a-a91c-6042942e604a\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.452707 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpwxs\" (UniqueName: \"kubernetes.io/projected/2eb23b1e-c7b1-465a-a91c-6042942e604a-kube-api-access-zpwxs\") pod \"2eb23b1e-c7b1-465a-a91c-6042942e604a\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.452725 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-combined-ca-bundle\") pod \"2eb23b1e-c7b1-465a-a91c-6042942e604a\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.452750 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndqg2\" (UniqueName: \"kubernetes.io/projected/37125f43-8fb6-4625-a260-8d43cdbe167a-kube-api-access-ndqg2\") pod \"37125f43-8fb6-4625-a260-8d43cdbe167a\" (UID: \"37125f43-8fb6-4625-a260-8d43cdbe167a\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.452770 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-config-data\") pod \"2eb23b1e-c7b1-465a-a91c-6042942e604a\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.452820 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37125f43-8fb6-4625-a260-8d43cdbe167a-config-data\") pod \"37125f43-8fb6-4625-a260-8d43cdbe167a\" (UID: \"37125f43-8fb6-4625-a260-8d43cdbe167a\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.452836 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37125f43-8fb6-4625-a260-8d43cdbe167a-combined-ca-bundle\") pod \"37125f43-8fb6-4625-a260-8d43cdbe167a\" (UID: \"37125f43-8fb6-4625-a260-8d43cdbe167a\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.452869 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"2eb23b1e-c7b1-465a-a91c-6042942e604a\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.452915 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-scripts\") pod \"2eb23b1e-c7b1-465a-a91c-6042942e604a\" (UID: 
\"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.452983 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eb23b1e-c7b1-465a-a91c-6042942e604a-logs\") pod \"2eb23b1e-c7b1-465a-a91c-6042942e604a\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.453065 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2eb23b1e-c7b1-465a-a91c-6042942e604a-httpd-run\") pod \"2eb23b1e-c7b1-465a-a91c-6042942e604a\" (UID: \"2eb23b1e-c7b1-465a-a91c-6042942e604a\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.453341 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zczzl\" (UniqueName: \"kubernetes.io/projected/c3534100-dfd5-461d-a955-592caefc5fa6-kube-api-access-zczzl\") pod \"keystone-4ee0-account-create-update-8b2hj\" (UID: \"c3534100-dfd5-461d-a955-592caefc5fa6\") " pod="openstack/keystone-4ee0-account-create-update-8b2hj" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.453399 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3534100-dfd5-461d-a955-592caefc5fa6-operator-scripts\") pod \"keystone-4ee0-account-create-update-8b2hj\" (UID: \"c3534100-dfd5-461d-a955-592caefc5fa6\") " pod="openstack/keystone-4ee0-account-create-update-8b2hj" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.464512 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-scripts" (OuterVolumeSpecName: "scripts") pod "2eb23b1e-c7b1-465a-a91c-6042942e604a" (UID: "2eb23b1e-c7b1-465a-a91c-6042942e604a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.465330 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eb23b1e-c7b1-465a-a91c-6042942e604a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2eb23b1e-c7b1-465a-a91c-6042942e604a" (UID: "2eb23b1e-c7b1-465a-a91c-6042942e604a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.465403 4945 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.465447 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-config-data podName:e920b84a-bd1b-4649-9cc0-e3b239d6a5b9 nodeName:}" failed. No retries permitted until 2026-01-08 23:42:19.465431098 +0000 UTC m=+1609.776590044 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-config-data") pod "rabbitmq-cell1-server-0" (UID: "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9") : configmap "rabbitmq-cell1-config-data" not found Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.465479 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eb23b1e-c7b1-465a-a91c-6042942e604a-logs" (OuterVolumeSpecName: "logs") pod "2eb23b1e-c7b1-465a-a91c-6042942e604a" (UID: "2eb23b1e-c7b1-465a-a91c-6042942e604a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.466660 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37125f43-8fb6-4625-a260-8d43cdbe167a-kube-api-access-ndqg2" (OuterVolumeSpecName: "kube-api-access-ndqg2") pod "37125f43-8fb6-4625-a260-8d43cdbe167a" (UID: "37125f43-8fb6-4625-a260-8d43cdbe167a"). InnerVolumeSpecName "kube-api-access-ndqg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.473472 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "2eb23b1e-c7b1-465a-a91c-6042942e604a" (UID: "2eb23b1e-c7b1-465a-a91c-6042942e604a"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.473764 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eb23b1e-c7b1-465a-a91c-6042942e604a-kube-api-access-zpwxs" (OuterVolumeSpecName: "kube-api-access-zpwxs") pod "2eb23b1e-c7b1-465a-a91c-6042942e604a" (UID: "2eb23b1e-c7b1-465a-a91c-6042942e604a"). InnerVolumeSpecName "kube-api-access-zpwxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.474037 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.498340 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xp9kx"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.498821 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.506706 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.507097 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xp9kx"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.519402 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.521978 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.523728 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2eb23b1e-c7b1-465a-a91c-6042942e604a" (UID: "2eb23b1e-c7b1-465a-a91c-6042942e604a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.527198 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.529094 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.543121 4945 scope.go:117] "RemoveContainer" containerID="d385c8c468d076b34c28f8eb6f3dfb6aeddb5e1ac200b92a6cb0997c736e762e" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.543943 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-zczzl operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-4ee0-account-create-update-8b2hj" podUID="c3534100-dfd5-461d-a955-592caefc5fa6" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.555042 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ee25-account-create-update-wfvwd"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.555981 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-httpd-run\") pod \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.556115 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-internal-tls-certs\") pod \"04a2b873-3034-4b9f-9daf-81db6749d45f\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.556193 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksm5b\" (UniqueName: \"kubernetes.io/projected/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-kube-api-access-ksm5b\") pod \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.556267 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-combined-ca-bundle\") pod \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\" (UID: \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.556363 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-config-data\") pod \"04a2b873-3034-4b9f-9daf-81db6749d45f\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.556470 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6nlx\" (UniqueName: 
\"kubernetes.io/projected/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-api-access-x6nlx\") pod \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\" (UID: \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.556539 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6sk7\" (UniqueName: \"kubernetes.io/projected/04a2b873-3034-4b9f-9daf-81db6749d45f-kube-api-access-n6sk7\") pod \"04a2b873-3034-4b9f-9daf-81db6749d45f\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.556613 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-combined-ca-bundle\") pod \"04a2b873-3034-4b9f-9daf-81db6749d45f\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.556770 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04a2b873-3034-4b9f-9daf-81db6749d45f-logs\") pod \"04a2b873-3034-4b9f-9daf-81db6749d45f\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.556878 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-scripts\") pod \"04a2b873-3034-4b9f-9daf-81db6749d45f\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.556959 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-config-data\") pod \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.557147 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-public-tls-certs\") pod \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.557230 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.560775 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-state-metrics-tls-certs\") pod \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\" (UID: \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.560908 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-state-metrics-tls-config\") pod \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\" (UID: \"3b4a4044-c9b6-49c9-98ed-446af4a3fe1f\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.561062 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-logs\") pod \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.561157 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04a2b873-3034-4b9f-9daf-81db6749d45f-etc-machine-id\") pod \"04a2b873-3034-4b9f-9daf-81db6749d45f\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.564263 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04a2b873-3034-4b9f-9daf-81db6749d45f-logs" (OuterVolumeSpecName: "logs") pod "04a2b873-3034-4b9f-9daf-81db6749d45f" (UID: "04a2b873-3034-4b9f-9daf-81db6749d45f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.568102 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.569322 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-combined-ca-bundle\") pod \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.570576 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9674718-110d-4241-a199-9663979defde-run-httpd\") pod \"c9674718-110d-4241-a199-9663979defde\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.570726 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-public-tls-certs\") pod \"04a2b873-3034-4b9f-9daf-81db6749d45f\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.570797 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4pdr\" (UniqueName: \"kubernetes.io/projected/ea1eec40-294d-4749-bdb2-678289eeb815-kube-api-access-v4pdr\") pod \"ea1eec40-294d-4749-bdb2-678289eeb815\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.571024 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-config-data-custom\") pod \"04a2b873-3034-4b9f-9daf-81db6749d45f\" (UID: \"04a2b873-3034-4b9f-9daf-81db6749d45f\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.571107 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-scripts\") pod \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\" (UID: \"bb7afdb8-52e2-4078-8a6e-5f1fea2acd59\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.571567 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-scripts\") pod \"c9674718-110d-4241-a199-9663979defde\" (UID: 
\"c9674718-110d-4241-a199-9663979defde\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.569739 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04a2b873-3034-4b9f-9daf-81db6749d45f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "04a2b873-3034-4b9f-9daf-81db6749d45f" (UID: "04a2b873-3034-4b9f-9daf-81db6749d45f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.571351 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37125f43-8fb6-4625-a260-8d43cdbe167a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37125f43-8fb6-4625-a260-8d43cdbe167a" (UID: "37125f43-8fb6-4625-a260-8d43cdbe167a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.573320 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9674718-110d-4241-a199-9663979defde-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c9674718-110d-4241-a199-9663979defde" (UID: "c9674718-110d-4241-a199-9663979defde"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.577023 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-api-access-x6nlx" (OuterVolumeSpecName: "kube-api-access-x6nlx") pod "3b4a4044-c9b6-49c9-98ed-446af4a3fe1f" (UID: "3b4a4044-c9b6-49c9-98ed-446af4a3fe1f"). InnerVolumeSpecName "kube-api-access-x6nlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.577284 4945 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.577393 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3534100-dfd5-461d-a955-592caefc5fa6-operator-scripts podName:c3534100-dfd5-461d-a955-592caefc5fa6 nodeName:}" failed. No retries permitted until 2026-01-08 23:42:12.077375032 +0000 UTC m=+1602.388533978 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c3534100-dfd5-461d-a955-592caefc5fa6-operator-scripts") pod "keystone-4ee0-account-create-update-8b2hj" (UID: "c3534100-dfd5-461d-a955-592caefc5fa6") : configmap "openstack-scripts" not found Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.583049 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ee25-account-create-update-wfvwd"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.583395 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-logs" (OuterVolumeSpecName: "logs") pod "bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" (UID: "bb7afdb8-52e2-4078-8a6e-5f1fea2acd59"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.583496 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" (UID: "bb7afdb8-52e2-4078-8a6e-5f1fea2acd59"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.571978 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3534100-dfd5-461d-a955-592caefc5fa6-operator-scripts\") pod \"keystone-4ee0-account-create-update-8b2hj\" (UID: \"c3534100-dfd5-461d-a955-592caefc5fa6\") " pod="openstack/keystone-4ee0-account-create-update-8b2hj" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.586910 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zczzl\" (UniqueName: \"kubernetes.io/projected/c3534100-dfd5-461d-a955-592caefc5fa6-kube-api-access-zczzl\") pod \"keystone-4ee0-account-create-update-8b2hj\" (UID: \"c3534100-dfd5-461d-a955-592caefc5fa6\") " pod="openstack/keystone-4ee0-account-create-update-8b2hj" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.587910 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpwxs\" (UniqueName: \"kubernetes.io/projected/2eb23b1e-c7b1-465a-a91c-6042942e604a-kube-api-access-zpwxs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.588020 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.588878 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndqg2\" (UniqueName: \"kubernetes.io/projected/37125f43-8fb6-4625-a260-8d43cdbe167a-kube-api-access-ndqg2\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.588953 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37125f43-8fb6-4625-a260-8d43cdbe167a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.589047 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.589128 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.589366 4945 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04a2b873-3034-4b9f-9daf-81db6749d45f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.589542 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.590104 4945 reconciler_common.go:293] 
"Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9674718-110d-4241-a199-9663979defde-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.590174 4945 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.590286 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2eb23b1e-c7b1-465a-a91c-6042942e604a-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.590344 4945 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2eb23b1e-c7b1-465a-a91c-6042942e604a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.590406 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6nlx\" (UniqueName: \"kubernetes.io/projected/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-api-access-x6nlx\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.590468 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04a2b873-3034-4b9f-9daf-81db6749d45f-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.591919 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-96f5cc787-k4zdr"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.593680 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04a2b873-3034-4b9f-9daf-81db6749d45f-kube-api-access-n6sk7" (OuterVolumeSpecName: "kube-api-access-n6sk7") pod "04a2b873-3034-4b9f-9daf-81db6749d45f" (UID: "04a2b873-3034-4b9f-9daf-81db6749d45f"). InnerVolumeSpecName "kube-api-access-n6sk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.593910 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-scripts" (OuterVolumeSpecName: "scripts") pod "04a2b873-3034-4b9f-9daf-81db6749d45f" (UID: "04a2b873-3034-4b9f-9daf-81db6749d45f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.596686 4945 projected.go:194] Error preparing data for projected volume kube-api-access-zczzl for pod openstack/keystone-4ee0-account-create-update-8b2hj: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.596755 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3534100-dfd5-461d-a955-592caefc5fa6-kube-api-access-zczzl podName:c3534100-dfd5-461d-a955-592caefc5fa6 nodeName:}" failed. No retries permitted until 2026-01-08 23:42:12.096736058 +0000 UTC m=+1602.407895004 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zczzl" (UniqueName: "kubernetes.io/projected/c3534100-dfd5-461d-a955-592caefc5fa6-kube-api-access-zczzl") pod "keystone-4ee0-account-create-update-8b2hj" (UID: "c3534100-dfd5-461d-a955-592caefc5fa6") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.602695 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-96f5cc787-k4zdr"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.603204 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-scripts" (OuterVolumeSpecName: "scripts") pod "bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" (UID: "bb7afdb8-52e2-4078-8a6e-5f1fea2acd59"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.607902 4945 scope.go:117] "RemoveContainer" containerID="5f876eb0a75e5d6e0f574148126dfbb45b4fa70619ddd67c99cc1342b57a8d1f" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.609366 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-kube-api-access-ksm5b" (OuterVolumeSpecName: "kube-api-access-ksm5b") pod "bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" (UID: "bb7afdb8-52e2-4078-8a6e-5f1fea2acd59"). InnerVolumeSpecName "kube-api-access-ksm5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.609825 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" (UID: "bb7afdb8-52e2-4078-8a6e-5f1fea2acd59"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.612125 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "04a2b873-3034-4b9f-9daf-81db6749d45f" (UID: "04a2b873-3034-4b9f-9daf-81db6749d45f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.630846 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-scripts" (OuterVolumeSpecName: "scripts") pod "c9674718-110d-4241-a199-9663979defde" (UID: "c9674718-110d-4241-a199-9663979defde"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.633330 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3bb0-account-create-update-4s5gt"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.642104 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea1eec40-294d-4749-bdb2-678289eeb815-kube-api-access-v4pdr" (OuterVolumeSpecName: "kube-api-access-v4pdr") pod "ea1eec40-294d-4749-bdb2-678289eeb815" (UID: "ea1eec40-294d-4749-bdb2-678289eeb815"). InnerVolumeSpecName "kube-api-access-v4pdr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.650195 4945 scope.go:117] "RemoveContainer" containerID="bb6df235b3481ee3764d466ecffc6041436a32a6b1b67f636a1ce3c33bbc51e5" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.656524 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3bb0-account-create-update-4s5gt"] Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.664545 4945 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.676265 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37125f43-8fb6-4625-a260-8d43cdbe167a-config-data" (OuterVolumeSpecName: "config-data") pod "37125f43-8fb6-4625-a260-8d43cdbe167a" (UID: "37125f43-8fb6-4625-a260-8d43cdbe167a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.679380 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2eb23b1e-c7b1-465a-a91c-6042942e604a" (UID: "2eb23b1e-c7b1-465a-a91c-6042942e604a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.689139 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b4a4044-c9b6-49c9-98ed-446af4a3fe1f" (UID: "3b4a4044-c9b6-49c9-98ed-446af4a3fe1f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.691810 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-config-data\") pod \"9f823122-da64-4ac4-aa14-96bc8f2f9c1c\" (UID: \"9f823122-da64-4ac4-aa14-96bc8f2f9c1c\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.691856 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqxsw\" (UniqueName: \"kubernetes.io/projected/227e0b3d-d5ba-4265-a7b9-0419deb61603-kube-api-access-nqxsw\") pod \"227e0b3d-d5ba-4265-a7b9-0419deb61603\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.691874 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-config-data\") pod \"c9674718-110d-4241-a199-9663979defde\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.691938 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8f132e-3fda-4a38-8416-1055a62e7552-config-data\") pod \"7b8f132e-3fda-4a38-8416-1055a62e7552\" (UID: \"7b8f132e-3fda-4a38-8416-1055a62e7552\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.691958 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ql4g\" (UniqueName: \"kubernetes.io/projected/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-kube-api-access-4ql4g\") pod \"9f823122-da64-4ac4-aa14-96bc8f2f9c1c\" (UID: \"9f823122-da64-4ac4-aa14-96bc8f2f9c1c\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.691983 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-combined-ca-bundle\") pod \"c9674718-110d-4241-a199-9663979defde\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.703198 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="17371a82-14e3-4830-b99f-a2b46b4f4366" containerName="galera" containerID="cri-o://a6d343f7688e540d51f500254defa94f4632dc20f9dbe324ab4669bd7f25c69e" gracePeriod=30 Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.705259 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea1eec40-294d-4749-bdb2-678289eeb815-logs\") pod \"ea1eec40-294d-4749-bdb2-678289eeb815\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.705398 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adb334a7-9a7f-4e20-9dc8-092b9372bb10-logs\") pod \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.705431 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfhbr\" (UniqueName: \"kubernetes.io/projected/7b8f132e-3fda-4a38-8416-1055a62e7552-kube-api-access-lfhbr\") pod \"7b8f132e-3fda-4a38-8416-1055a62e7552\" (UID: 
\"7b8f132e-3fda-4a38-8416-1055a62e7552\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.705650 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-sg-core-conf-yaml\") pod \"c9674718-110d-4241-a199-9663979defde\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.705794 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-combined-ca-bundle\") pod \"9f823122-da64-4ac4-aa14-96bc8f2f9c1c\" (UID: \"9f823122-da64-4ac4-aa14-96bc8f2f9c1c\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.706360 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8f132e-3fda-4a38-8416-1055a62e7552-combined-ca-bundle\") pod \"7b8f132e-3fda-4a38-8416-1055a62e7552\" (UID: \"7b8f132e-3fda-4a38-8416-1055a62e7552\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.706425 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-combined-ca-bundle\") pod \"ea1eec40-294d-4749-bdb2-678289eeb815\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.707139 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/227e0b3d-d5ba-4265-a7b9-0419deb61603-logs\") pod \"227e0b3d-d5ba-4265-a7b9-0419deb61603\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.707173 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-nova-metadata-tls-certs\") pod \"ea1eec40-294d-4749-bdb2-678289eeb815\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.707213 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-ceilometer-tls-certs\") pod \"c9674718-110d-4241-a199-9663979defde\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.707231 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-combined-ca-bundle\") pod \"227e0b3d-d5ba-4265-a7b9-0419deb61603\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.707282 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7ng2\" (UniqueName: \"kubernetes.io/projected/c9674718-110d-4241-a199-9663979defde-kube-api-access-m7ng2\") pod \"c9674718-110d-4241-a199-9663979defde\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.707315 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-public-tls-certs\") pod 
\"adb334a7-9a7f-4e20-9dc8-092b9372bb10\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.707333 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmq2x\" (UniqueName: \"kubernetes.io/projected/adb334a7-9a7f-4e20-9dc8-092b9372bb10-kube-api-access-zmq2x\") pod \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.707374 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-config-data\") pod \"227e0b3d-d5ba-4265-a7b9-0419deb61603\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.707405 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-config-data\") pod \"ea1eec40-294d-4749-bdb2-678289eeb815\" (UID: \"ea1eec40-294d-4749-bdb2-678289eeb815\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.707561 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea1eec40-294d-4749-bdb2-678289eeb815-logs" (OuterVolumeSpecName: "logs") pod "ea1eec40-294d-4749-bdb2-678289eeb815" (UID: "ea1eec40-294d-4749-bdb2-678289eeb815"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.708026 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adb334a7-9a7f-4e20-9dc8-092b9372bb10-logs" (OuterVolumeSpecName: "logs") pod "adb334a7-9a7f-4e20-9dc8-092b9372bb10" (UID: "adb334a7-9a7f-4e20-9dc8-092b9372bb10"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.707439 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-public-tls-certs\") pod \"227e0b3d-d5ba-4265-a7b9-0419deb61603\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.708506 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9674718-110d-4241-a199-9663979defde-log-httpd\") pod \"c9674718-110d-4241-a199-9663979defde\" (UID: \"c9674718-110d-4241-a199-9663979defde\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.708548 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-combined-ca-bundle\") pod \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.709695 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-internal-tls-certs\") pod \"227e0b3d-d5ba-4265-a7b9-0419deb61603\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.709728 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-internal-tls-certs\") pod \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.709801 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-config-data-custom\") pod \"227e0b3d-d5ba-4265-a7b9-0419deb61603\" (UID: \"227e0b3d-d5ba-4265-a7b9-0419deb61603\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.709823 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-config-data\") pod \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\" (UID: \"adb334a7-9a7f-4e20-9dc8-092b9372bb10\") " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.718114 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/227e0b3d-d5ba-4265-a7b9-0419deb61603-kube-api-access-nqxsw" (OuterVolumeSpecName: "kube-api-access-nqxsw") pod "227e0b3d-d5ba-4265-a7b9-0419deb61603" (UID: "227e0b3d-d5ba-4265-a7b9-0419deb61603"). InnerVolumeSpecName "kube-api-access-nqxsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.719780 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9674718-110d-4241-a199-9663979defde-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c9674718-110d-4241-a199-9663979defde" (UID: "c9674718-110d-4241-a199-9663979defde"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.720069 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4pdr\" (UniqueName: \"kubernetes.io/projected/ea1eec40-294d-4749-bdb2-678289eeb815-kube-api-access-v4pdr\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.720171 4945 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.720183 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.720195 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.720205 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksm5b\" (UniqueName: \"kubernetes.io/projected/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-kube-api-access-ksm5b\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.720217 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.720228 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6sk7\" (UniqueName: \"kubernetes.io/projected/04a2b873-3034-4b9f-9daf-81db6749d45f-kube-api-access-n6sk7\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.720240 4945 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.720250 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.720261 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqxsw\" (UniqueName: \"kubernetes.io/projected/227e0b3d-d5ba-4265-a7b9-0419deb61603-kube-api-access-nqxsw\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.720299 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.720316 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea1eec40-294d-4749-bdb2-678289eeb815-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.720330 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adb334a7-9a7f-4e20-9dc8-092b9372bb10-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 
23:42:11.720345 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37125f43-8fb6-4625-a260-8d43cdbe167a-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.720355 4945 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.727206 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "3b4a4044-c9b6-49c9-98ed-446af4a3fe1f" (UID: "3b4a4044-c9b6-49c9-98ed-446af4a3fe1f"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.731353 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/227e0b3d-d5ba-4265-a7b9-0419deb61603-logs" (OuterVolumeSpecName: "logs") pod "227e0b3d-d5ba-4265-a7b9-0419deb61603" (UID: "227e0b3d-d5ba-4265-a7b9-0419deb61603"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.732872 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb334a7-9a7f-4e20-9dc8-092b9372bb10-kube-api-access-zmq2x" (OuterVolumeSpecName: "kube-api-access-zmq2x") pod "adb334a7-9a7f-4e20-9dc8-092b9372bb10" (UID: "adb334a7-9a7f-4e20-9dc8-092b9372bb10"). InnerVolumeSpecName "kube-api-access-zmq2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.733017 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" (UID: "bb7afdb8-52e2-4078-8a6e-5f1fea2acd59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.748581 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9674718-110d-4241-a199-9663979defde-kube-api-access-m7ng2" (OuterVolumeSpecName: "kube-api-access-m7ng2") pod "c9674718-110d-4241-a199-9663979defde" (UID: "c9674718-110d-4241-a199-9663979defde"). InnerVolumeSpecName "kube-api-access-m7ng2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.763164 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-kube-api-access-4ql4g" (OuterVolumeSpecName: "kube-api-access-4ql4g") pod "9f823122-da64-4ac4-aa14-96bc8f2f9c1c" (UID: "9f823122-da64-4ac4-aa14-96bc8f2f9c1c"). InnerVolumeSpecName "kube-api-access-4ql4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.768651 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "04a2b873-3034-4b9f-9daf-81db6749d45f" (UID: "04a2b873-3034-4b9f-9daf-81db6749d45f"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.778181 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b8f132e-3fda-4a38-8416-1055a62e7552-kube-api-access-lfhbr" (OuterVolumeSpecName: "kube-api-access-lfhbr") pod "7b8f132e-3fda-4a38-8416-1055a62e7552" (UID: "7b8f132e-3fda-4a38-8416-1055a62e7552"). InnerVolumeSpecName "kube-api-access-lfhbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.786756 4945 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.824728 4945 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.824763 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ql4g\" (UniqueName: \"kubernetes.io/projected/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-kube-api-access-4ql4g\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.824776 4945 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.824793 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfhbr\" (UniqueName: \"kubernetes.io/projected/7b8f132e-3fda-4a38-8416-1055a62e7552-kube-api-access-lfhbr\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.824806 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.824816 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/227e0b3d-d5ba-4265-a7b9-0419deb61603-logs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.824826 4945 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.824840 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7ng2\" (UniqueName: \"kubernetes.io/projected/c9674718-110d-4241-a199-9663979defde-kube-api-access-m7ng2\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.824850 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmq2x\" (UniqueName: \"kubernetes.io/projected/adb334a7-9a7f-4e20-9dc8-092b9372bb10-kube-api-access-zmq2x\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.824860 4945 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9674718-110d-4241-a199-9663979defde-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 
23:42:11.824958 4945 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.825031 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14229729-9655-4484-a4d2-eabe80fc5abb-operator-scripts podName:14229729-9655-4484-a4d2-eabe80fc5abb nodeName:}" failed. No retries permitted until 2026-01-08 23:42:12.825009972 +0000 UTC m=+1603.136168918 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/14229729-9655-4484-a4d2-eabe80fc5abb-operator-scripts") pod "root-account-create-update-bp9dq" (UID: "14229729-9655-4484-a4d2-eabe80fc5abb") : configmap "openstack-scripts" not found Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.825762 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "227e0b3d-d5ba-4265-a7b9-0419deb61603" (UID: "227e0b3d-d5ba-4265-a7b9-0419deb61603"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.867593 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04a2b873-3034-4b9f-9daf-81db6749d45f" (UID: "04a2b873-3034-4b9f-9daf-81db6749d45f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.867749 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" (UID: "bb7afdb8-52e2-4078-8a6e-5f1fea2acd59"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.874611 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8f132e-3fda-4a38-8416-1055a62e7552-config-data" (OuterVolumeSpecName: "config-data") pod "7b8f132e-3fda-4a38-8416-1055a62e7552" (UID: "7b8f132e-3fda-4a38-8416-1055a62e7552"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.915984 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "227e0b3d-d5ba-4265-a7b9-0419deb61603" (UID: "227e0b3d-d5ba-4265-a7b9-0419deb61603"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.928050 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.928085 4945 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.928094 4945 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.928103 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8f132e-3fda-4a38-8416-1055a62e7552-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.928112 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.928183 4945 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-config: configmap "ovnnorthd-config" not found Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.928230 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-config podName:eefc7456-a6c7-4442-aa3a-370a1f9b01fa nodeName:}" failed. No retries permitted until 2026-01-08 23:42:19.928216176 +0000 UTC m=+1610.239375112 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-config") pod "ovn-northd-0" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa") : configmap "ovnnorthd-config" not found Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.928731 4945 configmap.go:193] Couldn't get configMap openstack/ovnnorthd-scripts: configmap "ovnnorthd-scripts" not found Jan 08 23:42:11 crc kubenswrapper[4945]: E0108 23:42:11.929229 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-scripts podName:eefc7456-a6c7-4442-aa3a-370a1f9b01fa nodeName:}" failed. No retries permitted until 2026-01-08 23:42:19.92920029 +0000 UTC m=+1610.240359236 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-scripts") pod "ovn-northd-0" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa") : configmap "ovnnorthd-scripts" not found Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.941592 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea1eec40-294d-4749-bdb2-678289eeb815" (UID: "ea1eec40-294d-4749-bdb2-678289eeb815"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.958685 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-config-data" (OuterVolumeSpecName: "config-data") pod "adb334a7-9a7f-4e20-9dc8-092b9372bb10" (UID: "adb334a7-9a7f-4e20-9dc8-092b9372bb10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.972271 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c9674718-110d-4241-a199-9663979defde" (UID: "c9674718-110d-4241-a199-9663979defde"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:11 crc kubenswrapper[4945]: I0108 23:42:11.994333 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "3b4a4044-c9b6-49c9-98ed-446af4a3fe1f" (UID: "3b4a4044-c9b6-49c9-98ed-446af4a3fe1f"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.014311 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="046bb87c-2b1c-46eb-9db3-78270701ec34" path="/var/lib/kubelet/pods/046bb87c-2b1c-46eb-9db3-78270701ec34/volumes" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.015391 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e74d9ee-5e3b-4205-8ac4-1729d495861b" path="/var/lib/kubelet/pods/0e74d9ee-5e3b-4205-8ac4-1729d495861b/volumes" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.015783 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="179a20bc-72ca-4f86-8cd0-5c6df9211839" path="/var/lib/kubelet/pods/179a20bc-72ca-4f86-8cd0-5c6df9211839/volumes" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.016231 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf00aca-6357-47bb-8d88-a931518afa75" path="/var/lib/kubelet/pods/2cf00aca-6357-47bb-8d88-a931518afa75/volumes" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.017442 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3773ecb5-8e59-462c-9fc6-323c779b2544" path="/var/lib/kubelet/pods/3773ecb5-8e59-462c-9fc6-323c779b2544/volumes" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.018088 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c1913ce-ea65-4745-baf8-621191c50b55" path="/var/lib/kubelet/pods/3c1913ce-ea65-4745-baf8-621191c50b55/volumes" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.018633 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ede73cf-0521-442e-8f01-b63d8d9b4725" path="/var/lib/kubelet/pods/5ede73cf-0521-442e-8f01-b63d8d9b4725/volumes" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.019873 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dc39aab-86d6-45f6-b565-3da5375a1983" path="/var/lib/kubelet/pods/6dc39aab-86d6-45f6-b565-3da5375a1983/volumes" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.020446 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8e6d9797-d686-44c0-918a-f17bac13b874" path="/var/lib/kubelet/pods/8e6d9797-d686-44c0-918a-f17bac13b874/volumes" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.020776 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6e1114b-d949-45e8-a640-d448d52ea983" path="/var/lib/kubelet/pods/a6e1114b-d949-45e8-a640-d448d52ea983/volumes" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.021153 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b55a1df4-3a27-411d-b6ff-8c72c20f4d05" path="/var/lib/kubelet/pods/b55a1df4-3a27-411d-b6ff-8c72c20f4d05/volumes" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.022068 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4fe48ad-f532-4c60-b88b-95a894d0b519" path="/var/lib/kubelet/pods/d4fe48ad-f532-4c60-b88b-95a894d0b519/volumes" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.024025 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d90095c9-0666-4a15-878f-4b62280c3d00" path="/var/lib/kubelet/pods/d90095c9-0666-4a15-878f-4b62280c3d00/volumes" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.024355 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-config-data" (OuterVolumeSpecName: "config-data") pod "ea1eec40-294d-4749-bdb2-678289eeb815" (UID: "ea1eec40-294d-4749-bdb2-678289eeb815"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.024371 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da59c88c-d7a5-4fb4-8b32-4c73be685b4f" path="/var/lib/kubelet/pods/da59c88c-d7a5-4fb4-8b32-4c73be685b4f/volumes" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.025311 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbe6e840-6658-49dd-b547-c58c4bc1479a" path="/var/lib/kubelet/pods/dbe6e840-6658-49dd-b547-c58c4bc1479a/volumes" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.025904 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e500c7f0-c056-45f2-816d-d904fd8e18cf" path="/var/lib/kubelet/pods/e500c7f0-c056-45f2-816d-d904fd8e18cf/volumes" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.026926 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda15927-fac6-455b-8615-24f8f535c80a" path="/var/lib/kubelet/pods/eda15927-fac6-455b-8615-24f8f535c80a/volumes" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.029063 4945 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.029084 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.029096 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.029107 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.029116 4945 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.039676 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f823122-da64-4ac4-aa14-96bc8f2f9c1c" (UID: "9f823122-da64-4ac4-aa14-96bc8f2f9c1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.042378 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-config-data" (OuterVolumeSpecName: "config-data") pod "2eb23b1e-c7b1-465a-a91c-6042942e604a" (UID: "2eb23b1e-c7b1-465a-a91c-6042942e604a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.050727 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adb334a7-9a7f-4e20-9dc8-092b9372bb10" (UID: "adb334a7-9a7f-4e20-9dc8-092b9372bb10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.054424 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8f132e-3fda-4a38-8416-1055a62e7552-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b8f132e-3fda-4a38-8416-1055a62e7552" (UID: "7b8f132e-3fda-4a38-8416-1055a62e7552"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.084959 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c9674718-110d-4241-a199-9663979defde" (UID: "c9674718-110d-4241-a199-9663979defde"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.085048 4945 scope.go:117] "RemoveContainer" containerID="b249e55df18b74f9795153d23e3a9288fae557e2d8c16aad179207bff463acb4" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.085310 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-config-data" (OuterVolumeSpecName: "config-data") pod "227e0b3d-d5ba-4265-a7b9-0419deb61603" (UID: "227e0b3d-d5ba-4265-a7b9-0419deb61603"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.104983 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "227e0b3d-d5ba-4265-a7b9-0419deb61603" (UID: "227e0b3d-d5ba-4265-a7b9-0419deb61603"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.130336 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zczzl\" (UniqueName: \"kubernetes.io/projected/c3534100-dfd5-461d-a955-592caefc5fa6-kube-api-access-zczzl\") pod \"keystone-4ee0-account-create-update-8b2hj\" (UID: \"c3534100-dfd5-461d-a955-592caefc5fa6\") " pod="openstack/keystone-4ee0-account-create-update-8b2hj" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.130698 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3534100-dfd5-461d-a955-592caefc5fa6-operator-scripts\") pod \"keystone-4ee0-account-create-update-8b2hj\" (UID: \"c3534100-dfd5-461d-a955-592caefc5fa6\") " pod="openstack/keystone-4ee0-account-create-update-8b2hj" Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.131757 4945 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.131844 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3534100-dfd5-461d-a955-592caefc5fa6-operator-scripts podName:c3534100-dfd5-461d-a955-592caefc5fa6 nodeName:}" failed. No retries permitted until 2026-01-08 23:42:13.131818895 +0000 UTC m=+1603.442977921 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c3534100-dfd5-461d-a955-592caefc5fa6-operator-scripts") pod "keystone-4ee0-account-create-update-8b2hj" (UID: "c3534100-dfd5-461d-a955-592caefc5fa6") : configmap "openstack-scripts" not found Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.138332 4945 scope.go:117] "RemoveContainer" containerID="bb6df235b3481ee3764d466ecffc6041436a32a6b1b67f636a1ce3c33bbc51e5" Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.138412 4945 projected.go:194] Error preparing data for projected volume kube-api-access-zczzl for pod openstack/keystone-4ee0-account-create-update-8b2hj: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.138873 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb6df235b3481ee3764d466ecffc6041436a32a6b1b67f636a1ce3c33bbc51e5\": container with ID starting with bb6df235b3481ee3764d466ecffc6041436a32a6b1b67f636a1ce3c33bbc51e5 not found: ID does not exist" containerID="bb6df235b3481ee3764d466ecffc6041436a32a6b1b67f636a1ce3c33bbc51e5" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.138920 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb6df235b3481ee3764d466ecffc6041436a32a6b1b67f636a1ce3c33bbc51e5"} err="failed to get container status \"bb6df235b3481ee3764d466ecffc6041436a32a6b1b67f636a1ce3c33bbc51e5\": rpc error: code = NotFound desc = could not find container \"bb6df235b3481ee3764d466ecffc6041436a32a6b1b67f636a1ce3c33bbc51e5\": container with ID starting with bb6df235b3481ee3764d466ecffc6041436a32a6b1b67f636a1ce3c33bbc51e5 not found: ID does not exist" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.138949 4945 scope.go:117] "RemoveContainer" containerID="b249e55df18b74f9795153d23e3a9288fae557e2d8c16aad179207bff463acb4" Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.139132 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3534100-dfd5-461d-a955-592caefc5fa6-kube-api-access-zczzl podName:c3534100-dfd5-461d-a955-592caefc5fa6 nodeName:}" failed. No retries permitted until 2026-01-08 23:42:13.139108271 +0000 UTC m=+1603.450267217 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zczzl" (UniqueName: "kubernetes.io/projected/c3534100-dfd5-461d-a955-592caefc5fa6-kube-api-access-zczzl") pod "keystone-4ee0-account-create-update-8b2hj" (UID: "c3534100-dfd5-461d-a955-592caefc5fa6") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.140329 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b249e55df18b74f9795153d23e3a9288fae557e2d8c16aad179207bff463acb4\": container with ID starting with b249e55df18b74f9795153d23e3a9288fae557e2d8c16aad179207bff463acb4 not found: ID does not exist" containerID="b249e55df18b74f9795153d23e3a9288fae557e2d8c16aad179207bff463acb4" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.140362 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b249e55df18b74f9795153d23e3a9288fae557e2d8c16aad179207bff463acb4"} err="failed to get container status \"b249e55df18b74f9795153d23e3a9288fae557e2d8c16aad179207bff463acb4\": rpc error: code = NotFound desc = could not find container \"b249e55df18b74f9795153d23e3a9288fae557e2d8c16aad179207bff463acb4\": container with ID starting with b249e55df18b74f9795153d23e3a9288fae557e2d8c16aad179207bff463acb4 not found: ID does not exist" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.141081 4945 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.142130 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.142143 4945 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.142154 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.142165 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eb23b1e-c7b1-465a-a91c-6042942e604a-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.142193 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.142261 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8f132e-3fda-4a38-8416-1055a62e7552-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.161618 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod 
"adb334a7-9a7f-4e20-9dc8-092b9372bb10" (UID: "adb334a7-9a7f-4e20-9dc8-092b9372bb10"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.163812 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-config-data" (OuterVolumeSpecName: "config-data") pod "bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" (UID: "bb7afdb8-52e2-4078-8a6e-5f1fea2acd59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.164407 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "04a2b873-3034-4b9f-9daf-81db6749d45f" (UID: "04a2b873-3034-4b9f-9daf-81db6749d45f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.170296 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "adb334a7-9a7f-4e20-9dc8-092b9372bb10" (UID: "adb334a7-9a7f-4e20-9dc8-092b9372bb10"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.180829 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-config-data" (OuterVolumeSpecName: "config-data") pod "9f823122-da64-4ac4-aa14-96bc8f2f9c1c" (UID: "9f823122-da64-4ac4-aa14-96bc8f2f9c1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.182738 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-config-data" (OuterVolumeSpecName: "config-data") pod "04a2b873-3034-4b9f-9daf-81db6749d45f" (UID: "04a2b873-3034-4b9f-9daf-81db6749d45f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.190175 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9674718-110d-4241-a199-9663979defde" (UID: "c9674718-110d-4241-a199-9663979defde"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.210213 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ea1eec40-294d-4749-bdb2-678289eeb815" (UID: "ea1eec40-294d-4749-bdb2-678289eeb815"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.221364 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-config-data" (OuterVolumeSpecName: "config-data") pod "c9674718-110d-4241-a199-9663979defde" (UID: "c9674718-110d-4241-a199-9663979defde"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.229371 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.229433 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9674718-110d-4241-a199-9663979defde","Type":"ContainerDied","Data":"9158d97ce2ecd82a7716775d25c8d38a54ff8ef920891073ddf84d4c777c8621"} Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.229645 4945 scope.go:117] "RemoveContainer" containerID="93c9d2ce6e3251c442de8342769f7f316659735ad8ce35d3e69e890d9d23a3e3" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.232952 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "227e0b3d-d5ba-4265-a7b9-0419deb61603" (UID: "227e0b3d-d5ba-4265-a7b9-0419deb61603"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.237953 4945 generic.go:334] "Generic (PLEG): container finished" podID="d9574582-49aa-48ec-8b43-bc55ed78a3d1" containerID="01e6f6d29fbc37e10b337041462136b759e49d0ec4a3d75774556109c7c74797" exitCode=0 Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.238031 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d9574582-49aa-48ec-8b43-bc55ed78a3d1","Type":"ContainerDied","Data":"01e6f6d29fbc37e10b337041462136b759e49d0ec4a3d75774556109c7c74797"} Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.240632 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9f823122-da64-4ac4-aa14-96bc8f2f9c1c","Type":"ContainerDied","Data":"87e747653007d44d2f91b54c2ba7bf74fb2bf0058b8a8ad219d5c4a5a3c771ee"} Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.240742 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.246627 4945 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.246650 4945 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/227e0b3d-d5ba-4265-a7b9-0419deb61603-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.246660 4945 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adb334a7-9a7f-4e20-9dc8-092b9372bb10-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.246669 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.246678 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f823122-da64-4ac4-aa14-96bc8f2f9c1c-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.246686 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.246695 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9674718-110d-4241-a199-9663979defde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.246704 4945 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.246713 4945 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea1eec40-294d-4749-bdb2-678289eeb815-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.246722 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a2b873-3034-4b9f-9daf-81db6749d45f-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.252666 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7b8f132e-3fda-4a38-8416-1055a62e7552","Type":"ContainerDied","Data":"6c2771676fce4f06fa6806009437dbcca0260587466affbf887c31ab79e1fc31"} Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.252835 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.254516 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.254653 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.254694 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.254719 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4ee0-account-create-update-8b2hj" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.254767 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.255259 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75cbb987cb-dt6t6" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.255308 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.255341 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.265785 4945 scope.go:117] "RemoveContainer" containerID="a2db183acfe6ab165940694a43d75dcfd08108c8d4af23e41358e09eb336e30e" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.272140 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.282178 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.296869 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.460694 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.466444 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.472199 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.472281 4945 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container 
process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hfhkg" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovsdb-server" Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.474251 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.479608 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.483779 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.483820 4945 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hfhkg" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovs-vswitchd" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.495440 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4ee0-account-create-update-8b2hj" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.514881 4945 scope.go:117] "RemoveContainer" containerID="90a4f2d6b6f7481844beacbb10433c3f569585443e447333d0efb309134aacf4" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.523164 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.550149 4945 scope.go:117] "RemoveContainer" containerID="587e368516c01890b1b0a081641da4fdf10374a3be177548851e1df43ac2b62d" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.592243 4945 scope.go:117] "RemoveContainer" containerID="d56225670b178c36792afe8b5e40b521c40f21fe6ec9c747b20bf8f26e4c3640" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.608375 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.620650 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.627474 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.631121 4945 scope.go:117] "RemoveContainer" containerID="8a7142f351f49dd941a579a875be8ea9073637a6c87204db987a41fea1ed6c9f" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.647322 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.662579 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d9574582-49aa-48ec-8b43-bc55ed78a3d1-config-data\") pod \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.662647 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9574582-49aa-48ec-8b43-bc55ed78a3d1-memcached-tls-certs\") pod \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.662672 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9574582-49aa-48ec-8b43-bc55ed78a3d1-combined-ca-bundle\") pod \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.662818 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7smc\" (UniqueName: \"kubernetes.io/projected/d9574582-49aa-48ec-8b43-bc55ed78a3d1-kube-api-access-m7smc\") pod \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.662859 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9574582-49aa-48ec-8b43-bc55ed78a3d1-kolla-config\") pod \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\" (UID: \"d9574582-49aa-48ec-8b43-bc55ed78a3d1\") " Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.663349 4945 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.663397 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-config-data podName:71eb40d2-e481-445d-99ea-948b918b862d nodeName:}" failed. 
No retries permitted until 2026-01-08 23:42:20.663383648 +0000 UTC m=+1610.974542594 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-config-data") pod "rabbitmq-server-0" (UID: "71eb40d2-e481-445d-99ea-948b918b862d") : configmap "rabbitmq-config-data" not found Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.664929 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9574582-49aa-48ec-8b43-bc55ed78a3d1-config-data" (OuterVolumeSpecName: "config-data") pod "d9574582-49aa-48ec-8b43-bc55ed78a3d1" (UID: "d9574582-49aa-48ec-8b43-bc55ed78a3d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.669931 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9574582-49aa-48ec-8b43-bc55ed78a3d1-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "d9574582-49aa-48ec-8b43-bc55ed78a3d1" (UID: "d9574582-49aa-48ec-8b43-bc55ed78a3d1"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.677592 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.686165 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9574582-49aa-48ec-8b43-bc55ed78a3d1-kube-api-access-m7smc" (OuterVolumeSpecName: "kube-api-access-m7smc") pod "d9574582-49aa-48ec-8b43-bc55ed78a3d1" (UID: "d9574582-49aa-48ec-8b43-bc55ed78a3d1"). InnerVolumeSpecName "kube-api-access-m7smc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.698041 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.704953 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9574582-49aa-48ec-8b43-bc55ed78a3d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9574582-49aa-48ec-8b43-bc55ed78a3d1" (UID: "d9574582-49aa-48ec-8b43-bc55ed78a3d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.709563 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-75cbb987cb-dt6t6"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.731331 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-75cbb987cb-dt6t6"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.736469 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.744510 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9574582-49aa-48ec-8b43-bc55ed78a3d1-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "d9574582-49aa-48ec-8b43-bc55ed78a3d1" (UID: "d9574582-49aa-48ec-8b43-bc55ed78a3d1"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.748259 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.755211 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.760690 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.765166 4945 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d9574582-49aa-48ec-8b43-bc55ed78a3d1-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.765210 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d9574582-49aa-48ec-8b43-bc55ed78a3d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.765224 4945 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9574582-49aa-48ec-8b43-bc55ed78a3d1-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.765237 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9574582-49aa-48ec-8b43-bc55ed78a3d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.765252 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7smc\" (UniqueName: \"kubernetes.io/projected/d9574582-49aa-48ec-8b43-bc55ed78a3d1-kube-api-access-m7smc\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.767932 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.773519 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.780719 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.781313 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fr87r" podUID="6c4f1760-d296-46a1-9ec8-cb64e543897c" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.786212 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.790500 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.794981 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.801650 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.807483 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.817289 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bp9dq" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.842472 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fr87r" podUID="6c4f1760-d296-46a1-9ec8-cb64e543897c" containerName="ovn-controller" probeResult="failure" output=< Jan 08 23:42:12 crc kubenswrapper[4945]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 08 23:42:12 crc kubenswrapper[4945]: > Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.867561 4945 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 08 23:42:12 crc kubenswrapper[4945]: E0108 23:42:12.867678 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14229729-9655-4484-a4d2-eabe80fc5abb-operator-scripts podName:14229729-9655-4484-a4d2-eabe80fc5abb nodeName:}" failed. No retries permitted until 2026-01-08 23:42:14.867641874 +0000 UTC m=+1605.178800820 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/14229729-9655-4484-a4d2-eabe80fc5abb-operator-scripts") pod "root-account-create-update-bp9dq" (UID: "14229729-9655-4484-a4d2-eabe80fc5abb") : configmap "openstack-scripts" not found Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.967569 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14229729-9655-4484-a4d2-eabe80fc5abb-operator-scripts\") pod \"14229729-9655-4484-a4d2-eabe80fc5abb\" (UID: \"14229729-9655-4484-a4d2-eabe80fc5abb\") " Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.967715 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzpcj\" (UniqueName: \"kubernetes.io/projected/14229729-9655-4484-a4d2-eabe80fc5abb-kube-api-access-rzpcj\") pod \"14229729-9655-4484-a4d2-eabe80fc5abb\" (UID: \"14229729-9655-4484-a4d2-eabe80fc5abb\") " Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.968138 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14229729-9655-4484-a4d2-eabe80fc5abb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14229729-9655-4484-a4d2-eabe80fc5abb" (UID: "14229729-9655-4484-a4d2-eabe80fc5abb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.968251 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14229729-9655-4484-a4d2-eabe80fc5abb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:12 crc kubenswrapper[4945]: I0108 23:42:12.970918 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14229729-9655-4484-a4d2-eabe80fc5abb-kube-api-access-rzpcj" (OuterVolumeSpecName: "kube-api-access-rzpcj") pod "14229729-9655-4484-a4d2-eabe80fc5abb" (UID: "14229729-9655-4484-a4d2-eabe80fc5abb"). InnerVolumeSpecName "kube-api-access-rzpcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.050356 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="5ede73cf-0521-442e-8f01-b63d8d9b4725" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.197:6080/vnc_lite.html\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.070013 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzpcj\" (UniqueName: \"kubernetes.io/projected/14229729-9655-4484-a4d2-eabe80fc5abb-kube-api-access-rzpcj\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.171925 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3534100-dfd5-461d-a955-592caefc5fa6-operator-scripts\") pod \"keystone-4ee0-account-create-update-8b2hj\" (UID: \"c3534100-dfd5-461d-a955-592caefc5fa6\") " pod="openstack/keystone-4ee0-account-create-update-8b2hj" Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.172610 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zczzl\" (UniqueName: \"kubernetes.io/projected/c3534100-dfd5-461d-a955-592caefc5fa6-kube-api-access-zczzl\") pod \"keystone-4ee0-account-create-update-8b2hj\" (UID: \"c3534100-dfd5-461d-a955-592caefc5fa6\") " pod="openstack/keystone-4ee0-account-create-update-8b2hj" Jan 08 23:42:13 crc kubenswrapper[4945]: E0108 23:42:13.173342 4945 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 08 23:42:13 crc kubenswrapper[4945]: E0108 23:42:13.173406 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3534100-dfd5-461d-a955-592caefc5fa6-operator-scripts podName:c3534100-dfd5-461d-a955-592caefc5fa6 nodeName:}" failed. No retries permitted until 2026-01-08 23:42:15.173370912 +0000 UTC m=+1605.484529858 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c3534100-dfd5-461d-a955-592caefc5fa6-operator-scripts") pod "keystone-4ee0-account-create-update-8b2hj" (UID: "c3534100-dfd5-461d-a955-592caefc5fa6") : configmap "openstack-scripts" not found Jan 08 23:42:13 crc kubenswrapper[4945]: E0108 23:42:13.178169 4945 projected.go:194] Error preparing data for projected volume kube-api-access-zczzl for pod openstack/keystone-4ee0-account-create-update-8b2hj: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 08 23:42:13 crc kubenswrapper[4945]: E0108 23:42:13.178752 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3534100-dfd5-461d-a955-592caefc5fa6-kube-api-access-zczzl podName:c3534100-dfd5-461d-a955-592caefc5fa6 nodeName:}" failed. No retries permitted until 2026-01-08 23:42:15.178711501 +0000 UTC m=+1605.489870447 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zczzl" (UniqueName: "kubernetes.io/projected/c3534100-dfd5-461d-a955-592caefc5fa6-kube-api-access-zczzl") pod "keystone-4ee0-account-create-update-8b2hj" (UID: "c3534100-dfd5-461d-a955-592caefc5fa6") : failed to fetch token: serviceaccounts "galera-openstack" not found
Jan 08 23:42:13 crc kubenswrapper[4945]: E0108 23:42:13.178903 4945 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=<
Jan 08 23:42:13 crc kubenswrapper[4945]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-08T23:42:05Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock)
Jan 08 23:42:13 crc kubenswrapper[4945]: /etc/init.d/functions: line 589: 442 Alarm clock "$@"
Jan 08 23:42:13 crc kubenswrapper[4945]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-fr87r" message=<
Jan 08 23:42:13 crc kubenswrapper[4945]: Exiting ovn-controller (1) [FAILED]
Jan 08 23:42:13 crc kubenswrapper[4945]: Killing ovn-controller (1) [ OK ]
Jan 08 23:42:13 crc kubenswrapper[4945]: Killing ovn-controller (1) with SIGKILL [ OK ]
Jan 08 23:42:13 crc kubenswrapper[4945]: 2026-01-08T23:42:05Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock)
Jan 08 23:42:13 crc kubenswrapper[4945]: /etc/init.d/functions: line 589: 442 Alarm clock "$@"
Jan 08 23:42:13 crc kubenswrapper[4945]: >
Jan 08 23:42:13 crc kubenswrapper[4945]: E0108 23:42:13.178955 4945 kuberuntime_container.go:691] "PreStop hook failed" err=<
Jan 08 23:42:13 crc kubenswrapper[4945]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-08T23:42:05Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock)
Jan 08 23:42:13 crc kubenswrapper[4945]: /etc/init.d/functions: line 589: 442 Alarm clock "$@"
Jan 08 23:42:13 crc kubenswrapper[4945]: > pod="openstack/ovn-controller-fr87r" podUID="6c4f1760-d296-46a1-9ec8-cb64e543897c" containerName="ovn-controller" containerID="cri-o://1b0f37807bb46059b745ff0659643ccbaa1cf4a8f20a7c025f2d1511d9fb08db"
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.179087 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-fr87r" podUID="6c4f1760-d296-46a1-9ec8-cb64e543897c" containerName="ovn-controller" containerID="cri-o://1b0f37807bb46059b745ff0659643ccbaa1cf4a8f20a7c025f2d1511d9fb08db" gracePeriod=22
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.273721 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-fr87r_6c4f1760-d296-46a1-9ec8-cb64e543897c/ovn-controller/0.log"
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.273756 4945 generic.go:334] "Generic (PLEG): container finished" podID="6c4f1760-d296-46a1-9ec8-cb64e543897c" containerID="1b0f37807bb46059b745ff0659643ccbaa1cf4a8f20a7c025f2d1511d9fb08db" exitCode=137
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.273797 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fr87r" event={"ID":"6c4f1760-d296-46a1-9ec8-cb64e543897c","Type":"ContainerDied","Data":"1b0f37807bb46059b745ff0659643ccbaa1cf4a8f20a7c025f2d1511d9fb08db"}
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.280492 4945 generic.go:334] "Generic (PLEG): container finished" podID="e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" containerID="0e4014df7512e89b5d332f842e50840d513c77310ddfa321933cdc5b307230c9" exitCode=0
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.280535 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9","Type":"ContainerDied","Data":"0e4014df7512e89b5d332f842e50840d513c77310ddfa321933cdc5b307230c9"}
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.283411 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_eefc7456-a6c7-4442-aa3a-370a1f9b01fa/ovn-northd/0.log"
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.283439 4945 generic.go:334] "Generic (PLEG): container finished" podID="eefc7456-a6c7-4442-aa3a-370a1f9b01fa" containerID="fb3097d8a9d3e193dbb4bde56076f63b970772a4be7adc268f6f465dcf3c9975" exitCode=139
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.283609 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"eefc7456-a6c7-4442-aa3a-370a1f9b01fa","Type":"ContainerDied","Data":"fb3097d8a9d3e193dbb4bde56076f63b970772a4be7adc268f6f465dcf3c9975"}
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.288059 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bp9dq" event={"ID":"14229729-9655-4484-a4d2-eabe80fc5abb","Type":"ContainerDied","Data":"f2c65691782a398633edbb1461463cea7e96760216f29019a06addb1a77c9527"}
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.288125 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bp9dq"
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.312689 4945 generic.go:334] "Generic (PLEG): container finished" podID="71eb40d2-e481-445d-99ea-948b918b862d" containerID="3c8e62ad7bb3a5c1b692e76747e535c86452a618975faa4a7349a1cd8e6445b4" exitCode=0
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.312751 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71eb40d2-e481-445d-99ea-948b918b862d","Type":"ContainerDied","Data":"3c8e62ad7bb3a5c1b692e76747e535c86452a618975faa4a7349a1cd8e6445b4"}
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.312969 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.314287 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d9574582-49aa-48ec-8b43-bc55ed78a3d1","Type":"ContainerDied","Data":"5b60050490d930ca0f6510b2e6d366465aac37eb45b0fe3ef232a7b56a143932"}
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.314403 4945 scope.go:117] "RemoveContainer" containerID="01e6f6d29fbc37e10b337041462136b759e49d0ec4a3d75774556109c7c74797"
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.314355 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.316584 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4ee0-account-create-update-8b2hj"
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.359917 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bp9dq"]
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.361085 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_eefc7456-a6c7-4442-aa3a-370a1f9b01fa/ovn-northd/0.log"
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.361274 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.372143 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bp9dq"]
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.421934 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4ee0-account-create-update-8b2hj"]
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.434174 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4ee0-account-create-update-8b2hj"]
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.452390 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"]
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.457615 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"]
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.478529 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-config\") pod \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.479168 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-confd\") pod \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.479266 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-metrics-certs-tls-certs\") pod \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.479294 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-ovn-rundir\") pod \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.479324 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-ovn-northd-tls-certs\") pod \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.479360 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrs6k\" (UniqueName: \"kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-kube-api-access-vrs6k\") pod \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.479379 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-plugins\") pod \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.479399 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-server-conf\") pod \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.479410 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-config" (OuterVolumeSpecName: "config") pod "eefc7456-a6c7-4442-aa3a-370a1f9b01fa" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.479435 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-config-data\") pod \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.479549 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-plugins-conf\") pod \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.479590 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.479617 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-combined-ca-bundle\") pod \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.479767 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "eefc7456-a6c7-4442-aa3a-370a1f9b01fa" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.479941 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-pod-info\") pod \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.480029 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-erlang-cookie-secret\") pod \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.480118 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-erlang-cookie\") pod \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.480196 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-tls\") pod \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\" (UID: \"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.480235 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-scripts\") pod \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.480294 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxlld\" (UniqueName: \"kubernetes.io/projected/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-kube-api-access-cxlld\") pod \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\" (UID: \"eefc7456-a6c7-4442-aa3a-370a1f9b01fa\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.481260 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-config\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.481277 4945 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-ovn-rundir\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.486234 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-kube-api-access-vrs6k" (OuterVolumeSpecName: "kube-api-access-vrs6k") pod "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" (UID: "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9"). InnerVolumeSpecName "kube-api-access-vrs6k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.489017 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" (UID: "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.490200 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" (UID: "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.490247 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-kube-api-access-cxlld" (OuterVolumeSpecName: "kube-api-access-cxlld") pod "eefc7456-a6c7-4442-aa3a-370a1f9b01fa" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa"). InnerVolumeSpecName "kube-api-access-cxlld". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.490689 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-scripts" (OuterVolumeSpecName: "scripts") pod "eefc7456-a6c7-4442-aa3a-370a1f9b01fa" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.491712 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" (UID: "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.494313 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" (UID: "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.495831 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" (UID: "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.502658 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-pod-info" (OuterVolumeSpecName: "pod-info") pod "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" (UID: "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.504067 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" (UID: "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.522768 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eefc7456-a6c7-4442-aa3a-370a1f9b01fa" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.527172 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-config-data" (OuterVolumeSpecName: "config-data") pod "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" (UID: "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.582507 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3534100-dfd5-461d-a955-592caefc5fa6-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.582542 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrs6k\" (UniqueName: \"kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-kube-api-access-vrs6k\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.582553 4945 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.582562 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-config-data\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.582571 4945 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.582594 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.582605 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.582615 4945 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-pod-info\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.582625 4945 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.582635 4945 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.582643 4945 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.582652 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-scripts\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.582662 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxlld\" (UniqueName: \"kubernetes.io/projected/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-kube-api-access-cxlld\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.582671 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zczzl\" (UniqueName: \"kubernetes.io/projected/c3534100-dfd5-461d-a955-592caefc5fa6-kube-api-access-zczzl\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.613814 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-server-conf" (OuterVolumeSpecName: "server-conf") pod "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" (UID: "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.614030 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "eefc7456-a6c7-4442-aa3a-370a1f9b01fa" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.618496 4945 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.621442 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-fr87r_6c4f1760-d296-46a1-9ec8-cb64e543897c/ovn-controller/0.log"
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.621503 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fr87r"
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.625714 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" (UID: "e920b84a-bd1b-4649-9cc0-e3b239d6a5b9"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.633838 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "eefc7456-a6c7-4442-aa3a-370a1f9b01fa" (UID: "eefc7456-a6c7-4442-aa3a-370a1f9b01fa"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.684285 4945 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.684340 4945 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.684353 4945 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.684362 4945 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/eefc7456-a6c7-4442-aa3a-370a1f9b01fa-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.684371 4945 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9-server-conf\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.786434 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-run\") pod \"6c4f1760-d296-46a1-9ec8-cb64e543897c\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.786955 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4f1760-d296-46a1-9ec8-cb64e543897c-combined-ca-bundle\") pod \"6c4f1760-d296-46a1-9ec8-cb64e543897c\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.787010 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c4f1760-d296-46a1-9ec8-cb64e543897c-ovn-controller-tls-certs\") pod \"6c4f1760-d296-46a1-9ec8-cb64e543897c\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.787029 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-log-ovn\") pod \"6c4f1760-d296-46a1-9ec8-cb64e543897c\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.786583 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-run" (OuterVolumeSpecName: "var-run") pod "6c4f1760-d296-46a1-9ec8-cb64e543897c" (UID: "6c4f1760-d296-46a1-9ec8-cb64e543897c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.787119 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c4f1760-d296-46a1-9ec8-cb64e543897c-scripts\") pod \"6c4f1760-d296-46a1-9ec8-cb64e543897c\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.787161 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6c4f1760-d296-46a1-9ec8-cb64e543897c" (UID: "6c4f1760-d296-46a1-9ec8-cb64e543897c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.787257 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-run-ovn\") pod \"6c4f1760-d296-46a1-9ec8-cb64e543897c\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.787305 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86w89\" (UniqueName: \"kubernetes.io/projected/6c4f1760-d296-46a1-9ec8-cb64e543897c-kube-api-access-86w89\") pod \"6c4f1760-d296-46a1-9ec8-cb64e543897c\" (UID: \"6c4f1760-d296-46a1-9ec8-cb64e543897c\") "
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.787707 4945 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-run\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.787728 4945 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.787811 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6c4f1760-d296-46a1-9ec8-cb64e543897c" (UID: "6c4f1760-d296-46a1-9ec8-cb64e543897c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.788714 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c4f1760-d296-46a1-9ec8-cb64e543897c-scripts" (OuterVolumeSpecName: "scripts") pod "6c4f1760-d296-46a1-9ec8-cb64e543897c" (UID: "6c4f1760-d296-46a1-9ec8-cb64e543897c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.795665 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c4f1760-d296-46a1-9ec8-cb64e543897c-kube-api-access-86w89" (OuterVolumeSpecName: "kube-api-access-86w89") pod "6c4f1760-d296-46a1-9ec8-cb64e543897c" (UID: "6c4f1760-d296-46a1-9ec8-cb64e543897c"). InnerVolumeSpecName "kube-api-access-86w89". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.846088 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4f1760-d296-46a1-9ec8-cb64e543897c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c4f1760-d296-46a1-9ec8-cb64e543897c" (UID: "6c4f1760-d296-46a1-9ec8-cb64e543897c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.858129 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4f1760-d296-46a1-9ec8-cb64e543897c-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "6c4f1760-d296-46a1-9ec8-cb64e543897c" (UID: "6c4f1760-d296-46a1-9ec8-cb64e543897c"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.889564 4945 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c4f1760-d296-46a1-9ec8-cb64e543897c-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.889598 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86w89\" (UniqueName: \"kubernetes.io/projected/6c4f1760-d296-46a1-9ec8-cb64e543897c-kube-api-access-86w89\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.889608 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4f1760-d296-46a1-9ec8-cb64e543897c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.889618 4945 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c4f1760-d296-46a1-9ec8-cb64e543897c-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.889627 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c4f1760-d296-46a1-9ec8-cb64e543897c-scripts\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:13 crc kubenswrapper[4945]: I0108 23:42:13.921932 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.012393 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04a2b873-3034-4b9f-9daf-81db6749d45f" path="/var/lib/kubelet/pods/04a2b873-3034-4b9f-9daf-81db6749d45f/volumes"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.013372 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14229729-9655-4484-a4d2-eabe80fc5abb" path="/var/lib/kubelet/pods/14229729-9655-4484-a4d2-eabe80fc5abb/volumes"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.013803 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="227e0b3d-d5ba-4265-a7b9-0419deb61603" path="/var/lib/kubelet/pods/227e0b3d-d5ba-4265-a7b9-0419deb61603/volumes"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.017015 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eb23b1e-c7b1-465a-a91c-6042942e604a" path="/var/lib/kubelet/pods/2eb23b1e-c7b1-465a-a91c-6042942e604a/volumes"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.017753 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37125f43-8fb6-4625-a260-8d43cdbe167a" path="/var/lib/kubelet/pods/37125f43-8fb6-4625-a260-8d43cdbe167a/volumes"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.018711 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b4a4044-c9b6-49c9-98ed-446af4a3fe1f" path="/var/lib/kubelet/pods/3b4a4044-c9b6-49c9-98ed-446af4a3fe1f/volumes"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.019371 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b8f132e-3fda-4a38-8416-1055a62e7552" path="/var/lib/kubelet/pods/7b8f132e-3fda-4a38-8416-1055a62e7552/volumes"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.019974 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f823122-da64-4ac4-aa14-96bc8f2f9c1c" path="/var/lib/kubelet/pods/9f823122-da64-4ac4-aa14-96bc8f2f9c1c/volumes"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.021086 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adb334a7-9a7f-4e20-9dc8-092b9372bb10" path="/var/lib/kubelet/pods/adb334a7-9a7f-4e20-9dc8-092b9372bb10/volumes"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.021917 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" path="/var/lib/kubelet/pods/bb7afdb8-52e2-4078-8a6e-5f1fea2acd59/volumes"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.023342 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3534100-dfd5-461d-a955-592caefc5fa6" path="/var/lib/kubelet/pods/c3534100-dfd5-461d-a955-592caefc5fa6/volumes"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.024310 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9674718-110d-4241-a199-9663979defde" path="/var/lib/kubelet/pods/c9674718-110d-4241-a199-9663979defde/volumes"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.025421 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9574582-49aa-48ec-8b43-bc55ed78a3d1" path="/var/lib/kubelet/pods/d9574582-49aa-48ec-8b43-bc55ed78a3d1/volumes"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.026603 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea1eec40-294d-4749-bdb2-678289eeb815" path="/var/lib/kubelet/pods/ea1eec40-294d-4749-bdb2-678289eeb815/volumes"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.052556 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.092719 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-config-data\") pod \"71eb40d2-e481-445d-99ea-948b918b862d\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.092952 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71eb40d2-e481-445d-99ea-948b918b862d-pod-info\") pod \"71eb40d2-e481-445d-99ea-948b918b862d\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.093039 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-confd\") pod \"71eb40d2-e481-445d-99ea-948b918b862d\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.093072 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-server-conf\") pod \"71eb40d2-e481-445d-99ea-948b918b862d\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.093152 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2btbg\" (UniqueName: \"kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-kube-api-access-2btbg\") pod \"71eb40d2-e481-445d-99ea-948b918b862d\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.093181 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-plugins-conf\") pod \"71eb40d2-e481-445d-99ea-948b918b862d\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.093230 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-erlang-cookie\") pod \"71eb40d2-e481-445d-99ea-948b918b862d\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.093279 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-tls\") pod \"71eb40d2-e481-445d-99ea-948b918b862d\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.093313 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71eb40d2-e481-445d-99ea-948b918b862d-erlang-cookie-secret\") pod \"71eb40d2-e481-445d-99ea-948b918b862d\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.093333 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"71eb40d2-e481-445d-99ea-948b918b862d\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.093368 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-plugins\") pod \"71eb40d2-e481-445d-99ea-948b918b862d\" (UID: \"71eb40d2-e481-445d-99ea-948b918b862d\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.094229 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "71eb40d2-e481-445d-99ea-948b918b862d" (UID: "71eb40d2-e481-445d-99ea-948b918b862d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.094711 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "71eb40d2-e481-445d-99ea-948b918b862d" (UID: "71eb40d2-e481-445d-99ea-948b918b862d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.094969 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "71eb40d2-e481-445d-99ea-948b918b862d" (UID: "71eb40d2-e481-445d-99ea-948b918b862d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.099605 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "71eb40d2-e481-445d-99ea-948b918b862d" (UID: "71eb40d2-e481-445d-99ea-948b918b862d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.105408 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71eb40d2-e481-445d-99ea-948b918b862d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "71eb40d2-e481-445d-99ea-948b918b862d" (UID: "71eb40d2-e481-445d-99ea-948b918b862d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.105459 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/71eb40d2-e481-445d-99ea-948b918b862d-pod-info" (OuterVolumeSpecName: "pod-info") pod "71eb40d2-e481-445d-99ea-948b918b862d" (UID: "71eb40d2-e481-445d-99ea-948b918b862d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.105418 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-kube-api-access-2btbg" (OuterVolumeSpecName: "kube-api-access-2btbg") pod "71eb40d2-e481-445d-99ea-948b918b862d" (UID: "71eb40d2-e481-445d-99ea-948b918b862d"). InnerVolumeSpecName "kube-api-access-2btbg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.117116 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "71eb40d2-e481-445d-99ea-948b918b862d" (UID: "71eb40d2-e481-445d-99ea-948b918b862d"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.122423 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-config-data" (OuterVolumeSpecName: "config-data") pod "71eb40d2-e481-445d-99ea-948b918b862d" (UID: "71eb40d2-e481-445d-99ea-948b918b862d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.146391 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-server-conf" (OuterVolumeSpecName: "server-conf") pod "71eb40d2-e481-445d-99ea-948b918b862d" (UID: "71eb40d2-e481-445d-99ea-948b918b862d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.174088 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "71eb40d2-e481-445d-99ea-948b918b862d" (UID: "71eb40d2-e481-445d-99ea-948b918b862d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.195362 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/17371a82-14e3-4830-b99f-a2b46b4f4366-config-data-generated\") pod \"17371a82-14e3-4830-b99f-a2b46b4f4366\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.195500 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"17371a82-14e3-4830-b99f-a2b46b4f4366\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.195549 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-operator-scripts\") pod \"17371a82-14e3-4830-b99f-a2b46b4f4366\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.195648 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-config-data-default\") pod \"17371a82-14e3-4830-b99f-a2b46b4f4366\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.195702 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/17371a82-14e3-4830-b99f-a2b46b4f4366-galera-tls-certs\") pod \"17371a82-14e3-4830-b99f-a2b46b4f4366\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.195728 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxqmx\" (UniqueName: \"kubernetes.io/projected/17371a82-14e3-4830-b99f-a2b46b4f4366-kube-api-access-jxqmx\") pod \"17371a82-14e3-4830-b99f-a2b46b4f4366\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.195775 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17371a82-14e3-4830-b99f-a2b46b4f4366-combined-ca-bundle\") pod \"17371a82-14e3-4830-b99f-a2b46b4f4366\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.195822 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-kolla-config\") pod \"17371a82-14e3-4830-b99f-a2b46b4f4366\" (UID: \"17371a82-14e3-4830-b99f-a2b46b4f4366\") "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.195921 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17371a82-14e3-4830-b99f-a2b46b4f4366-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "17371a82-14e3-4830-b99f-a2b46b4f4366" (UID: "17371a82-14e3-4830-b99f-a2b46b4f4366"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.196412 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17371a82-14e3-4830-b99f-a2b46b4f4366" (UID: "17371a82-14e3-4830-b99f-a2b46b4f4366"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.196600 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2btbg\" (UniqueName: \"kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-kube-api-access-2btbg\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.196621 4945 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.196631 4945 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.196642 4945 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.196650 4945 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/17371a82-14e3-4830-b99f-a2b46b4f4366-config-data-generated\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.196659 4945 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71eb40d2-e481-445d-99ea-948b918b862d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.196684 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.196693 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.196702 4945 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.196711 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-config-data\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.196720 4945 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71eb40d2-e481-445d-99ea-948b918b862d-pod-info\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.196729 4945 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71eb40d2-e481-445d-99ea-948b918b862d-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.196737 4945 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71eb40d2-e481-445d-99ea-948b918b862d-server-conf\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.197143 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "17371a82-14e3-4830-b99f-a2b46b4f4366" (UID: "17371a82-14e3-4830-b99f-a2b46b4f4366"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.197798 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "17371a82-14e3-4830-b99f-a2b46b4f4366" (UID: "17371a82-14e3-4830-b99f-a2b46b4f4366"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.210637 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17371a82-14e3-4830-b99f-a2b46b4f4366-kube-api-access-jxqmx" (OuterVolumeSpecName: "kube-api-access-jxqmx") pod "17371a82-14e3-4830-b99f-a2b46b4f4366" (UID: "17371a82-14e3-4830-b99f-a2b46b4f4366"). InnerVolumeSpecName "kube-api-access-jxqmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.220450 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "mysql-db") pod "17371a82-14e3-4830-b99f-a2b46b4f4366" (UID: "17371a82-14e3-4830-b99f-a2b46b4f4366"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.228219 4945 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.242485 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17371a82-14e3-4830-b99f-a2b46b4f4366-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17371a82-14e3-4830-b99f-a2b46b4f4366" (UID: "17371a82-14e3-4830-b99f-a2b46b4f4366"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.253357 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17371a82-14e3-4830-b99f-a2b46b4f4366-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "17371a82-14e3-4830-b99f-a2b46b4f4366" (UID: "17371a82-14e3-4830-b99f-a2b46b4f4366"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.298626 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17371a82-14e3-4830-b99f-a2b46b4f4366-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.298668 4945 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-kolla-config\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.298708 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.298722 4945 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.298736 4945 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/17371a82-14e3-4830-b99f-a2b46b4f4366-config-data-default\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.298751 4945 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/17371a82-14e3-4830-b99f-a2b46b4f4366-galera-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.298764 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxqmx\" (UniqueName: \"kubernetes.io/projected/17371a82-14e3-4830-b99f-a2b46b4f4366-kube-api-access-jxqmx\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.323960 4945 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.330029 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-fr87r_6c4f1760-d296-46a1-9ec8-cb64e543897c/ovn-controller/0.log"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.330125 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fr87r" event={"ID":"6c4f1760-d296-46a1-9ec8-cb64e543897c","Type":"ContainerDied","Data":"3a31c0530c0b7c42dc0de4c1f2432d9fbeb23b6869b93b080bd633d81423ac30"}
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.330156 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fr87r"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.330188 4945 scope.go:117] "RemoveContainer" containerID="1b0f37807bb46059b745ff0659643ccbaa1cf4a8f20a7c025f2d1511d9fb08db"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.336530 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.336571 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e920b84a-bd1b-4649-9cc0-e3b239d6a5b9","Type":"ContainerDied","Data":"cc2726a84e1996e305b3da7f7f3df5215be6dfd07a4a76b111658bbf2daaf6d2"}
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.338772 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_eefc7456-a6c7-4442-aa3a-370a1f9b01fa/ovn-northd/0.log"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.338869 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.338950 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"eefc7456-a6c7-4442-aa3a-370a1f9b01fa","Type":"ContainerDied","Data":"41e43fd5ffa8c59c1e5b0b7bca180a4d44bbce1523a4b568e90c29e7d4fef9f5"}
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.342418 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"71eb40d2-e481-445d-99ea-948b918b862d","Type":"ContainerDied","Data":"dda13f6fb165777de035f98f50fb1f342b1baf36040a5f518c70e5e066670cce"}
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.342465 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.351207 4945 generic.go:334] "Generic (PLEG): container finished" podID="17371a82-14e3-4830-b99f-a2b46b4f4366" containerID="a6d343f7688e540d51f500254defa94f4632dc20f9dbe324ab4669bd7f25c69e" exitCode=0
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.351297 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"17371a82-14e3-4830-b99f-a2b46b4f4366","Type":"ContainerDied","Data":"a6d343f7688e540d51f500254defa94f4632dc20f9dbe324ab4669bd7f25c69e"}
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.351356 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"17371a82-14e3-4830-b99f-a2b46b4f4366","Type":"ContainerDied","Data":"cf395830dc3cb44dafa10eb210f28826cfa35a8e63fa241421e4b199f3ea2047"}
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.351455 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.355434 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-fr87r"]
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.358452 4945 generic.go:334] "Generic (PLEG): container finished" podID="3b682d87-6d87-4d38-b1c5-a5e4c3664472" containerID="d5e19d3d92fe8055cf6b5088d170ddda70b5ab7d24cd3f1d890303c9f017d30d" exitCode=0
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.358488 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6794547bf7-wqlnm" event={"ID":"3b682d87-6d87-4d38-b1c5-a5e4c3664472","Type":"ContainerDied","Data":"d5e19d3d92fe8055cf6b5088d170ddda70b5ab7d24cd3f1d890303c9f017d30d"}
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.360902 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-fr87r"]
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.364806 4945 scope.go:117] "RemoveContainer" containerID="0e4014df7512e89b5d332f842e50840d513c77310ddfa321933cdc5b307230c9"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.378292 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.386062 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"]
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.403905 4945 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.412178 4945 scope.go:117] "RemoveContainer" containerID="c81f4cba79646c4284e071dbc05ea1b22c10137dd94b3016f75e77dd3cfb0060"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.416894 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.433956 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.439865 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"]
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.447844 4945 scope.go:117] "RemoveContainer" containerID="a785ea69394c633a9012de634007aaa1ea39fa590a19423eeb91abe2b39b2bdf"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.450094 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"]
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.457278 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.465385 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.503120 4945 scope.go:117] "RemoveContainer" containerID="fb3097d8a9d3e193dbb4bde56076f63b970772a4be7adc268f6f465dcf3c9975"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.523710 4945 scope.go:117] "RemoveContainer" containerID="3c8e62ad7bb3a5c1b692e76747e535c86452a618975faa4a7349a1cd8e6445b4"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.543981 4945 scope.go:117] "RemoveContainer" containerID="b84caeef5cc5ad10edc0c8450a4bb95aea44a01ae1bdb0470e03aacfe261b00d"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.575243 4945 scope.go:117] "RemoveContainer" containerID="a6d343f7688e540d51f500254defa94f4632dc20f9dbe324ab4669bd7f25c69e"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.596936 4945 scope.go:117] "RemoveContainer" containerID="654d08b7eab848b3559e43492dbaf993129d4c0a54046399872d08ddd5022848"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.638139 4945 scope.go:117] "RemoveContainer" containerID="a6d343f7688e540d51f500254defa94f4632dc20f9dbe324ab4669bd7f25c69e"
Jan 08 23:42:14 crc kubenswrapper[4945]: E0108 23:42:14.638535 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6d343f7688e540d51f500254defa94f4632dc20f9dbe324ab4669bd7f25c69e\": container with ID starting with a6d343f7688e540d51f500254defa94f4632dc20f9dbe324ab4669bd7f25c69e not found: ID does not exist" containerID="a6d343f7688e540d51f500254defa94f4632dc20f9dbe324ab4669bd7f25c69e"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.638574 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6d343f7688e540d51f500254defa94f4632dc20f9dbe324ab4669bd7f25c69e"} err="failed to get container status \"a6d343f7688e540d51f500254defa94f4632dc20f9dbe324ab4669bd7f25c69e\": rpc error: code = NotFound desc = could not find container \"a6d343f7688e540d51f500254defa94f4632dc20f9dbe324ab4669bd7f25c69e\": container with ID starting with a6d343f7688e540d51f500254defa94f4632dc20f9dbe324ab4669bd7f25c69e not found: ID does not exist"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.638602 4945 scope.go:117] "RemoveContainer" containerID="654d08b7eab848b3559e43492dbaf993129d4c0a54046399872d08ddd5022848"
Jan 08 23:42:14 crc kubenswrapper[4945]: E0108 23:42:14.638846 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"654d08b7eab848b3559e43492dbaf993129d4c0a54046399872d08ddd5022848\": container with ID starting with 654d08b7eab848b3559e43492dbaf993129d4c0a54046399872d08ddd5022848 not found: ID does not exist" containerID="654d08b7eab848b3559e43492dbaf993129d4c0a54046399872d08ddd5022848"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.638863 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"654d08b7eab848b3559e43492dbaf993129d4c0a54046399872d08ddd5022848"} err="failed to get container status \"654d08b7eab848b3559e43492dbaf993129d4c0a54046399872d08ddd5022848\": rpc error: code = NotFound desc = could not find container \"654d08b7eab848b3559e43492dbaf993129d4c0a54046399872d08ddd5022848\": container with ID starting with 654d08b7eab848b3559e43492dbaf993129d4c0a54046399872d08ddd5022848 not found: ID does not exist"
Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.681645 4945 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.808485 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-internal-tls-certs\") pod \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.808612 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-config\") pod \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.808639 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-httpd-config\") pod \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.808661 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-public-tls-certs\") pod \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.808733 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwpxn\" (UniqueName: \"kubernetes.io/projected/3b682d87-6d87-4d38-b1c5-a5e4c3664472-kube-api-access-xwpxn\") pod \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.808756 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-ovndb-tls-certs\") pod \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.808783 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-combined-ca-bundle\") pod \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\" (UID: \"3b682d87-6d87-4d38-b1c5-a5e4c3664472\") " Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.826690 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b682d87-6d87-4d38-b1c5-a5e4c3664472-kube-api-access-xwpxn" (OuterVolumeSpecName: "kube-api-access-xwpxn") pod "3b682d87-6d87-4d38-b1c5-a5e4c3664472" (UID: "3b682d87-6d87-4d38-b1c5-a5e4c3664472"). InnerVolumeSpecName "kube-api-access-xwpxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.826818 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3b682d87-6d87-4d38-b1c5-a5e4c3664472" (UID: "3b682d87-6d87-4d38-b1c5-a5e4c3664472"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.856127 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3b682d87-6d87-4d38-b1c5-a5e4c3664472" (UID: "3b682d87-6d87-4d38-b1c5-a5e4c3664472"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.856455 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3b682d87-6d87-4d38-b1c5-a5e4c3664472" (UID: "3b682d87-6d87-4d38-b1c5-a5e4c3664472"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.860433 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b682d87-6d87-4d38-b1c5-a5e4c3664472" (UID: "3b682d87-6d87-4d38-b1c5-a5e4c3664472"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.863290 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-config" (OuterVolumeSpecName: "config") pod "3b682d87-6d87-4d38-b1c5-a5e4c3664472" (UID: "3b682d87-6d87-4d38-b1c5-a5e4c3664472"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.898748 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3b682d87-6d87-4d38-b1c5-a5e4c3664472" (UID: "3b682d87-6d87-4d38-b1c5-a5e4c3664472"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.910107 4945 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.910145 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.910158 4945 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.910169 4945 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.910180 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwpxn\" (UniqueName: \"kubernetes.io/projected/3b682d87-6d87-4d38-b1c5-a5e4c3664472-kube-api-access-xwpxn\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.910192 4945 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.910203 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b682d87-6d87-4d38-b1c5-a5e4c3664472-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.910701 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76b55d6f4b-r5hn5" Jan 08 23:42:14 crc kubenswrapper[4945]: I0108 23:42:14.953421 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.010966 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdfpv\" (UniqueName: \"kubernetes.io/projected/842a2e91-c7e4-4435-aa81-c1a888cf6a51-kube-api-access-vdfpv\") pod \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.011047 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-credential-keys\") pod \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.011080 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-internal-tls-certs\") pod \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.011100 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-public-tls-certs\") pod \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.011164 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-fernet-keys\") pod \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.011208 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-scripts\") pod \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.011237 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-combined-ca-bundle\") pod \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.011264 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-config-data\") pod \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\" (UID: \"842a2e91-c7e4-4435-aa81-c1a888cf6a51\") " Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.015192 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "842a2e91-c7e4-4435-aa81-c1a888cf6a51" (UID: "842a2e91-c7e4-4435-aa81-c1a888cf6a51"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.015893 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/842a2e91-c7e4-4435-aa81-c1a888cf6a51-kube-api-access-vdfpv" (OuterVolumeSpecName: "kube-api-access-vdfpv") pod "842a2e91-c7e4-4435-aa81-c1a888cf6a51" (UID: "842a2e91-c7e4-4435-aa81-c1a888cf6a51"). InnerVolumeSpecName "kube-api-access-vdfpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.016461 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-scripts" (OuterVolumeSpecName: "scripts") pod "842a2e91-c7e4-4435-aa81-c1a888cf6a51" (UID: "842a2e91-c7e4-4435-aa81-c1a888cf6a51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.016487 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "842a2e91-c7e4-4435-aa81-c1a888cf6a51" (UID: "842a2e91-c7e4-4435-aa81-c1a888cf6a51"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.031169 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "842a2e91-c7e4-4435-aa81-c1a888cf6a51" (UID: "842a2e91-c7e4-4435-aa81-c1a888cf6a51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.034295 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-config-data" (OuterVolumeSpecName: "config-data") pod "842a2e91-c7e4-4435-aa81-c1a888cf6a51" (UID: "842a2e91-c7e4-4435-aa81-c1a888cf6a51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.046593 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "842a2e91-c7e4-4435-aa81-c1a888cf6a51" (UID: "842a2e91-c7e4-4435-aa81-c1a888cf6a51"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.051302 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "842a2e91-c7e4-4435-aa81-c1a888cf6a51" (UID: "842a2e91-c7e4-4435-aa81-c1a888cf6a51"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.112458 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-etc-machine-id\") pod \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.112532 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-config-data-custom\") pod \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.112593 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-combined-ca-bundle\") pod \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.112577 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a" (UID: "f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.112632 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5skz\" (UniqueName: \"kubernetes.io/projected/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-kube-api-access-r5skz\") pod \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.112654 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-config-data\") pod \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.112675 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-scripts\") pod \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\" (UID: \"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a\") " Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.113294 4945 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.113307 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.113325 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.113335 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.113343 4945 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.113353 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdfpv\" (UniqueName: \"kubernetes.io/projected/842a2e91-c7e4-4435-aa81-c1a888cf6a51-kube-api-access-vdfpv\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.113361 4945 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.113369 4945 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.113377 4945 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/842a2e91-c7e4-4435-aa81-c1a888cf6a51-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.117435 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-kube-api-access-r5skz" (OuterVolumeSpecName: "kube-api-access-r5skz") pod "f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a" (UID: "f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a"). InnerVolumeSpecName "kube-api-access-r5skz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.117512 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-scripts" (OuterVolumeSpecName: "scripts") pod "f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a" (UID: "f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.118309 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a" (UID: "f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.153291 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a" (UID: "f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.173767 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-config-data" (OuterVolumeSpecName: "config-data") pod "f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a" (UID: "f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.214860 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-config-data\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.214902 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5skz\" (UniqueName: \"kubernetes.io/projected/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-kube-api-access-r5skz\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.214921 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-scripts\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.214932 4945 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.214946 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.346195 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="3b4a4044-c9b6-49c9-98ed-446af4a3fe1f" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.181:8081/readyz\": context deadline exceeded" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.370041 4945 generic.go:334] "Generic (PLEG): container finished" podID="f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a" containerID="cbf93118fa8eb1c82045354d9a9fe146f41f2d5df8fc45d489954e2e3b4a6725" exitCode=0 Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.370073 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a","Type":"ContainerDied","Data":"cbf93118fa8eb1c82045354d9a9fe146f41f2d5df8fc45d489954e2e3b4a6725"} Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.370112 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a","Type":"ContainerDied","Data":"0dea239b5d2847c6d6e07195e320989a3480cc645a97a80bb53980a6369e073a"} Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.370128 4945 scope.go:117] "RemoveContainer" containerID="a1ea98a5191a9c7ea8ad3313abd10e08aea6b734652be5308b7a4b1889a1edf1" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.370130 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.373102 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6794547bf7-wqlnm" event={"ID":"3b682d87-6d87-4d38-b1c5-a5e4c3664472","Type":"ContainerDied","Data":"902468eada43a63143cf4f869dba61a92838abfee293b1f8c7ed3d63d557eb36"} Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.373182 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6794547bf7-wqlnm" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.380038 4945 generic.go:334] "Generic (PLEG): container finished" podID="842a2e91-c7e4-4435-aa81-c1a888cf6a51" containerID="1c96e5de79d52bdd02863443aa5dfe69974633d8662e47791a47e4f419e2a78c" exitCode=0 Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.380076 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76b55d6f4b-r5hn5" event={"ID":"842a2e91-c7e4-4435-aa81-c1a888cf6a51","Type":"ContainerDied","Data":"1c96e5de79d52bdd02863443aa5dfe69974633d8662e47791a47e4f419e2a78c"} Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.380100 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76b55d6f4b-r5hn5" event={"ID":"842a2e91-c7e4-4435-aa81-c1a888cf6a51","Type":"ContainerDied","Data":"79aec7511bd1139b828df755afb955eddb98e3070b90c950e6a4a87e109570a3"} Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.380150 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76b55d6f4b-r5hn5" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.400718 4945 scope.go:117] "RemoveContainer" containerID="cbf93118fa8eb1c82045354d9a9fe146f41f2d5df8fc45d489954e2e3b4a6725" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.411979 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.419660 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.431327 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6794547bf7-wqlnm"] Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.437800 4945 scope.go:117] "RemoveContainer" containerID="a1ea98a5191a9c7ea8ad3313abd10e08aea6b734652be5308b7a4b1889a1edf1" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.453731 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6794547bf7-wqlnm"] Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.453800 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-76b55d6f4b-r5hn5"] Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.453815 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-76b55d6f4b-r5hn5"] Jan 08 23:42:15 crc kubenswrapper[4945]: E0108 23:42:15.479855 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1ea98a5191a9c7ea8ad3313abd10e08aea6b734652be5308b7a4b1889a1edf1\": container with ID starting with a1ea98a5191a9c7ea8ad3313abd10e08aea6b734652be5308b7a4b1889a1edf1 not found: ID does not exist" containerID="a1ea98a5191a9c7ea8ad3313abd10e08aea6b734652be5308b7a4b1889a1edf1" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.479913 4945 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a1ea98a5191a9c7ea8ad3313abd10e08aea6b734652be5308b7a4b1889a1edf1"} err="failed to get container status \"a1ea98a5191a9c7ea8ad3313abd10e08aea6b734652be5308b7a4b1889a1edf1\": rpc error: code = NotFound desc = could not find container \"a1ea98a5191a9c7ea8ad3313abd10e08aea6b734652be5308b7a4b1889a1edf1\": container with ID starting with a1ea98a5191a9c7ea8ad3313abd10e08aea6b734652be5308b7a4b1889a1edf1 not found: ID does not exist" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.479949 4945 scope.go:117] "RemoveContainer" containerID="cbf93118fa8eb1c82045354d9a9fe146f41f2d5df8fc45d489954e2e3b4a6725" Jan 08 23:42:15 crc kubenswrapper[4945]: E0108 23:42:15.480251 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbf93118fa8eb1c82045354d9a9fe146f41f2d5df8fc45d489954e2e3b4a6725\": container with ID starting with cbf93118fa8eb1c82045354d9a9fe146f41f2d5df8fc45d489954e2e3b4a6725 not found: ID does not exist" containerID="cbf93118fa8eb1c82045354d9a9fe146f41f2d5df8fc45d489954e2e3b4a6725" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.480276 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbf93118fa8eb1c82045354d9a9fe146f41f2d5df8fc45d489954e2e3b4a6725"} err="failed to get container status \"cbf93118fa8eb1c82045354d9a9fe146f41f2d5df8fc45d489954e2e3b4a6725\": rpc error: code = NotFound desc = could not find container \"cbf93118fa8eb1c82045354d9a9fe146f41f2d5df8fc45d489954e2e3b4a6725\": container with ID starting with cbf93118fa8eb1c82045354d9a9fe146f41f2d5df8fc45d489954e2e3b4a6725 not found: ID does not exist" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.480292 4945 scope.go:117] "RemoveContainer" containerID="40e878309fb2714dc92ffc1ca85d0a0b40ba57da80d5ce071bad31bc2db4462c" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.524635 4945 scope.go:117] "RemoveContainer" containerID="d5e19d3d92fe8055cf6b5088d170ddda70b5ab7d24cd3f1d890303c9f017d30d" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.556933 4945 scope.go:117] "RemoveContainer" containerID="1c96e5de79d52bdd02863443aa5dfe69974633d8662e47791a47e4f419e2a78c" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.580670 4945 scope.go:117] "RemoveContainer" containerID="1c96e5de79d52bdd02863443aa5dfe69974633d8662e47791a47e4f419e2a78c" Jan 08 23:42:15 crc kubenswrapper[4945]: E0108 23:42:15.581382 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c96e5de79d52bdd02863443aa5dfe69974633d8662e47791a47e4f419e2a78c\": container with ID starting with 1c96e5de79d52bdd02863443aa5dfe69974633d8662e47791a47e4f419e2a78c not found: ID does not exist" containerID="1c96e5de79d52bdd02863443aa5dfe69974633d8662e47791a47e4f419e2a78c" Jan 08 23:42:15 crc kubenswrapper[4945]: I0108 23:42:15.581447 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c96e5de79d52bdd02863443aa5dfe69974633d8662e47791a47e4f419e2a78c"} err="failed to get container status \"1c96e5de79d52bdd02863443aa5dfe69974633d8662e47791a47e4f419e2a78c\": rpc error: code = NotFound desc = could not find container \"1c96e5de79d52bdd02863443aa5dfe69974633d8662e47791a47e4f419e2a78c\": container with ID starting with 1c96e5de79d52bdd02863443aa5dfe69974633d8662e47791a47e4f419e2a78c not found: ID does not exist" Jan 08 23:42:16 crc kubenswrapper[4945]: I0108 23:42:16.007738 4945 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17371a82-14e3-4830-b99f-a2b46b4f4366" path="/var/lib/kubelet/pods/17371a82-14e3-4830-b99f-a2b46b4f4366/volumes" Jan 08 23:42:16 crc kubenswrapper[4945]: I0108 23:42:16.008350 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b682d87-6d87-4d38-b1c5-a5e4c3664472" path="/var/lib/kubelet/pods/3b682d87-6d87-4d38-b1c5-a5e4c3664472/volumes" Jan 08 23:42:16 crc kubenswrapper[4945]: I0108 23:42:16.008936 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c4f1760-d296-46a1-9ec8-cb64e543897c" path="/var/lib/kubelet/pods/6c4f1760-d296-46a1-9ec8-cb64e543897c/volumes" Jan 08 23:42:16 crc kubenswrapper[4945]: I0108 23:42:16.010466 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71eb40d2-e481-445d-99ea-948b918b862d" path="/var/lib/kubelet/pods/71eb40d2-e481-445d-99ea-948b918b862d/volumes" Jan 08 23:42:16 crc kubenswrapper[4945]: I0108 23:42:16.011026 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="842a2e91-c7e4-4435-aa81-c1a888cf6a51" path="/var/lib/kubelet/pods/842a2e91-c7e4-4435-aa81-c1a888cf6a51/volumes" Jan 08 23:42:16 crc kubenswrapper[4945]: I0108 23:42:16.012067 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" path="/var/lib/kubelet/pods/e920b84a-bd1b-4649-9cc0-e3b239d6a5b9/volumes" Jan 08 23:42:16 crc kubenswrapper[4945]: I0108 23:42:16.012746 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eefc7456-a6c7-4442-aa3a-370a1f9b01fa" path="/var/lib/kubelet/pods/eefc7456-a6c7-4442-aa3a-370a1f9b01fa/volumes" Jan 08 23:42:16 crc kubenswrapper[4945]: I0108 23:42:16.013351 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a" path="/var/lib/kubelet/pods/f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a/volumes" Jan 08 23:42:17 crc kubenswrapper[4945]: E0108 23:42:17.459393 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 08 23:42:17 crc kubenswrapper[4945]: E0108 23:42:17.460503 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 08 23:42:17 crc kubenswrapper[4945]: E0108 23:42:17.460729 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 08 23:42:17 crc kubenswrapper[4945]: E0108 23:42:17.461454 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 08 23:42:17 crc kubenswrapper[4945]: E0108 23:42:17.461495 4945 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hfhkg" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovsdb-server" Jan 08 23:42:17 crc kubenswrapper[4945]: E0108 23:42:17.462465 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 08 23:42:17 crc kubenswrapper[4945]: E0108 23:42:17.464475 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 08 23:42:17 crc kubenswrapper[4945]: E0108 23:42:17.464502 4945 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hfhkg" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovs-vswitchd" Jan 08 23:42:18 crc kubenswrapper[4945]: I0108 23:42:18.003817 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8" Jan 08 23:42:18 crc kubenswrapper[4945]: E0108 23:42:18.004052 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:42:18 crc kubenswrapper[4945]: I0108 23:42:18.050291 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: i/o timeout" Jan 08 23:42:18 crc kubenswrapper[4945]: I0108 23:42:18.565935 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="71eb40d2-e481-445d-99ea-948b918b862d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: i/o timeout" Jan 08 23:42:22 crc kubenswrapper[4945]: E0108 23:42:22.458896 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" 
containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 08 23:42:22 crc kubenswrapper[4945]: E0108 23:42:22.459740 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 08 23:42:22 crc kubenswrapper[4945]: E0108 23:42:22.460424 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 08 23:42:22 crc kubenswrapper[4945]: E0108 23:42:22.460476 4945 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hfhkg" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovsdb-server" Jan 08 23:42:22 crc kubenswrapper[4945]: E0108 23:42:22.461684 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 08 23:42:22 crc kubenswrapper[4945]: E0108 23:42:22.463210 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 08 23:42:22 crc kubenswrapper[4945]: E0108 23:42:22.464637 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 08 23:42:22 crc kubenswrapper[4945]: E0108 23:42:22.464705 4945 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hfhkg" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovs-vswitchd" Jan 08 23:42:27 crc kubenswrapper[4945]: E0108 23:42:27.459064 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" 
cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 08 23:42:27 crc kubenswrapper[4945]: E0108 23:42:27.460449 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 08 23:42:27 crc kubenswrapper[4945]: E0108 23:42:27.461090 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 08 23:42:27 crc kubenswrapper[4945]: E0108 23:42:27.461154 4945 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hfhkg" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovsdb-server" Jan 08 23:42:27 crc kubenswrapper[4945]: E0108 23:42:27.461570 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 08 23:42:27 crc kubenswrapper[4945]: E0108 23:42:27.464167 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 08 23:42:27 crc kubenswrapper[4945]: E0108 23:42:27.466395 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 08 23:42:27 crc kubenswrapper[4945]: E0108 23:42:27.466455 4945 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hfhkg" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovs-vswitchd" Jan 08 23:42:30 crc kubenswrapper[4945]: I0108 23:42:30.009069 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8" Jan 08 23:42:30 crc kubenswrapper[4945]: E0108 23:42:30.009540 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 08 23:42:32 crc kubenswrapper[4945]: E0108 23:42:32.459142 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 08 23:42:32 crc kubenswrapper[4945]: E0108 23:42:32.459817 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 08 23:42:32 crc kubenswrapper[4945]: E0108 23:42:32.460218 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 08 23:42:32 crc kubenswrapper[4945]: E0108 23:42:32.460265 4945 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-hfhkg" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovsdb-server"
Jan 08 23:42:32 crc kubenswrapper[4945]: E0108 23:42:32.461504 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 08 23:42:32 crc kubenswrapper[4945]: E0108 23:42:32.463778 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 08 23:42:32 crc kubenswrapper[4945]: E0108 23:42:32.465869 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 08 23:42:32 crc kubenswrapper[4945]: E0108 23:42:32.465916 4945 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-hfhkg" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovs-vswitchd"
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.592525 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hfhkg_195bf0c5-575f-4d8e-ac9f-50d0e4c0848f/ovs-vswitchd/0.log"
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.593810 4945 generic.go:334] "Generic (PLEG): container finished" podID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d" exitCode=137
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.593859 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hfhkg" event={"ID":"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f","Type":"ContainerDied","Data":"452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d"}
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.593889 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-hfhkg" event={"ID":"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f","Type":"ContainerDied","Data":"bd80fc7604694796ec29ee56ba7f96a76e99b6653ef30dce24ec673318952c9c"}
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.593900 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd80fc7604694796ec29ee56ba7f96a76e99b6653ef30dce24ec673318952c9c"
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.600235 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-hfhkg_195bf0c5-575f-4d8e-ac9f-50d0e4c0848f/ovs-vswitchd/0.log"
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.600975 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hfhkg"
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.711126 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-log\") pod \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") "
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.711212 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt99d\" (UniqueName: \"kubernetes.io/projected/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-kube-api-access-zt99d\") pod \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") "
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.711288 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-scripts\") pod \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") "
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.711346 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-run\") pod \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") "
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.711363 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-etc-ovs\") pod \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") "
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.711405 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-lib\") pod \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\" (UID: \"195bf0c5-575f-4d8e-ac9f-50d0e4c0848f\") "
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.711666 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-lib" (OuterVolumeSpecName: "var-lib") pod "195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" (UID: "195bf0c5-575f-4d8e-ac9f-50d0e4c0848f"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.711697 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-log" (OuterVolumeSpecName: "var-log") pod "195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" (UID: "195bf0c5-575f-4d8e-ac9f-50d0e4c0848f"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.712095 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-run" (OuterVolumeSpecName: "var-run") pod "195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" (UID: "195bf0c5-575f-4d8e-ac9f-50d0e4c0848f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.712628 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" (UID: "195bf0c5-575f-4d8e-ac9f-50d0e4c0848f"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.712898 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-scripts" (OuterVolumeSpecName: "scripts") pod "195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" (UID: "195bf0c5-575f-4d8e-ac9f-50d0e4c0848f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.718166 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-kube-api-access-zt99d" (OuterVolumeSpecName: "kube-api-access-zt99d") pod "195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" (UID: "195bf0c5-575f-4d8e-ac9f-50d0e4c0848f"). InnerVolumeSpecName "kube-api-access-zt99d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.813403 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-scripts\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.813443 4945 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-run\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.813452 4945 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-etc-ovs\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.813461 4945 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-lib\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.813471 4945 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-var-log\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:35 crc kubenswrapper[4945]: I0108 23:42:35.813480 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt99d\" (UniqueName: \"kubernetes.io/projected/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f-kube-api-access-zt99d\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.314564 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.421337 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/12eb7cf8-4c67-4574-a65b-dc82c7285c68-lock\") pod \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") "
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.421407 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift\") pod \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") "
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.421432 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") "
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.421510 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj6rh\" (UniqueName: \"kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-kube-api-access-kj6rh\") pod \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") "
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.421537 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/12eb7cf8-4c67-4574-a65b-dc82c7285c68-cache\") pod \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\" (UID: \"12eb7cf8-4c67-4574-a65b-dc82c7285c68\") "
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.422464 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12eb7cf8-4c67-4574-a65b-dc82c7285c68-cache" (OuterVolumeSpecName: "cache") pod "12eb7cf8-4c67-4574-a65b-dc82c7285c68" (UID: "12eb7cf8-4c67-4574-a65b-dc82c7285c68"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.422471 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12eb7cf8-4c67-4574-a65b-dc82c7285c68-lock" (OuterVolumeSpecName: "lock") pod "12eb7cf8-4c67-4574-a65b-dc82c7285c68" (UID: "12eb7cf8-4c67-4574-a65b-dc82c7285c68"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.425672 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "swift") pod "12eb7cf8-4c67-4574-a65b-dc82c7285c68" (UID: "12eb7cf8-4c67-4574-a65b-dc82c7285c68"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.426243 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-kube-api-access-kj6rh" (OuterVolumeSpecName: "kube-api-access-kj6rh") pod "12eb7cf8-4c67-4574-a65b-dc82c7285c68" (UID: "12eb7cf8-4c67-4574-a65b-dc82c7285c68"). InnerVolumeSpecName "kube-api-access-kj6rh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.426308 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "12eb7cf8-4c67-4574-a65b-dc82c7285c68" (UID: "12eb7cf8-4c67-4574-a65b-dc82c7285c68"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.523876 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj6rh\" (UniqueName: \"kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-kube-api-access-kj6rh\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.523936 4945 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/12eb7cf8-4c67-4574-a65b-dc82c7285c68-cache\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.523956 4945 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/12eb7cf8-4c67-4574-a65b-dc82c7285c68-lock\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.523972 4945 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/12eb7cf8-4c67-4574-a65b-dc82c7285c68-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.524039 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.546376 4945 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.615232 4945 generic.go:334] "Generic (PLEG): container finished" podID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerID="b52e8edb411f92c614f8d6aa60a5cb5603fdace0b3e0a587a7550deca17aba43" exitCode=137
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.615319 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerDied","Data":"b52e8edb411f92c614f8d6aa60a5cb5603fdace0b3e0a587a7550deca17aba43"}
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.615364 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"12eb7cf8-4c67-4574-a65b-dc82c7285c68","Type":"ContainerDied","Data":"1c0f9d06b9e2dfdbc40e0336032b284914d71290109416254f93987886aeed79"}
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.615374 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-hfhkg"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.615384 4945 scope.go:117] "RemoveContainer" containerID="b52e8edb411f92c614f8d6aa60a5cb5603fdace0b3e0a587a7550deca17aba43"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.615418 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.625477 4945 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.653469 4945 scope.go:117] "RemoveContainer" containerID="186fb49fb8486d9e304c533cbcfeed9ccab02336600a6e6145f4875e589c8bcf"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.659285 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-hfhkg"]
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.668755 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-hfhkg"]
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.678392 4945 scope.go:117] "RemoveContainer" containerID="ab6603a3508efd8f122853e7de9d014c4096828cea6d7b8ddcd09680dfd09703"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.688198 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"]
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.698899 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"]
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.708361 4945 scope.go:117] "RemoveContainer" containerID="9d661110649d6472ccf5654410ba504c83e6d83fdf1b9e883d95cf3cdb834150"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.731812 4945 scope.go:117] "RemoveContainer" containerID="47712dea1c8621036b0e219b143468cdb3726cb8ddcfcb6346d22152b94ca253"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.750730 4945 scope.go:117] "RemoveContainer" containerID="653ebb4c08001d3fb7e913750104824287889d0831d244e7578069ca36f52143"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.770366 4945 scope.go:117] "RemoveContainer" containerID="ebf5f6750c5f563b79b21cfb01492b314ed6c060af246db80a7f95a0bac985e5"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.790327 4945 scope.go:117] "RemoveContainer" containerID="08d3886a5fc242d28f1801ee0870927673393d82a3aaceb6a629f6836c33bea3"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.810450 4945 scope.go:117] "RemoveContainer" containerID="0987f6c6262bc7794b82933b10e6814fc8b5c2349c5a58187b58990fed1c1067"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.829199 4945 scope.go:117] "RemoveContainer" containerID="2153a439bf6606bed6abd7d5895369e18116740cf720ca93faed1900066b52d6"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.848140 4945 scope.go:117] "RemoveContainer" containerID="d6483b681f8297d29a12085fb8f2557002353017201640c168545f0d40ae970c"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.864834 4945 scope.go:117] "RemoveContainer" containerID="d8b368fe8df8fd31efcb913ab13e714d53a8cb5596d52d646a4d27b08fc38a4e"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.892600 4945 scope.go:117] "RemoveContainer" containerID="5c6715331a0d405b0d603ba4999b7b101becbf1593c09d544be436b391b2a9fa"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.908862 4945 scope.go:117] "RemoveContainer" containerID="27f56df8852defe9ab1399615875fbe1e46ea8941ff605f08e9f61879ff1c6b9"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.925497 4945 scope.go:117] "RemoveContainer" containerID="94d63f7570f82deb07c6532e9070f8e84e1960af5420e611a448105b6a81f23c"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.940440 4945 scope.go:117] "RemoveContainer" containerID="b52e8edb411f92c614f8d6aa60a5cb5603fdace0b3e0a587a7550deca17aba43"
Jan 08 23:42:36 crc kubenswrapper[4945]: E0108 23:42:36.940829 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b52e8edb411f92c614f8d6aa60a5cb5603fdace0b3e0a587a7550deca17aba43\": container with ID starting with b52e8edb411f92c614f8d6aa60a5cb5603fdace0b3e0a587a7550deca17aba43 not found: ID does not exist" containerID="b52e8edb411f92c614f8d6aa60a5cb5603fdace0b3e0a587a7550deca17aba43"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.940865 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b52e8edb411f92c614f8d6aa60a5cb5603fdace0b3e0a587a7550deca17aba43"} err="failed to get container status \"b52e8edb411f92c614f8d6aa60a5cb5603fdace0b3e0a587a7550deca17aba43\": rpc error: code = NotFound desc = could not find container \"b52e8edb411f92c614f8d6aa60a5cb5603fdace0b3e0a587a7550deca17aba43\": container with ID starting with b52e8edb411f92c614f8d6aa60a5cb5603fdace0b3e0a587a7550deca17aba43 not found: ID does not exist"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.940893 4945 scope.go:117] "RemoveContainer" containerID="186fb49fb8486d9e304c533cbcfeed9ccab02336600a6e6145f4875e589c8bcf"
Jan 08 23:42:36 crc kubenswrapper[4945]: E0108 23:42:36.941263 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"186fb49fb8486d9e304c533cbcfeed9ccab02336600a6e6145f4875e589c8bcf\": container with ID starting with 186fb49fb8486d9e304c533cbcfeed9ccab02336600a6e6145f4875e589c8bcf not found: ID does not exist" containerID="186fb49fb8486d9e304c533cbcfeed9ccab02336600a6e6145f4875e589c8bcf"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.941302 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"186fb49fb8486d9e304c533cbcfeed9ccab02336600a6e6145f4875e589c8bcf"} err="failed to get container status \"186fb49fb8486d9e304c533cbcfeed9ccab02336600a6e6145f4875e589c8bcf\": rpc error: code = NotFound desc = could not find container \"186fb49fb8486d9e304c533cbcfeed9ccab02336600a6e6145f4875e589c8bcf\": container with ID starting with 186fb49fb8486d9e304c533cbcfeed9ccab02336600a6e6145f4875e589c8bcf not found: ID does not exist"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.941317 4945 scope.go:117] "RemoveContainer" containerID="ab6603a3508efd8f122853e7de9d014c4096828cea6d7b8ddcd09680dfd09703"
Jan 08 23:42:36 crc kubenswrapper[4945]: E0108 23:42:36.941510 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab6603a3508efd8f122853e7de9d014c4096828cea6d7b8ddcd09680dfd09703\": container with ID starting with ab6603a3508efd8f122853e7de9d014c4096828cea6d7b8ddcd09680dfd09703 not found: ID does not exist" containerID="ab6603a3508efd8f122853e7de9d014c4096828cea6d7b8ddcd09680dfd09703"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.941541 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab6603a3508efd8f122853e7de9d014c4096828cea6d7b8ddcd09680dfd09703"} err="failed to get container status \"ab6603a3508efd8f122853e7de9d014c4096828cea6d7b8ddcd09680dfd09703\": rpc error: code = NotFound desc = could not find container \"ab6603a3508efd8f122853e7de9d014c4096828cea6d7b8ddcd09680dfd09703\": container with ID starting with ab6603a3508efd8f122853e7de9d014c4096828cea6d7b8ddcd09680dfd09703 not found: ID does not exist"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.941560 4945 scope.go:117] "RemoveContainer" containerID="9d661110649d6472ccf5654410ba504c83e6d83fdf1b9e883d95cf3cdb834150"
Jan 08 23:42:36 crc kubenswrapper[4945]: E0108 23:42:36.941927 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d661110649d6472ccf5654410ba504c83e6d83fdf1b9e883d95cf3cdb834150\": container with ID starting with 9d661110649d6472ccf5654410ba504c83e6d83fdf1b9e883d95cf3cdb834150 not found: ID does not exist" containerID="9d661110649d6472ccf5654410ba504c83e6d83fdf1b9e883d95cf3cdb834150"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.941947 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d661110649d6472ccf5654410ba504c83e6d83fdf1b9e883d95cf3cdb834150"} err="failed to get container status \"9d661110649d6472ccf5654410ba504c83e6d83fdf1b9e883d95cf3cdb834150\": rpc error: code = NotFound desc = could not find container \"9d661110649d6472ccf5654410ba504c83e6d83fdf1b9e883d95cf3cdb834150\": container with ID starting with 9d661110649d6472ccf5654410ba504c83e6d83fdf1b9e883d95cf3cdb834150 not found: ID does not exist"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.941973 4945 scope.go:117] "RemoveContainer" containerID="47712dea1c8621036b0e219b143468cdb3726cb8ddcfcb6346d22152b94ca253"
Jan 08 23:42:36 crc kubenswrapper[4945]: E0108 23:42:36.942218 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47712dea1c8621036b0e219b143468cdb3726cb8ddcfcb6346d22152b94ca253\": container with ID starting with 47712dea1c8621036b0e219b143468cdb3726cb8ddcfcb6346d22152b94ca253 not found: ID does not exist" containerID="47712dea1c8621036b0e219b143468cdb3726cb8ddcfcb6346d22152b94ca253"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.942238 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47712dea1c8621036b0e219b143468cdb3726cb8ddcfcb6346d22152b94ca253"} err="failed to get container status \"47712dea1c8621036b0e219b143468cdb3726cb8ddcfcb6346d22152b94ca253\": rpc error: code = NotFound desc = could not find container \"47712dea1c8621036b0e219b143468cdb3726cb8ddcfcb6346d22152b94ca253\": container with ID starting with 47712dea1c8621036b0e219b143468cdb3726cb8ddcfcb6346d22152b94ca253 not found: ID does not exist"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.942252 4945 scope.go:117] "RemoveContainer" containerID="653ebb4c08001d3fb7e913750104824287889d0831d244e7578069ca36f52143"
Jan 08 23:42:36 crc kubenswrapper[4945]: E0108 23:42:36.942440 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"653ebb4c08001d3fb7e913750104824287889d0831d244e7578069ca36f52143\": container with ID starting with 653ebb4c08001d3fb7e913750104824287889d0831d244e7578069ca36f52143 not found: ID does not exist" containerID="653ebb4c08001d3fb7e913750104824287889d0831d244e7578069ca36f52143"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.942461 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"653ebb4c08001d3fb7e913750104824287889d0831d244e7578069ca36f52143"} err="failed to get container status \"653ebb4c08001d3fb7e913750104824287889d0831d244e7578069ca36f52143\": rpc error: code = NotFound desc = could not find container \"653ebb4c08001d3fb7e913750104824287889d0831d244e7578069ca36f52143\": container with ID starting with 653ebb4c08001d3fb7e913750104824287889d0831d244e7578069ca36f52143 not found: ID does not exist"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.942475 4945 scope.go:117] "RemoveContainer" containerID="ebf5f6750c5f563b79b21cfb01492b314ed6c060af246db80a7f95a0bac985e5"
Jan 08 23:42:36 crc kubenswrapper[4945]: E0108 23:42:36.942929 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebf5f6750c5f563b79b21cfb01492b314ed6c060af246db80a7f95a0bac985e5\": container with ID starting with ebf5f6750c5f563b79b21cfb01492b314ed6c060af246db80a7f95a0bac985e5 not found: ID does not exist" containerID="ebf5f6750c5f563b79b21cfb01492b314ed6c060af246db80a7f95a0bac985e5"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.942947 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebf5f6750c5f563b79b21cfb01492b314ed6c060af246db80a7f95a0bac985e5"} err="failed to get container status \"ebf5f6750c5f563b79b21cfb01492b314ed6c060af246db80a7f95a0bac985e5\": rpc error: code = NotFound desc = could not find container \"ebf5f6750c5f563b79b21cfb01492b314ed6c060af246db80a7f95a0bac985e5\": container with ID starting with ebf5f6750c5f563b79b21cfb01492b314ed6c060af246db80a7f95a0bac985e5 not found: ID does not exist"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.942972 4945 scope.go:117] "RemoveContainer" containerID="08d3886a5fc242d28f1801ee0870927673393d82a3aaceb6a629f6836c33bea3"
Jan 08 23:42:36 crc kubenswrapper[4945]: E0108 23:42:36.943231 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08d3886a5fc242d28f1801ee0870927673393d82a3aaceb6a629f6836c33bea3\": container with ID starting with 08d3886a5fc242d28f1801ee0870927673393d82a3aaceb6a629f6836c33bea3 not found: ID does not exist" containerID="08d3886a5fc242d28f1801ee0870927673393d82a3aaceb6a629f6836c33bea3"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.943257 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08d3886a5fc242d28f1801ee0870927673393d82a3aaceb6a629f6836c33bea3"} err="failed to get container status \"08d3886a5fc242d28f1801ee0870927673393d82a3aaceb6a629f6836c33bea3\": rpc error: code = NotFound desc = could not find container \"08d3886a5fc242d28f1801ee0870927673393d82a3aaceb6a629f6836c33bea3\": container with ID starting with 08d3886a5fc242d28f1801ee0870927673393d82a3aaceb6a629f6836c33bea3 not found: ID does not exist"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.943274 4945 scope.go:117] "RemoveContainer" containerID="0987f6c6262bc7794b82933b10e6814fc8b5c2349c5a58187b58990fed1c1067"
Jan 08 23:42:36 crc kubenswrapper[4945]: E0108 23:42:36.943509 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0987f6c6262bc7794b82933b10e6814fc8b5c2349c5a58187b58990fed1c1067\": container with ID starting with 0987f6c6262bc7794b82933b10e6814fc8b5c2349c5a58187b58990fed1c1067 not found: ID does not exist" containerID="0987f6c6262bc7794b82933b10e6814fc8b5c2349c5a58187b58990fed1c1067"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.943536 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0987f6c6262bc7794b82933b10e6814fc8b5c2349c5a58187b58990fed1c1067"} err="failed to get container status \"0987f6c6262bc7794b82933b10e6814fc8b5c2349c5a58187b58990fed1c1067\": rpc error: code = NotFound desc = could not find container \"0987f6c6262bc7794b82933b10e6814fc8b5c2349c5a58187b58990fed1c1067\": container with ID starting with 0987f6c6262bc7794b82933b10e6814fc8b5c2349c5a58187b58990fed1c1067 not found: ID does not exist"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.943555 4945 scope.go:117] "RemoveContainer" containerID="2153a439bf6606bed6abd7d5895369e18116740cf720ca93faed1900066b52d6"
Jan 08 23:42:36 crc kubenswrapper[4945]: E0108 23:42:36.943769 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2153a439bf6606bed6abd7d5895369e18116740cf720ca93faed1900066b52d6\": container with ID starting with 2153a439bf6606bed6abd7d5895369e18116740cf720ca93faed1900066b52d6 not found: ID does not exist" containerID="2153a439bf6606bed6abd7d5895369e18116740cf720ca93faed1900066b52d6"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.943807 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2153a439bf6606bed6abd7d5895369e18116740cf720ca93faed1900066b52d6"} err="failed to get container status \"2153a439bf6606bed6abd7d5895369e18116740cf720ca93faed1900066b52d6\": rpc error: code = NotFound desc = could not find container \"2153a439bf6606bed6abd7d5895369e18116740cf720ca93faed1900066b52d6\": container with ID starting with 2153a439bf6606bed6abd7d5895369e18116740cf720ca93faed1900066b52d6 not found: ID does not exist"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.943822 4945 scope.go:117] "RemoveContainer" containerID="d6483b681f8297d29a12085fb8f2557002353017201640c168545f0d40ae970c"
Jan 08 23:42:36 crc kubenswrapper[4945]: E0108 23:42:36.944114 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6483b681f8297d29a12085fb8f2557002353017201640c168545f0d40ae970c\": container with ID starting with d6483b681f8297d29a12085fb8f2557002353017201640c168545f0d40ae970c not found: ID does not exist" containerID="d6483b681f8297d29a12085fb8f2557002353017201640c168545f0d40ae970c"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.944140 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6483b681f8297d29a12085fb8f2557002353017201640c168545f0d40ae970c"} err="failed to get container status \"d6483b681f8297d29a12085fb8f2557002353017201640c168545f0d40ae970c\": rpc error: code = NotFound desc = could not find container \"d6483b681f8297d29a12085fb8f2557002353017201640c168545f0d40ae970c\": container with ID starting with d6483b681f8297d29a12085fb8f2557002353017201640c168545f0d40ae970c not found: ID does not exist"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.944157 4945 scope.go:117] "RemoveContainer" containerID="d8b368fe8df8fd31efcb913ab13e714d53a8cb5596d52d646a4d27b08fc38a4e"
Jan 08 23:42:36 crc kubenswrapper[4945]: E0108 23:42:36.944361 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8b368fe8df8fd31efcb913ab13e714d53a8cb5596d52d646a4d27b08fc38a4e\": container with ID starting with d8b368fe8df8fd31efcb913ab13e714d53a8cb5596d52d646a4d27b08fc38a4e not found: ID does not exist" containerID="d8b368fe8df8fd31efcb913ab13e714d53a8cb5596d52d646a4d27b08fc38a4e"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.944382 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8b368fe8df8fd31efcb913ab13e714d53a8cb5596d52d646a4d27b08fc38a4e"} err="failed to get container status \"d8b368fe8df8fd31efcb913ab13e714d53a8cb5596d52d646a4d27b08fc38a4e\": rpc error: code = NotFound desc = could not find container \"d8b368fe8df8fd31efcb913ab13e714d53a8cb5596d52d646a4d27b08fc38a4e\": container with ID starting with d8b368fe8df8fd31efcb913ab13e714d53a8cb5596d52d646a4d27b08fc38a4e not found: ID does not exist"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.944410 4945 scope.go:117] "RemoveContainer" containerID="5c6715331a0d405b0d603ba4999b7b101becbf1593c09d544be436b391b2a9fa"
Jan 08 23:42:36 crc kubenswrapper[4945]: E0108 23:42:36.944690 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c6715331a0d405b0d603ba4999b7b101becbf1593c09d544be436b391b2a9fa\": container with ID starting with 5c6715331a0d405b0d603ba4999b7b101becbf1593c09d544be436b391b2a9fa not found: ID does not exist" containerID="5c6715331a0d405b0d603ba4999b7b101becbf1593c09d544be436b391b2a9fa"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.944718 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6715331a0d405b0d603ba4999b7b101becbf1593c09d544be436b391b2a9fa"} err="failed to get container status \"5c6715331a0d405b0d603ba4999b7b101becbf1593c09d544be436b391b2a9fa\": rpc error: code = NotFound desc = could not find container \"5c6715331a0d405b0d603ba4999b7b101becbf1593c09d544be436b391b2a9fa\": container with ID starting with 5c6715331a0d405b0d603ba4999b7b101becbf1593c09d544be436b391b2a9fa not found: ID does not exist"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.944736 4945 scope.go:117] "RemoveContainer" containerID="27f56df8852defe9ab1399615875fbe1e46ea8941ff605f08e9f61879ff1c6b9"
Jan 08 23:42:36 crc kubenswrapper[4945]: E0108 23:42:36.945036 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27f56df8852defe9ab1399615875fbe1e46ea8941ff605f08e9f61879ff1c6b9\": container with ID starting with 27f56df8852defe9ab1399615875fbe1e46ea8941ff605f08e9f61879ff1c6b9 not found: ID does not exist" containerID="27f56df8852defe9ab1399615875fbe1e46ea8941ff605f08e9f61879ff1c6b9"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.945061 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27f56df8852defe9ab1399615875fbe1e46ea8941ff605f08e9f61879ff1c6b9"} err="failed to get container status \"27f56df8852defe9ab1399615875fbe1e46ea8941ff605f08e9f61879ff1c6b9\": rpc error: code = NotFound desc = could not find container \"27f56df8852defe9ab1399615875fbe1e46ea8941ff605f08e9f61879ff1c6b9\": container with ID starting with 27f56df8852defe9ab1399615875fbe1e46ea8941ff605f08e9f61879ff1c6b9 not found: ID does not exist"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.945091 4945 scope.go:117] "RemoveContainer" containerID="94d63f7570f82deb07c6532e9070f8e84e1960af5420e611a448105b6a81f23c"
Jan 08 23:42:36 crc kubenswrapper[4945]: E0108 23:42:36.945325 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94d63f7570f82deb07c6532e9070f8e84e1960af5420e611a448105b6a81f23c\": container with ID starting with 94d63f7570f82deb07c6532e9070f8e84e1960af5420e611a448105b6a81f23c not found: ID does not exist" containerID="94d63f7570f82deb07c6532e9070f8e84e1960af5420e611a448105b6a81f23c"
Jan 08 23:42:36 crc kubenswrapper[4945]: I0108 23:42:36.945356 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94d63f7570f82deb07c6532e9070f8e84e1960af5420e611a448105b6a81f23c"} err="failed to get container status \"94d63f7570f82deb07c6532e9070f8e84e1960af5420e611a448105b6a81f23c\": rpc error: code = NotFound desc = could not find container \"94d63f7570f82deb07c6532e9070f8e84e1960af5420e611a448105b6a81f23c\": container with ID starting with 94d63f7570f82deb07c6532e9070f8e84e1960af5420e611a448105b6a81f23c not found: ID does not exist"
Jan 08 23:42:38 crc kubenswrapper[4945]: I0108 23:42:38.012548 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" path="/var/lib/kubelet/pods/12eb7cf8-4c67-4574-a65b-dc82c7285c68/volumes"
Jan 08 23:42:38 crc kubenswrapper[4945]: I0108 23:42:38.017426 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" path="/var/lib/kubelet/pods/195bf0c5-575f-4d8e-ac9f-50d0e4c0848f/volumes"
Jan 08 23:42:43 crc kubenswrapper[4945]: I0108 23:42:43.000569 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8"
Jan 08 23:42:43 crc kubenswrapper[4945]: E0108 23:42:43.000978 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 08 23:42:57 crc kubenswrapper[4945]: I0108 23:42:57.000597 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8"
Jan 08 23:42:57 crc kubenswrapper[4945]: E0108 23:42:57.001281 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 08 23:43:10 crc kubenswrapper[4945]: I0108 23:43:10.010188 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8"
Jan 08 23:43:10 crc kubenswrapper[4945]: E0108 23:43:10.011255 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 08 23:43:24 crc kubenswrapper[4945]: I0108 23:43:24.000842 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8"
Jan 08 23:43:24 crc kubenswrapper[4945]: E0108 23:43:24.001825 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.440362 4945 scope.go:117] "RemoveContainer" containerID="78dedf13b581dfc3615764faf2c2b69436f59e182f5e1737cf0c1f55ca61fdce"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.487305 4945 scope.go:117] "RemoveContainer" containerID="8a22cd90e0b9c42a59cf3f72b9b8ccd520ce04e8c67e4b3827083fb7ebad819d"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.527759 4945 scope.go:117] "RemoveContainer" containerID="efd2bc7b040f659f7206782fec4c8107ca83b21da56fad17b1b8b0d43bd222ba"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.552236 4945 scope.go:117] "RemoveContainer" containerID="85a51471e64c9c8202d37be77ba6c842abfb6fe410974730af0489e3a2774c94"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.601029 4945 scope.go:117] "RemoveContainer" containerID="9fda8d168a8a8990734d92d727af0ec3fa5063723a1ad94364162854eb8f42c8"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.629572 4945 scope.go:117] "RemoveContainer" containerID="c31deb01de0248bacf86cd4619d48921e63f82d7fd0acb625f30b485b1583b9c"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.648210 4945 scope.go:117] "RemoveContainer" containerID="dfbb378f03def2a6587de835fb231d3a876888d83ae0b3bf039ebc2fea867371"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.680182 4945 scope.go:117] "RemoveContainer" containerID="41f18d58019ccf0e8de3811d8fb7972fe95dfcd60b5d75a34716b91d48b13c31"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.696196 4945 scope.go:117] "RemoveContainer" containerID="67b80221022e04916251de3638954da70b4cf4aa9fe4a41c3f21b1506de9ebdc"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.715285 4945 scope.go:117] "RemoveContainer" containerID="616b374a5eae55d59d9108e2215178983988ca2e990a507b0a9a4f74f2df67bc"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.735134 4945 scope.go:117] "RemoveContainer" containerID="78d6d25e434b638f130918731f0903e985aeff175b1cf38b27837112f11da21d"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.754215 4945 scope.go:117] "RemoveContainer" containerID="032c7dd5d1e5a08219574c8bc61072aa124998bdab8472e816f29e879abaab35"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.772329 4945 scope.go:117] "RemoveContainer" containerID="d9bcffeedebb5f2415e6032ae74495e88aa2cce79304fc735c99dd55983ce56a"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.796690 4945 scope.go:117] "RemoveContainer" containerID="452f5d461d7f4fdb72de8ff9fb188e0fb395fc2f8752ec3b6e881255f745ad8d"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.814104 4945 scope.go:117] "RemoveContainer" containerID="e899cbf27092607cae12a7dc93241e5ed129cce95626267ff8c2d829e7a8cddb"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.843811 4945 scope.go:117] "RemoveContainer" containerID="2aeb79cff0dbb2629fd63ffcddab5f0c962f21df339af5b74c3440576cf61f80"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.863696 4945 scope.go:117] "RemoveContainer" containerID="d5dfa1736e28bd5c1e8e475e83ae1a58305fb151056fd8aa88095ae069f33e9f"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.881455 4945 scope.go:117] "RemoveContainer" containerID="2614a5ec47ba81e99516320fdc35932c6d008ee50a312bfe0b63f7e25b3f97d5"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.897736 4945 scope.go:117] "RemoveContainer" containerID="6864af6cbe8f647a9f0c28948b92d59f1746e14059ad5a2b80f2933cb34799cc"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.916324 4945 scope.go:117] "RemoveContainer" containerID="d2e978ab712082f3949094db7841350523945e36f34a236aa78d8eccecab2130"
Jan 08 23:43:32 crc kubenswrapper[4945]: I0108 23:43:32.988905 4945 scope.go:117] "RemoveContainer" containerID="7ba4688fc263c1aed714211d1515d8de9f81d9d620cdc2a0a4a94d12379599b0"
Jan 08 23:43:33 crc kubenswrapper[4945]: I0108 23:43:33.056532 4945 scope.go:117] "RemoveContainer" containerID="172d50744242a84f78b56e1d2950591ec8b0ca4bdf178eafbed1aa93b04df962"
Jan 08 23:43:33 crc kubenswrapper[4945]: I0108 23:43:33.078683 4945 scope.go:117] "RemoveContainer" containerID="6ddbf977bb97c0edecd692a6bbf6552971918da00054953461c2a54d0288f64a"
Jan 08 23:43:39 crc kubenswrapper[4945]: I0108 23:43:39.000427 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8"
Jan 08 23:43:39 crc kubenswrapper[4945]: E0108 23:43:39.001295 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 08 23:43:53 crc kubenswrapper[4945]: I0108 23:43:53.001434 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8"
Jan 08 23:43:53 crc kubenswrapper[4945]: E0108 23:43:53.004927 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 08 23:44:08 crc kubenswrapper[4945]: I0108 23:44:08.000829 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8"
Jan 08 23:44:08 crc kubenswrapper[4945]: E0108 23:44:08.003140 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 08 23:44:22 crc kubenswrapper[4945]: I0108 23:44:22.000704 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8"
Jan 08 23:44:22 crc kubenswrapper[4945]: E0108 23:44:22.001944 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 08 23:44:33 crc kubenswrapper[4945]: I0108 23:44:33.432711 4945 scope.go:117] "RemoveContainer" containerID="0f812bd914d4fa33163b30da0f6e0fb39998a79862f6047a13bf3435d8c67e95"
Jan 08 23:44:33 crc kubenswrapper[4945]: I0108 23:44:33.480614 4945 scope.go:117] "RemoveContainer" containerID="11c4411a90c0748f03440e5a2641a144c020846bb16a39aafa38e17332ca7787"
Jan 08 23:44:33 crc kubenswrapper[4945]: I0108 23:44:33.518218 4945 scope.go:117] "RemoveContainer" containerID="6f22349d33e5ca7679067ee358e025809e2066d028072d0e655a6e5e5971a3f7"
Jan 08 23:44:33 crc kubenswrapper[4945]: I0108 23:44:33.542409 4945 scope.go:117] "RemoveContainer" containerID="6d3373416d06d09c31beb8c4a908f1408aa4767816ccbdd4c183de5420b32d29"
Jan 08 23:44:33 crc kubenswrapper[4945]: I0108 23:44:33.584634 4945 scope.go:117] "RemoveContainer" containerID="7f05985840a6a4eb2c433f2c9e15631157f43722ad528cbef5036e64fa0f8ae3"
Jan 08 23:44:33 crc kubenswrapper[4945]: I0108 23:44:33.601393 4945 scope.go:117] "RemoveContainer" containerID="507ed9d48a17db04852cb6c40036d1609461dd561a6a0987bc82a1477858fb25"
Jan 08 23:44:33 crc kubenswrapper[4945]: I0108 23:44:33.637518 4945 scope.go:117] "RemoveContainer" containerID="33a4af0b086fbfcabe69a8e7c7988dfc06b32af17d4f4a43c92a55eb98b2ab95"
Jan 08 23:44:33 crc kubenswrapper[4945]: I0108 23:44:33.667209 4945 scope.go:117] "RemoveContainer" containerID="aa21910ea6f2611e24574bdfa939c904fc8185af95c178b62d819c74b5854c82"
Jan 08 23:44:33 crc kubenswrapper[4945]: I0108 23:44:33.710260 4945 scope.go:117] "RemoveContainer" containerID="78612bfad4ece0fb4a3a9659acbaf6e6379b58f80b5f8ccc1a0f0574671af085"
Jan 08 23:44:33 crc kubenswrapper[4945]: I0108 23:44:33.731924 4945 scope.go:117] "RemoveContainer" containerID="79c3e5ad5b8d05cf65c473b2c9291f7836e5f788b3ab861c6aaa651a1b04f94d"
Jan 08 23:44:33 crc kubenswrapper[4945]: I0108 23:44:33.762677 4945 scope.go:117] "RemoveContainer" containerID="08ca0607ce584cf8045417f996764ffc392d3261d41883a5078094c48ae1c950"
Jan 08 23:44:34 crc kubenswrapper[4945]: I0108 23:44:34.000822 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8"
Jan 08 23:44:34 crc kubenswrapper[4945]: E0108 23:44:34.001145 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 08 23:44:45 crc kubenswrapper[4945]: I0108 23:44:45.001283 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8"
Jan 08 23:44:45 crc kubenswrapper[4945]: E0108 23:44:45.002320 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 08 23:44:57 crc kubenswrapper[4945]: I0108 23:44:57.000472 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8"
Jan 08 23:44:57 crc kubenswrapper[4945]: E0108 23:44:57.001147 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.152407 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt"]
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153033 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8f132e-3fda-4a38-8416-1055a62e7552" containerName="nova-cell1-conductor-conductor"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153046 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8f132e-3fda-4a38-8416-1055a62e7552" containerName="nova-cell1-conductor-conductor"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153057 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="account-server"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153063 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="account-server"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153071 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="container-updater"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153078 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="container-updater"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153085 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eb23b1e-c7b1-465a-a91c-6042942e604a" containerName="glance-httpd"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153090 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb23b1e-c7b1-465a-a91c-6042942e604a" containerName="glance-httpd"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153106 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eb23b1e-c7b1-465a-a91c-6042942e604a" containerName="glance-log"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153113 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb23b1e-c7b1-465a-a91c-6042942e604a" containerName="glance-log"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153120 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovsdb-server"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153127 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovsdb-server"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153134 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eefc7456-a6c7-4442-aa3a-370a1f9b01fa" containerName="ovn-northd"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153203 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="eefc7456-a6c7-4442-aa3a-370a1f9b01fa" containerName="ovn-northd"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153218 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9674718-110d-4241-a199-9663979defde" containerName="ceilometer-notification-agent"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153226 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9674718-110d-4241-a199-9663979defde" containerName="ceilometer-notification-agent"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153275 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" containerName="glance-log"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153284 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" containerName="glance-log"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153296 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-auditor"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153304 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-auditor"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153316 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-replicator"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153323 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-replicator"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153338 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f1760-d296-46a1-9ec8-cb64e543897c" containerName="ovn-controller"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153345 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f1760-d296-46a1-9ec8-cb64e543897c" containerName="ovn-controller"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153352 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb334a7-9a7f-4e20-9dc8-092b9372bb10" containerName="nova-api-api"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153361 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb334a7-9a7f-4e20-9dc8-092b9372bb10" containerName="nova-api-api"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153372 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="container-server"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153379 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="container-server"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153393 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f823122-da64-4ac4-aa14-96bc8f2f9c1c" containerName="nova-scheduler-scheduler"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153401 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f823122-da64-4ac4-aa14-96bc8f2f9c1c" containerName="nova-scheduler-scheduler"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153411 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17371a82-14e3-4830-b99f-a2b46b4f4366" containerName="mysql-bootstrap"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153418 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="17371a82-14e3-4830-b99f-a2b46b4f4366" containerName="mysql-bootstrap"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153425 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17371a82-14e3-4830-b99f-a2b46b4f4366" containerName="galera"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153433 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="17371a82-14e3-4830-b99f-a2b46b4f4366" containerName="galera"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153445 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="swift-recon-cron"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153452 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="swift-recon-cron"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153461 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="rsync"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153466 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="rsync"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153473 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37125f43-8fb6-4625-a260-8d43cdbe167a" containerName="nova-cell0-conductor-conductor"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153480 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="37125f43-8fb6-4625-a260-8d43cdbe167a" containerName="nova-cell0-conductor-conductor"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153493 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovs-vswitchd"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153501 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovs-vswitchd"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153511 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="account-auditor"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153518 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="account-auditor"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153527 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227e0b3d-d5ba-4265-a7b9-0419deb61603" containerName="barbican-api-log"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153533 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="227e0b3d-d5ba-4265-a7b9-0419deb61603" containerName="barbican-api-log"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153545 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="container-replicator"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153552 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="container-replicator"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153568 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea1eec40-294d-4749-bdb2-678289eeb815" containerName="nova-metadata-log"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153580 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea1eec40-294d-4749-bdb2-678289eeb815" containerName="nova-metadata-log"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153593 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71eb40d2-e481-445d-99ea-948b918b862d" containerName="rabbitmq"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153600 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="71eb40d2-e481-445d-99ea-948b918b862d" containerName="rabbitmq"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153609 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9674718-110d-4241-a199-9663979defde" containerName="ceilometer-central-agent"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153615 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9674718-110d-4241-a199-9663979defde" containerName="ceilometer-central-agent"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153623 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" containerName="glance-httpd"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153629 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" containerName="glance-httpd"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153671 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9674718-110d-4241-a199-9663979defde" containerName="proxy-httpd"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153677 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9674718-110d-4241-a199-9663979defde" containerName="proxy-httpd"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153688 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-expirer"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153694 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-expirer"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153703 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a" containerName="probe"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153708 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a" containerName="probe"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153720 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="account-replicator"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153725 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="account-replicator"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153734 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="842a2e91-c7e4-4435-aa81-c1a888cf6a51" containerName="keystone-api"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153740 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="842a2e91-c7e4-4435-aa81-c1a888cf6a51" containerName="keystone-api"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153748 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-updater"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153754 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-updater"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153765 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9574582-49aa-48ec-8b43-bc55ed78a3d1" containerName="memcached"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153770 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9574582-49aa-48ec-8b43-bc55ed78a3d1" containerName="memcached"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153779 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="container-auditor"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153785 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="container-auditor"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153796 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovsdb-server-init"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153802 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovsdb-server-init"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153811 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a2b873-3034-4b9f-9daf-81db6749d45f" containerName="cinder-api-log"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153817 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a2b873-3034-4b9f-9daf-81db6749d45f" containerName="cinder-api-log"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153823 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a2b873-3034-4b9f-9daf-81db6749d45f" containerName="cinder-api"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153829 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a2b873-3034-4b9f-9daf-81db6749d45f" containerName="cinder-api"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153839 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9674718-110d-4241-a199-9663979defde" containerName="sg-core"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153848 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9674718-110d-4241-a199-9663979defde" containerName="sg-core"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153860 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" containerName="rabbitmq"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153865 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" containerName="rabbitmq"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153872 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="account-reaper"
Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153878 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="account-reaper"
Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153885 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a"
containerName="cinder-scheduler" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153891 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a" containerName="cinder-scheduler" Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153902 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eefc7456-a6c7-4442-aa3a-370a1f9b01fa" containerName="openstack-network-exporter" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153910 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="eefc7456-a6c7-4442-aa3a-370a1f9b01fa" containerName="openstack-network-exporter" Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153922 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4a4044-c9b6-49c9-98ed-446af4a3fe1f" containerName="kube-state-metrics" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153930 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4a4044-c9b6-49c9-98ed-446af4a3fe1f" containerName="kube-state-metrics" Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153941 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" containerName="setup-container" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153949 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" containerName="setup-container" Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153957 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71eb40d2-e481-445d-99ea-948b918b862d" containerName="setup-container" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153963 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="71eb40d2-e481-445d-99ea-948b918b862d" containerName="setup-container" Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.153974 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227e0b3d-d5ba-4265-a7b9-0419deb61603" containerName="barbican-api" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.153981 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="227e0b3d-d5ba-4265-a7b9-0419deb61603" containerName="barbican-api" Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.154009 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-server" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154017 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-server" Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.154027 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b682d87-6d87-4d38-b1c5-a5e4c3664472" containerName="neutron-httpd" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154032 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b682d87-6d87-4d38-b1c5-a5e4c3664472" containerName="neutron-httpd" Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.154038 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b682d87-6d87-4d38-b1c5-a5e4c3664472" containerName="neutron-api" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154044 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b682d87-6d87-4d38-b1c5-a5e4c3664472" containerName="neutron-api" Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.154054 4945 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="adb334a7-9a7f-4e20-9dc8-092b9372bb10" containerName="nova-api-log" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154059 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb334a7-9a7f-4e20-9dc8-092b9372bb10" containerName="nova-api-log" Jan 08 23:45:00 crc kubenswrapper[4945]: E0108 23:45:00.154069 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea1eec40-294d-4749-bdb2-678289eeb815" containerName="nova-metadata-metadata" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154075 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea1eec40-294d-4749-bdb2-678289eeb815" containerName="nova-metadata-metadata" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154208 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="container-auditor" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154222 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a" containerName="probe" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154230 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9674718-110d-4241-a199-9663979defde" containerName="ceilometer-central-agent" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154238 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e920b84a-bd1b-4649-9cc0-e3b239d6a5b9" containerName="rabbitmq" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154245 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="842a2e91-c7e4-4435-aa81-c1a888cf6a51" containerName="keystone-api" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154254 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b8f132e-3fda-4a38-8416-1055a62e7552" containerName="nova-cell1-conductor-conductor" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154263 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" containerName="glance-httpd" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154271 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-server" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154277 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-replicator" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154287 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-updater" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154295 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb334a7-9a7f-4e20-9dc8-092b9372bb10" containerName="nova-api-api" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154302 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-expirer" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154309 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="227e0b3d-d5ba-4265-a7b9-0419deb61603" containerName="barbican-api-log" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154318 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="71eb40d2-e481-445d-99ea-948b918b862d" containerName="rabbitmq" Jan 08 23:45:00 crc kubenswrapper[4945]: 
I0108 23:45:00.154327 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb23b1e-c7b1-465a-a91c-6042942e604a" containerName="glance-log" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154335 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b682d87-6d87-4d38-b1c5-a5e4c3664472" containerName="neutron-api" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154342 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb334a7-9a7f-4e20-9dc8-092b9372bb10" containerName="nova-api-log" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154348 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="account-reaper" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154358 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="account-auditor" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154366 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a2b873-3034-4b9f-9daf-81db6749d45f" containerName="cinder-api-log" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154374 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb23b1e-c7b1-465a-a91c-6042942e604a" containerName="glance-httpd" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154385 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="eefc7456-a6c7-4442-aa3a-370a1f9b01fa" containerName="openstack-network-exporter" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154393 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb7afdb8-52e2-4078-8a6e-5f1fea2acd59" containerName="glance-log" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154401 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovs-vswitchd" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154408 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="account-replicator" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154416 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="eefc7456-a6c7-4442-aa3a-370a1f9b01fa" containerName="ovn-northd" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154425 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="rsync" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154432 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9674718-110d-4241-a199-9663979defde" containerName="sg-core" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154442 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b4a4044-c9b6-49c9-98ed-446af4a3fe1f" containerName="kube-state-metrics" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154451 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9674718-110d-4241-a199-9663979defde" containerName="proxy-httpd" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154461 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f823122-da64-4ac4-aa14-96bc8f2f9c1c" containerName="nova-scheduler-scheduler" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154471 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9674718-110d-4241-a199-9663979defde" 
containerName="ceilometer-notification-agent" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154478 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea1eec40-294d-4749-bdb2-678289eeb815" containerName="nova-metadata-log" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154488 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="195bf0c5-575f-4d8e-ac9f-50d0e4c0848f" containerName="ovsdb-server" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154493 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="object-auditor" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154500 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="container-replicator" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154510 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea1eec40-294d-4749-bdb2-678289eeb815" containerName="nova-metadata-metadata" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154516 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9574582-49aa-48ec-8b43-bc55ed78a3d1" containerName="memcached" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154522 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="17371a82-14e3-4830-b99f-a2b46b4f4366" containerName="galera" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154529 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b4fe3f-7a34-4d6f-ba54-88ccea77b97a" containerName="cinder-scheduler" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154538 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f1760-d296-46a1-9ec8-cb64e543897c" containerName="ovn-controller" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154544 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="container-updater" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154552 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a2b873-3034-4b9f-9daf-81db6749d45f" containerName="cinder-api" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154561 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="container-server" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154569 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b682d87-6d87-4d38-b1c5-a5e4c3664472" containerName="neutron-httpd" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154580 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="account-server" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154589 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eb7cf8-4c67-4574-a65b-dc82c7285c68" containerName="swift-recon-cron" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154601 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="227e0b3d-d5ba-4265-a7b9-0419deb61603" containerName="barbican-api" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.154609 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="37125f43-8fb6-4625-a260-8d43cdbe167a" containerName="nova-cell0-conductor-conductor" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 
23:45:00.155456 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.157364 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.157684 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.165985 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt"] Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.226175 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-secret-volume\") pod \"collect-profiles-29465265-6qmnt\" (UID: \"53a458a2-0117-4cc1-ac1d-68ba7e11e19d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.226223 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z6bv\" (UniqueName: \"kubernetes.io/projected/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-kube-api-access-7z6bv\") pod \"collect-profiles-29465265-6qmnt\" (UID: \"53a458a2-0117-4cc1-ac1d-68ba7e11e19d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.226315 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-config-volume\") pod \"collect-profiles-29465265-6qmnt\" (UID: \"53a458a2-0117-4cc1-ac1d-68ba7e11e19d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.327808 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-secret-volume\") pod \"collect-profiles-29465265-6qmnt\" (UID: \"53a458a2-0117-4cc1-ac1d-68ba7e11e19d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.327859 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z6bv\" (UniqueName: \"kubernetes.io/projected/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-kube-api-access-7z6bv\") pod \"collect-profiles-29465265-6qmnt\" (UID: \"53a458a2-0117-4cc1-ac1d-68ba7e11e19d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.327923 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-config-volume\") pod \"collect-profiles-29465265-6qmnt\" (UID: \"53a458a2-0117-4cc1-ac1d-68ba7e11e19d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.328964 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-config-volume\") pod \"collect-profiles-29465265-6qmnt\" (UID: \"53a458a2-0117-4cc1-ac1d-68ba7e11e19d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.334322 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-secret-volume\") pod \"collect-profiles-29465265-6qmnt\" (UID: \"53a458a2-0117-4cc1-ac1d-68ba7e11e19d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.346694 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z6bv\" (UniqueName: \"kubernetes.io/projected/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-kube-api-access-7z6bv\") pod \"collect-profiles-29465265-6qmnt\" (UID: \"53a458a2-0117-4cc1-ac1d-68ba7e11e19d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.520275 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt" Jan 08 23:45:00 crc kubenswrapper[4945]: I0108 23:45:00.935328 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt"] Jan 08 23:45:00 crc kubenswrapper[4945]: W0108 23:45:00.939589 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53a458a2_0117_4cc1_ac1d_68ba7e11e19d.slice/crio-fa70a7725a1df176dce84f83bee7d09aff83828a9ec78779e025c1a421d0aa74 WatchSource:0}: Error finding container fa70a7725a1df176dce84f83bee7d09aff83828a9ec78779e025c1a421d0aa74: Status 404 returned error can't find the container with id fa70a7725a1df176dce84f83bee7d09aff83828a9ec78779e025c1a421d0aa74 Jan 08 23:45:01 crc kubenswrapper[4945]: I0108 23:45:01.276554 4945 generic.go:334] "Generic (PLEG): container finished" podID="53a458a2-0117-4cc1-ac1d-68ba7e11e19d" containerID="e404b5a4fae8d88fbac084072f791afe4130b3b03fd5b8f30fffbf1a4d31c699" exitCode=0 Jan 08 23:45:01 crc kubenswrapper[4945]: I0108 23:45:01.276614 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt" event={"ID":"53a458a2-0117-4cc1-ac1d-68ba7e11e19d","Type":"ContainerDied","Data":"e404b5a4fae8d88fbac084072f791afe4130b3b03fd5b8f30fffbf1a4d31c699"} Jan 08 23:45:01 crc kubenswrapper[4945]: I0108 23:45:01.276642 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt" event={"ID":"53a458a2-0117-4cc1-ac1d-68ba7e11e19d","Type":"ContainerStarted","Data":"fa70a7725a1df176dce84f83bee7d09aff83828a9ec78779e025c1a421d0aa74"} Jan 08 23:45:02 crc kubenswrapper[4945]: I0108 23:45:02.570794 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt" Jan 08 23:45:02 crc kubenswrapper[4945]: I0108 23:45:02.765604 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z6bv\" (UniqueName: \"kubernetes.io/projected/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-kube-api-access-7z6bv\") pod \"53a458a2-0117-4cc1-ac1d-68ba7e11e19d\" (UID: \"53a458a2-0117-4cc1-ac1d-68ba7e11e19d\") " Jan 08 23:45:02 crc kubenswrapper[4945]: I0108 23:45:02.765791 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-config-volume\") pod \"53a458a2-0117-4cc1-ac1d-68ba7e11e19d\" (UID: \"53a458a2-0117-4cc1-ac1d-68ba7e11e19d\") " Jan 08 23:45:02 crc kubenswrapper[4945]: I0108 23:45:02.765856 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-secret-volume\") pod \"53a458a2-0117-4cc1-ac1d-68ba7e11e19d\" (UID: \"53a458a2-0117-4cc1-ac1d-68ba7e11e19d\") " Jan 08 23:45:02 crc kubenswrapper[4945]: I0108 23:45:02.766666 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-config-volume" (OuterVolumeSpecName: "config-volume") pod "53a458a2-0117-4cc1-ac1d-68ba7e11e19d" (UID: "53a458a2-0117-4cc1-ac1d-68ba7e11e19d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 08 23:45:02 crc kubenswrapper[4945]: I0108 23:45:02.771909 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "53a458a2-0117-4cc1-ac1d-68ba7e11e19d" (UID: "53a458a2-0117-4cc1-ac1d-68ba7e11e19d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 08 23:45:02 crc kubenswrapper[4945]: I0108 23:45:02.772869 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-kube-api-access-7z6bv" (OuterVolumeSpecName: "kube-api-access-7z6bv") pod "53a458a2-0117-4cc1-ac1d-68ba7e11e19d" (UID: "53a458a2-0117-4cc1-ac1d-68ba7e11e19d"). InnerVolumeSpecName "kube-api-access-7z6bv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:45:02 crc kubenswrapper[4945]: I0108 23:45:02.868054 4945 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 08 23:45:02 crc kubenswrapper[4945]: I0108 23:45:02.868080 4945 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 08 23:45:02 crc kubenswrapper[4945]: I0108 23:45:02.868090 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z6bv\" (UniqueName: \"kubernetes.io/projected/53a458a2-0117-4cc1-ac1d-68ba7e11e19d-kube-api-access-7z6bv\") on node \"crc\" DevicePath \"\"" Jan 08 23:45:03 crc kubenswrapper[4945]: I0108 23:45:03.291980 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt" event={"ID":"53a458a2-0117-4cc1-ac1d-68ba7e11e19d","Type":"ContainerDied","Data":"fa70a7725a1df176dce84f83bee7d09aff83828a9ec78779e025c1a421d0aa74"} Jan 08 23:45:03 crc kubenswrapper[4945]: I0108 23:45:03.292330 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa70a7725a1df176dce84f83bee7d09aff83828a9ec78779e025c1a421d0aa74" Jan 08 23:45:03 crc kubenswrapper[4945]: I0108 23:45:03.292048 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt" Jan 08 23:45:08 crc kubenswrapper[4945]: I0108 23:45:08.000720 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8" Jan 08 23:45:08 crc kubenswrapper[4945]: E0108 23:45:08.001330 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:45:20 crc kubenswrapper[4945]: I0108 23:45:20.004018 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8" Jan 08 23:45:20 crc kubenswrapper[4945]: E0108 23:45:20.005278 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:45:33 crc kubenswrapper[4945]: I0108 23:45:33.894947 4945 scope.go:117] "RemoveContainer" containerID="d8bca21aed5c1e2f7ef4338d1b56a3c8c184051de9f2c3af10a793a5c16ee47e" Jan 08 23:45:33 crc kubenswrapper[4945]: I0108 23:45:33.930457 4945 scope.go:117] "RemoveContainer" containerID="654dfd0dae13b6eca5059e86d0a2d97564f19b2b01579e9aca119bd08b290b5c" Jan 08 23:45:33 crc kubenswrapper[4945]: I0108 23:45:33.946479 4945 scope.go:117] "RemoveContainer" containerID="a3b7d465ce7932bc7a2c58b3a8d58a6d40a84dc47fe41a15fffd4d2e75e42570" Jan 08 23:45:33 crc 
kubenswrapper[4945]: I0108 23:45:33.964779 4945 scope.go:117] "RemoveContainer" containerID="f67a08265ea88bae6d39299224d2a2604f867f86d99af833fa2c5deefc166ff7" Jan 08 23:45:33 crc kubenswrapper[4945]: I0108 23:45:33.987177 4945 scope.go:117] "RemoveContainer" containerID="a7c4378b65d96c48e0e459a696a2e715a7cd896621e7b23334ef422367eb67d9" Jan 08 23:45:34 crc kubenswrapper[4945]: I0108 23:45:34.008294 4945 scope.go:117] "RemoveContainer" containerID="617e103fd47ab70027896060185afd85b04295da494b0f4b35c58a7ba8a8d5e8" Jan 08 23:45:34 crc kubenswrapper[4945]: I0108 23:45:34.029551 4945 scope.go:117] "RemoveContainer" containerID="30c511260ed86f2ae0035a3b1b0f8391850c39679881c6648ca7e4cd3152fef5" Jan 08 23:45:34 crc kubenswrapper[4945]: I0108 23:45:34.045310 4945 scope.go:117] "RemoveContainer" containerID="6f17b0b722fa6f30d2fca7b5200e82aaf9a3bc45f98bb1e38f697f910cb272f7" Jan 08 23:45:34 crc kubenswrapper[4945]: I0108 23:45:34.068313 4945 scope.go:117] "RemoveContainer" containerID="51776af5292a91f8828255736d607dd27da9610cd5dd1bc6a3f0b5f8b09b65f8" Jan 08 23:45:34 crc kubenswrapper[4945]: I0108 23:45:34.092381 4945 scope.go:117] "RemoveContainer" containerID="e9530af93f88c2e40ecb927cafcfe87968eb9889ade8ee091f469a28c020a331" Jan 08 23:45:34 crc kubenswrapper[4945]: I0108 23:45:34.114560 4945 scope.go:117] "RemoveContainer" containerID="15641b43f05f79e74699bfe52baf19315b239ba529af80999ae5807b2745e479" Jan 08 23:45:34 crc kubenswrapper[4945]: I0108 23:45:34.130080 4945 scope.go:117] "RemoveContainer" containerID="ebe090f7ada13f633224e0bbcee404b72c09adfb8c09163bb99a6a8d5ca17ea4" Jan 08 23:45:35 crc kubenswrapper[4945]: I0108 23:45:35.000618 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8" Jan 08 23:45:35 crc kubenswrapper[4945]: E0108 23:45:35.000859 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:45:46 crc kubenswrapper[4945]: I0108 23:45:46.000776 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8" Jan 08 23:45:46 crc kubenswrapper[4945]: E0108 23:45:46.001618 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:46:00 crc kubenswrapper[4945]: I0108 23:46:00.005519 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8" Jan 08 23:46:00 crc kubenswrapper[4945]: E0108 23:46:00.006841 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:46:14 crc kubenswrapper[4945]: I0108 23:46:14.002326 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8" Jan 08 23:46:14 crc kubenswrapper[4945]: I0108 23:46:14.853619 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"2aa3b6ab2306a98ace51571c2901bf3e4aeb0a371e3372d9716e50ba1bf0a86a"} Jan 08 23:46:34 crc kubenswrapper[4945]: I0108 23:46:34.241200 4945 scope.go:117] "RemoveContainer" containerID="316a5588848ace74d609cd5f706561fc84f271353677b57401496daabd9df679" Jan 08 23:46:34 crc kubenswrapper[4945]: I0108 23:46:34.273699 4945 scope.go:117] "RemoveContainer" containerID="16666bd37cccb6c678f4cdaa8b7f7e672a90275083cdba3da149f39b5238de37" Jan 08 23:46:34 crc kubenswrapper[4945]: I0108 23:46:34.289906 4945 scope.go:117] "RemoveContainer" containerID="db272f3a2cb229ff56011cfe4fa4d375bf8c3d30ed815cac7cd6b0c9c56a3120" Jan 08 23:46:34 crc kubenswrapper[4945]: I0108 23:46:34.321021 4945 scope.go:117] "RemoveContainer" containerID="a23c2fb78905e6eefe47c919eaab8d235a726c00827e870bd1d89890f26ddd78" Jan 08 23:46:34 crc kubenswrapper[4945]: I0108 23:46:34.364554 4945 scope.go:117] "RemoveContainer" containerID="a01a9cdaf1e6dcbf50fb7c8fdd46ee8450d6a7d635fd11e06d3b74c301d9e2af" Jan 08 23:46:34 crc kubenswrapper[4945]: I0108 23:46:34.378854 4945 scope.go:117] "RemoveContainer" containerID="a5a4dac963c906e9eb3223e2b1f096d4341464484bbc31fe53f0dde93c50bf12" Jan 08 23:46:34 crc kubenswrapper[4945]: I0108 23:46:34.394402 4945 scope.go:117] "RemoveContainer" containerID="1f410de8214b57d3744bd57097664e1e1a6c42f3f45a62aa5538c148ff68273e" Jan 08 23:47:34 crc kubenswrapper[4945]: I0108 23:47:34.488782 4945 scope.go:117] "RemoveContainer" containerID="7ecbd9fcee16f88440831e6610cfef4e63350db971f4e6808a4543a0b4ed747b" Jan 08 23:47:34 crc kubenswrapper[4945]: I0108 23:47:34.508552 4945 scope.go:117] "RemoveContainer" containerID="0e98c349bd4bb46a725fbf926d7b62bd3253a7708fe210e97d08c5303e0c8116" Jan 08 23:47:34 crc kubenswrapper[4945]: I0108 23:47:34.524374 4945 scope.go:117] "RemoveContainer" containerID="8082cabfd87114362cc4ea66d61572fa084a1efc7431b0bed7df7375b3fc0b20" Jan 08 23:47:34 crc kubenswrapper[4945]: I0108 23:47:34.569578 4945 scope.go:117] "RemoveContainer" containerID="360c56ebe46377c61cd806e3349a6a48d19f706ccb293962a2db56915941150d" Jan 08 23:47:34 crc kubenswrapper[4945]: I0108 23:47:34.585980 4945 scope.go:117] "RemoveContainer" containerID="18f1d04e9c438af26a72f40f614d629c0cba6569447b80e683d6b701672e3900" Jan 08 23:48:21 crc kubenswrapper[4945]: I0108 23:48:21.333416 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nsbxr"] Jan 08 23:48:21 crc kubenswrapper[4945]: E0108 23:48:21.335635 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a458a2-0117-4cc1-ac1d-68ba7e11e19d" containerName="collect-profiles" Jan 08 23:48:21 crc kubenswrapper[4945]: I0108 23:48:21.335663 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a458a2-0117-4cc1-ac1d-68ba7e11e19d" containerName="collect-profiles" Jan 08 23:48:21 crc kubenswrapper[4945]: I0108 23:48:21.337620 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a458a2-0117-4cc1-ac1d-68ba7e11e19d" containerName="collect-profiles" 
Jan 08 23:48:21 crc kubenswrapper[4945]: I0108 23:48:21.339074 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nsbxr" Jan 08 23:48:21 crc kubenswrapper[4945]: I0108 23:48:21.342401 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nsbxr"] Jan 08 23:48:21 crc kubenswrapper[4945]: I0108 23:48:21.459300 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c054138b-3f4b-49d3-8c3e-214d85f0f349-catalog-content\") pod \"community-operators-nsbxr\" (UID: \"c054138b-3f4b-49d3-8c3e-214d85f0f349\") " pod="openshift-marketplace/community-operators-nsbxr" Jan 08 23:48:21 crc kubenswrapper[4945]: I0108 23:48:21.459446 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5frlj\" (UniqueName: \"kubernetes.io/projected/c054138b-3f4b-49d3-8c3e-214d85f0f349-kube-api-access-5frlj\") pod \"community-operators-nsbxr\" (UID: \"c054138b-3f4b-49d3-8c3e-214d85f0f349\") " pod="openshift-marketplace/community-operators-nsbxr" Jan 08 23:48:21 crc kubenswrapper[4945]: I0108 23:48:21.459571 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c054138b-3f4b-49d3-8c3e-214d85f0f349-utilities\") pod \"community-operators-nsbxr\" (UID: \"c054138b-3f4b-49d3-8c3e-214d85f0f349\") " pod="openshift-marketplace/community-operators-nsbxr" Jan 08 23:48:21 crc kubenswrapper[4945]: I0108 23:48:21.561341 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c054138b-3f4b-49d3-8c3e-214d85f0f349-catalog-content\") pod \"community-operators-nsbxr\" (UID: \"c054138b-3f4b-49d3-8c3e-214d85f0f349\") " pod="openshift-marketplace/community-operators-nsbxr" Jan 08 23:48:21 crc kubenswrapper[4945]: I0108 23:48:21.561704 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5frlj\" (UniqueName: \"kubernetes.io/projected/c054138b-3f4b-49d3-8c3e-214d85f0f349-kube-api-access-5frlj\") pod \"community-operators-nsbxr\" (UID: \"c054138b-3f4b-49d3-8c3e-214d85f0f349\") " pod="openshift-marketplace/community-operators-nsbxr" Jan 08 23:48:21 crc kubenswrapper[4945]: I0108 23:48:21.561761 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c054138b-3f4b-49d3-8c3e-214d85f0f349-utilities\") pod \"community-operators-nsbxr\" (UID: \"c054138b-3f4b-49d3-8c3e-214d85f0f349\") " pod="openshift-marketplace/community-operators-nsbxr" Jan 08 23:48:21 crc kubenswrapper[4945]: I0108 23:48:21.561949 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c054138b-3f4b-49d3-8c3e-214d85f0f349-catalog-content\") pod \"community-operators-nsbxr\" (UID: \"c054138b-3f4b-49d3-8c3e-214d85f0f349\") " pod="openshift-marketplace/community-operators-nsbxr" Jan 08 23:48:21 crc kubenswrapper[4945]: I0108 23:48:21.562289 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c054138b-3f4b-49d3-8c3e-214d85f0f349-utilities\") pod \"community-operators-nsbxr\" (UID: \"c054138b-3f4b-49d3-8c3e-214d85f0f349\") " 
pod="openshift-marketplace/community-operators-nsbxr" Jan 08 23:48:21 crc kubenswrapper[4945]: I0108 23:48:21.588301 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5frlj\" (UniqueName: \"kubernetes.io/projected/c054138b-3f4b-49d3-8c3e-214d85f0f349-kube-api-access-5frlj\") pod \"community-operators-nsbxr\" (UID: \"c054138b-3f4b-49d3-8c3e-214d85f0f349\") " pod="openshift-marketplace/community-operators-nsbxr" Jan 08 23:48:21 crc kubenswrapper[4945]: I0108 23:48:21.662487 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nsbxr" Jan 08 23:48:21 crc kubenswrapper[4945]: I0108 23:48:21.944949 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nsbxr"] Jan 08 23:48:22 crc kubenswrapper[4945]: I0108 23:48:22.261819 4945 generic.go:334] "Generic (PLEG): container finished" podID="c054138b-3f4b-49d3-8c3e-214d85f0f349" containerID="490157cd7aa1d5234ef81e7636f004224cd9489679e441662475f670dc26c797" exitCode=0 Jan 08 23:48:22 crc kubenswrapper[4945]: I0108 23:48:22.261864 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsbxr" event={"ID":"c054138b-3f4b-49d3-8c3e-214d85f0f349","Type":"ContainerDied","Data":"490157cd7aa1d5234ef81e7636f004224cd9489679e441662475f670dc26c797"} Jan 08 23:48:22 crc kubenswrapper[4945]: I0108 23:48:22.261901 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsbxr" event={"ID":"c054138b-3f4b-49d3-8c3e-214d85f0f349","Type":"ContainerStarted","Data":"742b6065f8bc77844de95be4574a07d1e3bfb2c6eaeae8d10e11c0bd3146203a"} Jan 08 23:48:22 crc kubenswrapper[4945]: I0108 23:48:22.264483 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 08 23:48:23 crc kubenswrapper[4945]: I0108 23:48:23.268792 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsbxr" event={"ID":"c054138b-3f4b-49d3-8c3e-214d85f0f349","Type":"ContainerStarted","Data":"e9680725ecfc89b1fa40b2e6526e6c27e7246d9744e8f552508115c2ab69ce70"} Jan 08 23:48:24 crc kubenswrapper[4945]: I0108 23:48:24.277707 4945 generic.go:334] "Generic (PLEG): container finished" podID="c054138b-3f4b-49d3-8c3e-214d85f0f349" containerID="e9680725ecfc89b1fa40b2e6526e6c27e7246d9744e8f552508115c2ab69ce70" exitCode=0 Jan 08 23:48:24 crc kubenswrapper[4945]: I0108 23:48:24.277800 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsbxr" event={"ID":"c054138b-3f4b-49d3-8c3e-214d85f0f349","Type":"ContainerDied","Data":"e9680725ecfc89b1fa40b2e6526e6c27e7246d9744e8f552508115c2ab69ce70"} Jan 08 23:48:25 crc kubenswrapper[4945]: I0108 23:48:25.285166 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsbxr" event={"ID":"c054138b-3f4b-49d3-8c3e-214d85f0f349","Type":"ContainerStarted","Data":"0f760756ba07a70886d4dc5f451460b70ea7125e2ac1fdb04f9c2c5566b7623f"} Jan 08 23:48:25 crc kubenswrapper[4945]: I0108 23:48:25.301623 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nsbxr" podStartSLOduration=1.7737385190000001 podStartE2EDuration="4.301603898s" podCreationTimestamp="2026-01-08 23:48:21 +0000 UTC" firstStartedPulling="2026-01-08 23:48:22.264222286 +0000 UTC m=+1972.575381232" lastFinishedPulling="2026-01-08 
23:48:24.792087675 +0000 UTC m=+1975.103246611" observedRunningTime="2026-01-08 23:48:25.298528893 +0000 UTC m=+1975.609687859" watchObservedRunningTime="2026-01-08 23:48:25.301603898 +0000 UTC m=+1975.612762844" Jan 08 23:48:31 crc kubenswrapper[4945]: I0108 23:48:31.663162 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nsbxr" Jan 08 23:48:31 crc kubenswrapper[4945]: I0108 23:48:31.663857 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nsbxr" Jan 08 23:48:31 crc kubenswrapper[4945]: I0108 23:48:31.739925 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nsbxr" Jan 08 23:48:32 crc kubenswrapper[4945]: I0108 23:48:32.373352 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nsbxr" Jan 08 23:48:32 crc kubenswrapper[4945]: I0108 23:48:32.420976 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nsbxr"] Jan 08 23:48:34 crc kubenswrapper[4945]: I0108 23:48:34.342706 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nsbxr" podUID="c054138b-3f4b-49d3-8c3e-214d85f0f349" containerName="registry-server" containerID="cri-o://0f760756ba07a70886d4dc5f451460b70ea7125e2ac1fdb04f9c2c5566b7623f" gracePeriod=2 Jan 08 23:48:35 crc kubenswrapper[4945]: I0108 23:48:35.354974 4945 generic.go:334] "Generic (PLEG): container finished" podID="c054138b-3f4b-49d3-8c3e-214d85f0f349" containerID="0f760756ba07a70886d4dc5f451460b70ea7125e2ac1fdb04f9c2c5566b7623f" exitCode=0 Jan 08 23:48:35 crc kubenswrapper[4945]: I0108 23:48:35.355035 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsbxr" event={"ID":"c054138b-3f4b-49d3-8c3e-214d85f0f349","Type":"ContainerDied","Data":"0f760756ba07a70886d4dc5f451460b70ea7125e2ac1fdb04f9c2c5566b7623f"} Jan 08 23:48:35 crc kubenswrapper[4945]: I0108 23:48:35.823742 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nsbxr" Jan 08 23:48:35 crc kubenswrapper[4945]: I0108 23:48:35.964827 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c054138b-3f4b-49d3-8c3e-214d85f0f349-catalog-content\") pod \"c054138b-3f4b-49d3-8c3e-214d85f0f349\" (UID: \"c054138b-3f4b-49d3-8c3e-214d85f0f349\") " Jan 08 23:48:35 crc kubenswrapper[4945]: I0108 23:48:35.964933 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5frlj\" (UniqueName: \"kubernetes.io/projected/c054138b-3f4b-49d3-8c3e-214d85f0f349-kube-api-access-5frlj\") pod \"c054138b-3f4b-49d3-8c3e-214d85f0f349\" (UID: \"c054138b-3f4b-49d3-8c3e-214d85f0f349\") " Jan 08 23:48:35 crc kubenswrapper[4945]: I0108 23:48:35.964987 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c054138b-3f4b-49d3-8c3e-214d85f0f349-utilities\") pod \"c054138b-3f4b-49d3-8c3e-214d85f0f349\" (UID: \"c054138b-3f4b-49d3-8c3e-214d85f0f349\") " Jan 08 23:48:35 crc kubenswrapper[4945]: I0108 23:48:35.966264 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c054138b-3f4b-49d3-8c3e-214d85f0f349-utilities" (OuterVolumeSpecName: "utilities") pod "c054138b-3f4b-49d3-8c3e-214d85f0f349" (UID: "c054138b-3f4b-49d3-8c3e-214d85f0f349"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:48:35 crc kubenswrapper[4945]: I0108 23:48:35.975951 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c054138b-3f4b-49d3-8c3e-214d85f0f349-kube-api-access-5frlj" (OuterVolumeSpecName: "kube-api-access-5frlj") pod "c054138b-3f4b-49d3-8c3e-214d85f0f349" (UID: "c054138b-3f4b-49d3-8c3e-214d85f0f349"). InnerVolumeSpecName "kube-api-access-5frlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:48:36 crc kubenswrapper[4945]: I0108 23:48:36.027355 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c054138b-3f4b-49d3-8c3e-214d85f0f349-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c054138b-3f4b-49d3-8c3e-214d85f0f349" (UID: "c054138b-3f4b-49d3-8c3e-214d85f0f349"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:48:36 crc kubenswrapper[4945]: I0108 23:48:36.066232 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c054138b-3f4b-49d3-8c3e-214d85f0f349-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:48:36 crc kubenswrapper[4945]: I0108 23:48:36.066271 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5frlj\" (UniqueName: \"kubernetes.io/projected/c054138b-3f4b-49d3-8c3e-214d85f0f349-kube-api-access-5frlj\") on node \"crc\" DevicePath \"\"" Jan 08 23:48:36 crc kubenswrapper[4945]: I0108 23:48:36.066284 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c054138b-3f4b-49d3-8c3e-214d85f0f349-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:48:36 crc kubenswrapper[4945]: I0108 23:48:36.365048 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nsbxr" event={"ID":"c054138b-3f4b-49d3-8c3e-214d85f0f349","Type":"ContainerDied","Data":"742b6065f8bc77844de95be4574a07d1e3bfb2c6eaeae8d10e11c0bd3146203a"} Jan 08 23:48:36 crc kubenswrapper[4945]: I0108 23:48:36.365634 4945 scope.go:117] "RemoveContainer" containerID="0f760756ba07a70886d4dc5f451460b70ea7125e2ac1fdb04f9c2c5566b7623f" Jan 08 23:48:36 crc kubenswrapper[4945]: I0108 23:48:36.365836 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nsbxr" Jan 08 23:48:36 crc kubenswrapper[4945]: I0108 23:48:36.396552 4945 scope.go:117] "RemoveContainer" containerID="e9680725ecfc89b1fa40b2e6526e6c27e7246d9744e8f552508115c2ab69ce70" Jan 08 23:48:36 crc kubenswrapper[4945]: I0108 23:48:36.407707 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nsbxr"] Jan 08 23:48:36 crc kubenswrapper[4945]: I0108 23:48:36.416699 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nsbxr"] Jan 08 23:48:36 crc kubenswrapper[4945]: I0108 23:48:36.430200 4945 scope.go:117] "RemoveContainer" containerID="490157cd7aa1d5234ef81e7636f004224cd9489679e441662475f670dc26c797" Jan 08 23:48:38 crc kubenswrapper[4945]: I0108 23:48:38.010894 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c054138b-3f4b-49d3-8c3e-214d85f0f349" path="/var/lib/kubelet/pods/c054138b-3f4b-49d3-8c3e-214d85f0f349/volumes" Jan 08 23:48:43 crc kubenswrapper[4945]: I0108 23:48:43.578302 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:48:43 crc kubenswrapper[4945]: I0108 23:48:43.578669 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:49:13 crc kubenswrapper[4945]: I0108 23:49:13.578355 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:49:13 crc kubenswrapper[4945]: I0108 23:49:13.579147 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:49:43 crc kubenswrapper[4945]: I0108 23:49:43.578782 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 08 23:49:43 crc kubenswrapper[4945]: I0108 23:49:43.580208 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 08 23:49:43 crc kubenswrapper[4945]: I0108 23:49:43.580289 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 08 23:49:43 crc kubenswrapper[4945]: I0108 23:49:43.580912 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2aa3b6ab2306a98ace51571c2901bf3e4aeb0a371e3372d9716e50ba1bf0a86a"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 08 23:49:43 crc kubenswrapper[4945]: I0108 23:49:43.580976 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://2aa3b6ab2306a98ace51571c2901bf3e4aeb0a371e3372d9716e50ba1bf0a86a" gracePeriod=600 Jan 08 23:49:43 crc kubenswrapper[4945]: I0108 23:49:43.834591 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="2aa3b6ab2306a98ace51571c2901bf3e4aeb0a371e3372d9716e50ba1bf0a86a" exitCode=0 Jan 08 23:49:43 crc kubenswrapper[4945]: I0108 23:49:43.834686 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"2aa3b6ab2306a98ace51571c2901bf3e4aeb0a371e3372d9716e50ba1bf0a86a"} Jan 08 23:49:43 crc kubenswrapper[4945]: I0108 23:49:43.834942 4945 scope.go:117] "RemoveContainer" containerID="e936f360571c335d07e21cea249ffade1a6c21ee65b999c4233ec5371762e2b8" Jan 08 23:49:44 crc kubenswrapper[4945]: I0108 23:49:44.846778 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0"} Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.191550 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-htfmq"] Jan 08 23:50:06 crc 
kubenswrapper[4945]: E0108 23:50:06.192539 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c054138b-3f4b-49d3-8c3e-214d85f0f349" containerName="registry-server" Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.192557 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c054138b-3f4b-49d3-8c3e-214d85f0f349" containerName="registry-server" Jan 08 23:50:06 crc kubenswrapper[4945]: E0108 23:50:06.192569 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c054138b-3f4b-49d3-8c3e-214d85f0f349" containerName="extract-content" Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.192577 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c054138b-3f4b-49d3-8c3e-214d85f0f349" containerName="extract-content" Jan 08 23:50:06 crc kubenswrapper[4945]: E0108 23:50:06.192592 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c054138b-3f4b-49d3-8c3e-214d85f0f349" containerName="extract-utilities" Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.192600 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c054138b-3f4b-49d3-8c3e-214d85f0f349" containerName="extract-utilities" Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.192826 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="c054138b-3f4b-49d3-8c3e-214d85f0f349" containerName="registry-server" Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.194268 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-htfmq" Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.205721 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-htfmq"] Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.325697 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns5fh\" (UniqueName: \"kubernetes.io/projected/5dec49b0-c422-434f-a028-1e7ca546f090-kube-api-access-ns5fh\") pod \"certified-operators-htfmq\" (UID: \"5dec49b0-c422-434f-a028-1e7ca546f090\") " pod="openshift-marketplace/certified-operators-htfmq" Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.325767 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dec49b0-c422-434f-a028-1e7ca546f090-catalog-content\") pod \"certified-operators-htfmq\" (UID: \"5dec49b0-c422-434f-a028-1e7ca546f090\") " pod="openshift-marketplace/certified-operators-htfmq" Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.325874 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dec49b0-c422-434f-a028-1e7ca546f090-utilities\") pod \"certified-operators-htfmq\" (UID: \"5dec49b0-c422-434f-a028-1e7ca546f090\") " pod="openshift-marketplace/certified-operators-htfmq" Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.427765 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dec49b0-c422-434f-a028-1e7ca546f090-utilities\") pod \"certified-operators-htfmq\" (UID: \"5dec49b0-c422-434f-a028-1e7ca546f090\") " pod="openshift-marketplace/certified-operators-htfmq" Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.427929 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns5fh\" 
(UniqueName: \"kubernetes.io/projected/5dec49b0-c422-434f-a028-1e7ca546f090-kube-api-access-ns5fh\") pod \"certified-operators-htfmq\" (UID: \"5dec49b0-c422-434f-a028-1e7ca546f090\") " pod="openshift-marketplace/certified-operators-htfmq" Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.427963 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dec49b0-c422-434f-a028-1e7ca546f090-catalog-content\") pod \"certified-operators-htfmq\" (UID: \"5dec49b0-c422-434f-a028-1e7ca546f090\") " pod="openshift-marketplace/certified-operators-htfmq" Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.428540 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dec49b0-c422-434f-a028-1e7ca546f090-utilities\") pod \"certified-operators-htfmq\" (UID: \"5dec49b0-c422-434f-a028-1e7ca546f090\") " pod="openshift-marketplace/certified-operators-htfmq" Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.428604 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dec49b0-c422-434f-a028-1e7ca546f090-catalog-content\") pod \"certified-operators-htfmq\" (UID: \"5dec49b0-c422-434f-a028-1e7ca546f090\") " pod="openshift-marketplace/certified-operators-htfmq" Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.459588 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns5fh\" (UniqueName: \"kubernetes.io/projected/5dec49b0-c422-434f-a028-1e7ca546f090-kube-api-access-ns5fh\") pod \"certified-operators-htfmq\" (UID: \"5dec49b0-c422-434f-a028-1e7ca546f090\") " pod="openshift-marketplace/certified-operators-htfmq" Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.524914 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-htfmq" Jan 08 23:50:06 crc kubenswrapper[4945]: I0108 23:50:06.862756 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-htfmq"] Jan 08 23:50:07 crc kubenswrapper[4945]: I0108 23:50:07.024364 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-htfmq" event={"ID":"5dec49b0-c422-434f-a028-1e7ca546f090","Type":"ContainerStarted","Data":"be94c704fcc12fd895c3eca967b3e555c60b9c36ca166db5519f4c89a915d33e"} Jan 08 23:50:07 crc kubenswrapper[4945]: I0108 23:50:07.024724 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-htfmq" event={"ID":"5dec49b0-c422-434f-a028-1e7ca546f090","Type":"ContainerStarted","Data":"6ce636631522f8353ea5cdd1430d532407279033ade4ba9166df5804e754cd40"} Jan 08 23:50:08 crc kubenswrapper[4945]: I0108 23:50:08.032008 4945 generic.go:334] "Generic (PLEG): container finished" podID="5dec49b0-c422-434f-a028-1e7ca546f090" containerID="be94c704fcc12fd895c3eca967b3e555c60b9c36ca166db5519f4c89a915d33e" exitCode=0 Jan 08 23:50:08 crc kubenswrapper[4945]: I0108 23:50:08.032044 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-htfmq" event={"ID":"5dec49b0-c422-434f-a028-1e7ca546f090","Type":"ContainerDied","Data":"be94c704fcc12fd895c3eca967b3e555c60b9c36ca166db5519f4c89a915d33e"} Jan 08 23:50:09 crc kubenswrapper[4945]: I0108 23:50:09.041606 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-htfmq" event={"ID":"5dec49b0-c422-434f-a028-1e7ca546f090","Type":"ContainerStarted","Data":"2a579b0eae3381ac35ba867db0e318d30cc197037315776d46c138bd5c124ab5"} Jan 08 23:50:10 crc kubenswrapper[4945]: I0108 23:50:10.049960 4945 generic.go:334] "Generic (PLEG): container finished" podID="5dec49b0-c422-434f-a028-1e7ca546f090" containerID="2a579b0eae3381ac35ba867db0e318d30cc197037315776d46c138bd5c124ab5" exitCode=0 Jan 08 23:50:10 crc kubenswrapper[4945]: I0108 23:50:10.050066 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-htfmq" event={"ID":"5dec49b0-c422-434f-a028-1e7ca546f090","Type":"ContainerDied","Data":"2a579b0eae3381ac35ba867db0e318d30cc197037315776d46c138bd5c124ab5"} Jan 08 23:50:11 crc kubenswrapper[4945]: I0108 23:50:11.061472 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-htfmq" event={"ID":"5dec49b0-c422-434f-a028-1e7ca546f090","Type":"ContainerStarted","Data":"6fbbf0bcf946e3aff38765f15e990a801843e7a51951b7aeb5cb6c809fde86c9"} Jan 08 23:50:11 crc kubenswrapper[4945]: I0108 23:50:11.086368 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-htfmq" podStartSLOduration=2.6417626050000003 podStartE2EDuration="5.08634871s" podCreationTimestamp="2026-01-08 23:50:06 +0000 UTC" firstStartedPulling="2026-01-08 23:50:08.034076753 +0000 UTC m=+2078.345235709" lastFinishedPulling="2026-01-08 23:50:10.478662868 +0000 UTC m=+2080.789821814" observedRunningTime="2026-01-08 23:50:11.077745139 +0000 UTC m=+2081.388904095" watchObservedRunningTime="2026-01-08 23:50:11.08634871 +0000 UTC m=+2081.397507656" Jan 08 23:50:16 crc kubenswrapper[4945]: I0108 23:50:16.525931 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-htfmq" Jan 08 23:50:16 crc 
kubenswrapper[4945]: I0108 23:50:16.526314 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-htfmq" Jan 08 23:50:16 crc kubenswrapper[4945]: I0108 23:50:16.579089 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-htfmq" Jan 08 23:50:17 crc kubenswrapper[4945]: I0108 23:50:17.150024 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-htfmq" Jan 08 23:50:17 crc kubenswrapper[4945]: I0108 23:50:17.191430 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-htfmq"] Jan 08 23:50:19 crc kubenswrapper[4945]: I0108 23:50:19.121721 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-htfmq" podUID="5dec49b0-c422-434f-a028-1e7ca546f090" containerName="registry-server" containerID="cri-o://6fbbf0bcf946e3aff38765f15e990a801843e7a51951b7aeb5cb6c809fde86c9" gracePeriod=2 Jan 08 23:50:19 crc kubenswrapper[4945]: I0108 23:50:19.534491 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-htfmq" Jan 08 23:50:19 crc kubenswrapper[4945]: I0108 23:50:19.629823 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dec49b0-c422-434f-a028-1e7ca546f090-catalog-content\") pod \"5dec49b0-c422-434f-a028-1e7ca546f090\" (UID: \"5dec49b0-c422-434f-a028-1e7ca546f090\") " Jan 08 23:50:19 crc kubenswrapper[4945]: I0108 23:50:19.629895 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dec49b0-c422-434f-a028-1e7ca546f090-utilities\") pod \"5dec49b0-c422-434f-a028-1e7ca546f090\" (UID: \"5dec49b0-c422-434f-a028-1e7ca546f090\") " Jan 08 23:50:19 crc kubenswrapper[4945]: I0108 23:50:19.629977 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns5fh\" (UniqueName: \"kubernetes.io/projected/5dec49b0-c422-434f-a028-1e7ca546f090-kube-api-access-ns5fh\") pod \"5dec49b0-c422-434f-a028-1e7ca546f090\" (UID: \"5dec49b0-c422-434f-a028-1e7ca546f090\") " Jan 08 23:50:19 crc kubenswrapper[4945]: I0108 23:50:19.630730 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dec49b0-c422-434f-a028-1e7ca546f090-utilities" (OuterVolumeSpecName: "utilities") pod "5dec49b0-c422-434f-a028-1e7ca546f090" (UID: "5dec49b0-c422-434f-a028-1e7ca546f090"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:50:19 crc kubenswrapper[4945]: I0108 23:50:19.656318 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dec49b0-c422-434f-a028-1e7ca546f090-kube-api-access-ns5fh" (OuterVolumeSpecName: "kube-api-access-ns5fh") pod "5dec49b0-c422-434f-a028-1e7ca546f090" (UID: "5dec49b0-c422-434f-a028-1e7ca546f090"). InnerVolumeSpecName "kube-api-access-ns5fh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:50:19 crc kubenswrapper[4945]: I0108 23:50:19.689984 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dec49b0-c422-434f-a028-1e7ca546f090-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dec49b0-c422-434f-a028-1e7ca546f090" (UID: "5dec49b0-c422-434f-a028-1e7ca546f090"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:50:19 crc kubenswrapper[4945]: I0108 23:50:19.731823 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dec49b0-c422-434f-a028-1e7ca546f090-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:50:19 crc kubenswrapper[4945]: I0108 23:50:19.731885 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dec49b0-c422-434f-a028-1e7ca546f090-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:50:19 crc kubenswrapper[4945]: I0108 23:50:19.731900 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns5fh\" (UniqueName: \"kubernetes.io/projected/5dec49b0-c422-434f-a028-1e7ca546f090-kube-api-access-ns5fh\") on node \"crc\" DevicePath \"\"" Jan 08 23:50:20 crc kubenswrapper[4945]: I0108 23:50:20.131947 4945 generic.go:334] "Generic (PLEG): container finished" podID="5dec49b0-c422-434f-a028-1e7ca546f090" containerID="6fbbf0bcf946e3aff38765f15e990a801843e7a51951b7aeb5cb6c809fde86c9" exitCode=0 Jan 08 23:50:20 crc kubenswrapper[4945]: I0108 23:50:20.132021 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-htfmq" event={"ID":"5dec49b0-c422-434f-a028-1e7ca546f090","Type":"ContainerDied","Data":"6fbbf0bcf946e3aff38765f15e990a801843e7a51951b7aeb5cb6c809fde86c9"} Jan 08 23:50:20 crc kubenswrapper[4945]: I0108 23:50:20.132043 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-htfmq" Jan 08 23:50:20 crc kubenswrapper[4945]: I0108 23:50:20.132075 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-htfmq" event={"ID":"5dec49b0-c422-434f-a028-1e7ca546f090","Type":"ContainerDied","Data":"6ce636631522f8353ea5cdd1430d532407279033ade4ba9166df5804e754cd40"} Jan 08 23:50:20 crc kubenswrapper[4945]: I0108 23:50:20.132103 4945 scope.go:117] "RemoveContainer" containerID="6fbbf0bcf946e3aff38765f15e990a801843e7a51951b7aeb5cb6c809fde86c9" Jan 08 23:50:20 crc kubenswrapper[4945]: I0108 23:50:20.155614 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-htfmq"] Jan 08 23:50:20 crc kubenswrapper[4945]: I0108 23:50:20.157707 4945 scope.go:117] "RemoveContainer" containerID="2a579b0eae3381ac35ba867db0e318d30cc197037315776d46c138bd5c124ab5" Jan 08 23:50:20 crc kubenswrapper[4945]: I0108 23:50:20.160656 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-htfmq"] Jan 08 23:50:20 crc kubenswrapper[4945]: I0108 23:50:20.173853 4945 scope.go:117] "RemoveContainer" containerID="be94c704fcc12fd895c3eca967b3e555c60b9c36ca166db5519f4c89a915d33e" Jan 08 23:50:20 crc kubenswrapper[4945]: I0108 23:50:20.199760 4945 scope.go:117] "RemoveContainer" containerID="6fbbf0bcf946e3aff38765f15e990a801843e7a51951b7aeb5cb6c809fde86c9" Jan 08 23:50:20 crc kubenswrapper[4945]: E0108 23:50:20.200422 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fbbf0bcf946e3aff38765f15e990a801843e7a51951b7aeb5cb6c809fde86c9\": container with ID starting with 6fbbf0bcf946e3aff38765f15e990a801843e7a51951b7aeb5cb6c809fde86c9 not found: ID does not exist" containerID="6fbbf0bcf946e3aff38765f15e990a801843e7a51951b7aeb5cb6c809fde86c9" Jan 08 23:50:20 crc kubenswrapper[4945]: I0108 23:50:20.200516 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbbf0bcf946e3aff38765f15e990a801843e7a51951b7aeb5cb6c809fde86c9"} err="failed to get container status \"6fbbf0bcf946e3aff38765f15e990a801843e7a51951b7aeb5cb6c809fde86c9\": rpc error: code = NotFound desc = could not find container \"6fbbf0bcf946e3aff38765f15e990a801843e7a51951b7aeb5cb6c809fde86c9\": container with ID starting with 6fbbf0bcf946e3aff38765f15e990a801843e7a51951b7aeb5cb6c809fde86c9 not found: ID does not exist" Jan 08 23:50:20 crc kubenswrapper[4945]: I0108 23:50:20.200545 4945 scope.go:117] "RemoveContainer" containerID="2a579b0eae3381ac35ba867db0e318d30cc197037315776d46c138bd5c124ab5" Jan 08 23:50:20 crc kubenswrapper[4945]: E0108 23:50:20.201388 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a579b0eae3381ac35ba867db0e318d30cc197037315776d46c138bd5c124ab5\": container with ID starting with 2a579b0eae3381ac35ba867db0e318d30cc197037315776d46c138bd5c124ab5 not found: ID does not exist" containerID="2a579b0eae3381ac35ba867db0e318d30cc197037315776d46c138bd5c124ab5" Jan 08 23:50:20 crc kubenswrapper[4945]: I0108 23:50:20.201908 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a579b0eae3381ac35ba867db0e318d30cc197037315776d46c138bd5c124ab5"} err="failed to get container status \"2a579b0eae3381ac35ba867db0e318d30cc197037315776d46c138bd5c124ab5\": rpc error: code = NotFound desc = could not find 
container \"2a579b0eae3381ac35ba867db0e318d30cc197037315776d46c138bd5c124ab5\": container with ID starting with 2a579b0eae3381ac35ba867db0e318d30cc197037315776d46c138bd5c124ab5 not found: ID does not exist" Jan 08 23:50:20 crc kubenswrapper[4945]: I0108 23:50:20.201939 4945 scope.go:117] "RemoveContainer" containerID="be94c704fcc12fd895c3eca967b3e555c60b9c36ca166db5519f4c89a915d33e" Jan 08 23:50:20 crc kubenswrapper[4945]: E0108 23:50:20.202499 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be94c704fcc12fd895c3eca967b3e555c60b9c36ca166db5519f4c89a915d33e\": container with ID starting with be94c704fcc12fd895c3eca967b3e555c60b9c36ca166db5519f4c89a915d33e not found: ID does not exist" containerID="be94c704fcc12fd895c3eca967b3e555c60b9c36ca166db5519f4c89a915d33e" Jan 08 23:50:20 crc kubenswrapper[4945]: I0108 23:50:20.202552 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be94c704fcc12fd895c3eca967b3e555c60b9c36ca166db5519f4c89a915d33e"} err="failed to get container status \"be94c704fcc12fd895c3eca967b3e555c60b9c36ca166db5519f4c89a915d33e\": rpc error: code = NotFound desc = could not find container \"be94c704fcc12fd895c3eca967b3e555c60b9c36ca166db5519f4c89a915d33e\": container with ID starting with be94c704fcc12fd895c3eca967b3e555c60b9c36ca166db5519f4c89a915d33e not found: ID does not exist" Jan 08 23:50:22 crc kubenswrapper[4945]: I0108 23:50:22.009828 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dec49b0-c422-434f-a028-1e7ca546f090" path="/var/lib/kubelet/pods/5dec49b0-c422-434f-a028-1e7ca546f090/volumes" Jan 08 23:50:48 crc kubenswrapper[4945]: I0108 23:50:48.664522 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5ks8n"] Jan 08 23:50:48 crc kubenswrapper[4945]: E0108 23:50:48.665506 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dec49b0-c422-434f-a028-1e7ca546f090" containerName="extract-content" Jan 08 23:50:48 crc kubenswrapper[4945]: I0108 23:50:48.665523 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dec49b0-c422-434f-a028-1e7ca546f090" containerName="extract-content" Jan 08 23:50:48 crc kubenswrapper[4945]: E0108 23:50:48.665539 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dec49b0-c422-434f-a028-1e7ca546f090" containerName="extract-utilities" Jan 08 23:50:48 crc kubenswrapper[4945]: I0108 23:50:48.665545 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dec49b0-c422-434f-a028-1e7ca546f090" containerName="extract-utilities" Jan 08 23:50:48 crc kubenswrapper[4945]: E0108 23:50:48.665551 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dec49b0-c422-434f-a028-1e7ca546f090" containerName="registry-server" Jan 08 23:50:48 crc kubenswrapper[4945]: I0108 23:50:48.665557 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dec49b0-c422-434f-a028-1e7ca546f090" containerName="registry-server" Jan 08 23:50:48 crc kubenswrapper[4945]: I0108 23:50:48.665719 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dec49b0-c422-434f-a028-1e7ca546f090" containerName="registry-server" Jan 08 23:50:48 crc kubenswrapper[4945]: I0108 23:50:48.666802 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5ks8n" Jan 08 23:50:48 crc kubenswrapper[4945]: I0108 23:50:48.676352 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ks8n"] Jan 08 23:50:48 crc kubenswrapper[4945]: I0108 23:50:48.862596 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ec2f41-5c86-4984-b5e1-931048bef5c1-catalog-content\") pod \"redhat-marketplace-5ks8n\" (UID: \"63ec2f41-5c86-4984-b5e1-931048bef5c1\") " pod="openshift-marketplace/redhat-marketplace-5ks8n" Jan 08 23:50:48 crc kubenswrapper[4945]: I0108 23:50:48.862719 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ec2f41-5c86-4984-b5e1-931048bef5c1-utilities\") pod \"redhat-marketplace-5ks8n\" (UID: \"63ec2f41-5c86-4984-b5e1-931048bef5c1\") " pod="openshift-marketplace/redhat-marketplace-5ks8n" Jan 08 23:50:48 crc kubenswrapper[4945]: I0108 23:50:48.862868 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rxck\" (UniqueName: \"kubernetes.io/projected/63ec2f41-5c86-4984-b5e1-931048bef5c1-kube-api-access-4rxck\") pod \"redhat-marketplace-5ks8n\" (UID: \"63ec2f41-5c86-4984-b5e1-931048bef5c1\") " pod="openshift-marketplace/redhat-marketplace-5ks8n" Jan 08 23:50:48 crc kubenswrapper[4945]: I0108 23:50:48.963958 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ec2f41-5c86-4984-b5e1-931048bef5c1-catalog-content\") pod \"redhat-marketplace-5ks8n\" (UID: \"63ec2f41-5c86-4984-b5e1-931048bef5c1\") " pod="openshift-marketplace/redhat-marketplace-5ks8n" Jan 08 23:50:48 crc kubenswrapper[4945]: I0108 23:50:48.964066 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ec2f41-5c86-4984-b5e1-931048bef5c1-utilities\") pod \"redhat-marketplace-5ks8n\" (UID: \"63ec2f41-5c86-4984-b5e1-931048bef5c1\") " pod="openshift-marketplace/redhat-marketplace-5ks8n" Jan 08 23:50:48 crc kubenswrapper[4945]: I0108 23:50:48.964101 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rxck\" (UniqueName: \"kubernetes.io/projected/63ec2f41-5c86-4984-b5e1-931048bef5c1-kube-api-access-4rxck\") pod \"redhat-marketplace-5ks8n\" (UID: \"63ec2f41-5c86-4984-b5e1-931048bef5c1\") " pod="openshift-marketplace/redhat-marketplace-5ks8n" Jan 08 23:50:48 crc kubenswrapper[4945]: I0108 23:50:48.964401 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ec2f41-5c86-4984-b5e1-931048bef5c1-catalog-content\") pod \"redhat-marketplace-5ks8n\" (UID: \"63ec2f41-5c86-4984-b5e1-931048bef5c1\") " pod="openshift-marketplace/redhat-marketplace-5ks8n" Jan 08 23:50:48 crc kubenswrapper[4945]: I0108 23:50:48.964618 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ec2f41-5c86-4984-b5e1-931048bef5c1-utilities\") pod \"redhat-marketplace-5ks8n\" (UID: \"63ec2f41-5c86-4984-b5e1-931048bef5c1\") " pod="openshift-marketplace/redhat-marketplace-5ks8n" Jan 08 23:50:48 crc kubenswrapper[4945]: I0108 23:50:48.987447 4945 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4rxck\" (UniqueName: \"kubernetes.io/projected/63ec2f41-5c86-4984-b5e1-931048bef5c1-kube-api-access-4rxck\") pod \"redhat-marketplace-5ks8n\" (UID: \"63ec2f41-5c86-4984-b5e1-931048bef5c1\") " pod="openshift-marketplace/redhat-marketplace-5ks8n" Jan 08 23:50:48 crc kubenswrapper[4945]: I0108 23:50:48.993056 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5ks8n" Jan 08 23:50:49 crc kubenswrapper[4945]: I0108 23:50:49.445964 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ks8n"] Jan 08 23:50:50 crc kubenswrapper[4945]: I0108 23:50:50.352140 4945 generic.go:334] "Generic (PLEG): container finished" podID="63ec2f41-5c86-4984-b5e1-931048bef5c1" containerID="6f19da382af8842b809bff61d4f77bea5748b3b03981a0459db1d2d860a9b51b" exitCode=0 Jan 08 23:50:50 crc kubenswrapper[4945]: I0108 23:50:50.352200 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ks8n" event={"ID":"63ec2f41-5c86-4984-b5e1-931048bef5c1","Type":"ContainerDied","Data":"6f19da382af8842b809bff61d4f77bea5748b3b03981a0459db1d2d860a9b51b"} Jan 08 23:50:50 crc kubenswrapper[4945]: I0108 23:50:50.352524 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ks8n" event={"ID":"63ec2f41-5c86-4984-b5e1-931048bef5c1","Type":"ContainerStarted","Data":"0aa424a8b92dd79b15616cee5b6a1593dfb96910bbcc9fb7fedfad65ddba2e8f"} Jan 08 23:50:50 crc kubenswrapper[4945]: I0108 23:50:50.875206 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4tdng"] Jan 08 23:50:50 crc kubenswrapper[4945]: I0108 23:50:50.877124 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4tdng" Jan 08 23:50:50 crc kubenswrapper[4945]: I0108 23:50:50.883717 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4tdng"] Jan 08 23:50:50 crc kubenswrapper[4945]: I0108 23:50:50.995923 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64cbbf1a-592e-4018-a4e3-78023ab25ef5-utilities\") pod \"redhat-operators-4tdng\" (UID: \"64cbbf1a-592e-4018-a4e3-78023ab25ef5\") " pod="openshift-marketplace/redhat-operators-4tdng" Jan 08 23:50:50 crc kubenswrapper[4945]: I0108 23:50:50.996634 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64cbbf1a-592e-4018-a4e3-78023ab25ef5-catalog-content\") pod \"redhat-operators-4tdng\" (UID: \"64cbbf1a-592e-4018-a4e3-78023ab25ef5\") " pod="openshift-marketplace/redhat-operators-4tdng" Jan 08 23:50:50 crc kubenswrapper[4945]: I0108 23:50:50.996736 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjnx6\" (UniqueName: \"kubernetes.io/projected/64cbbf1a-592e-4018-a4e3-78023ab25ef5-kube-api-access-pjnx6\") pod \"redhat-operators-4tdng\" (UID: \"64cbbf1a-592e-4018-a4e3-78023ab25ef5\") " pod="openshift-marketplace/redhat-operators-4tdng" Jan 08 23:50:51 crc kubenswrapper[4945]: I0108 23:50:51.099735 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64cbbf1a-592e-4018-a4e3-78023ab25ef5-catalog-content\") pod \"redhat-operators-4tdng\" (UID: \"64cbbf1a-592e-4018-a4e3-78023ab25ef5\") " pod="openshift-marketplace/redhat-operators-4tdng" Jan 08 23:50:51 crc kubenswrapper[4945]: I0108 23:50:51.099824 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjnx6\" (UniqueName: \"kubernetes.io/projected/64cbbf1a-592e-4018-a4e3-78023ab25ef5-kube-api-access-pjnx6\") pod \"redhat-operators-4tdng\" (UID: \"64cbbf1a-592e-4018-a4e3-78023ab25ef5\") " pod="openshift-marketplace/redhat-operators-4tdng" Jan 08 23:50:51 crc kubenswrapper[4945]: I0108 23:50:51.100285 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64cbbf1a-592e-4018-a4e3-78023ab25ef5-catalog-content\") pod \"redhat-operators-4tdng\" (UID: \"64cbbf1a-592e-4018-a4e3-78023ab25ef5\") " pod="openshift-marketplace/redhat-operators-4tdng" Jan 08 23:50:51 crc kubenswrapper[4945]: I0108 23:50:51.100335 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64cbbf1a-592e-4018-a4e3-78023ab25ef5-utilities\") pod \"redhat-operators-4tdng\" (UID: \"64cbbf1a-592e-4018-a4e3-78023ab25ef5\") " pod="openshift-marketplace/redhat-operators-4tdng" Jan 08 23:50:51 crc kubenswrapper[4945]: I0108 23:50:51.100821 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64cbbf1a-592e-4018-a4e3-78023ab25ef5-utilities\") pod \"redhat-operators-4tdng\" (UID: \"64cbbf1a-592e-4018-a4e3-78023ab25ef5\") " pod="openshift-marketplace/redhat-operators-4tdng" Jan 08 23:50:51 crc kubenswrapper[4945]: I0108 23:50:51.121105 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pjnx6\" (UniqueName: \"kubernetes.io/projected/64cbbf1a-592e-4018-a4e3-78023ab25ef5-kube-api-access-pjnx6\") pod \"redhat-operators-4tdng\" (UID: \"64cbbf1a-592e-4018-a4e3-78023ab25ef5\") " pod="openshift-marketplace/redhat-operators-4tdng" Jan 08 23:50:51 crc kubenswrapper[4945]: I0108 23:50:51.203038 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4tdng" Jan 08 23:50:51 crc kubenswrapper[4945]: I0108 23:50:51.364096 4945 generic.go:334] "Generic (PLEG): container finished" podID="63ec2f41-5c86-4984-b5e1-931048bef5c1" containerID="93d4baec0831d4c145c690eb7c2d6e655a4967c669762a46cc07aef1ba43593b" exitCode=0 Jan 08 23:50:51 crc kubenswrapper[4945]: I0108 23:50:51.364152 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ks8n" event={"ID":"63ec2f41-5c86-4984-b5e1-931048bef5c1","Type":"ContainerDied","Data":"93d4baec0831d4c145c690eb7c2d6e655a4967c669762a46cc07aef1ba43593b"} Jan 08 23:50:51 crc kubenswrapper[4945]: I0108 23:50:51.651283 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4tdng"] Jan 08 23:50:51 crc kubenswrapper[4945]: W0108 23:50:51.656567 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64cbbf1a_592e_4018_a4e3_78023ab25ef5.slice/crio-589a907b8f634a1185bd5f8a1a3b19e79dba68ced2b5f50d3473f78c3486abd7 WatchSource:0}: Error finding container 589a907b8f634a1185bd5f8a1a3b19e79dba68ced2b5f50d3473f78c3486abd7: Status 404 returned error can't find the container with id 589a907b8f634a1185bd5f8a1a3b19e79dba68ced2b5f50d3473f78c3486abd7 Jan 08 23:50:52 crc kubenswrapper[4945]: I0108 23:50:52.372312 4945 generic.go:334] "Generic (PLEG): container finished" podID="64cbbf1a-592e-4018-a4e3-78023ab25ef5" containerID="8ede9f727ffd6b917f60ff04b557df6cae5c1fd4a6eeeada7e335aae1be31ed9" exitCode=0 Jan 08 23:50:52 crc kubenswrapper[4945]: I0108 23:50:52.372363 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tdng" event={"ID":"64cbbf1a-592e-4018-a4e3-78023ab25ef5","Type":"ContainerDied","Data":"8ede9f727ffd6b917f60ff04b557df6cae5c1fd4a6eeeada7e335aae1be31ed9"} Jan 08 23:50:52 crc kubenswrapper[4945]: I0108 23:50:52.372698 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tdng" event={"ID":"64cbbf1a-592e-4018-a4e3-78023ab25ef5","Type":"ContainerStarted","Data":"589a907b8f634a1185bd5f8a1a3b19e79dba68ced2b5f50d3473f78c3486abd7"} Jan 08 23:50:52 crc kubenswrapper[4945]: I0108 23:50:52.375861 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ks8n" event={"ID":"63ec2f41-5c86-4984-b5e1-931048bef5c1","Type":"ContainerStarted","Data":"50a713f03ede5277fa07b7e8e39183c17eaeee421d7afa1e986867ff77f8c5a3"} Jan 08 23:50:52 crc kubenswrapper[4945]: I0108 23:50:52.412753 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5ks8n" podStartSLOduration=2.9699987820000002 podStartE2EDuration="4.412733854s" podCreationTimestamp="2026-01-08 23:50:48 +0000 UTC" firstStartedPulling="2026-01-08 23:50:50.355286868 +0000 UTC m=+2120.666445814" lastFinishedPulling="2026-01-08 23:50:51.79802194 +0000 UTC m=+2122.109180886" observedRunningTime="2026-01-08 23:50:52.407777782 +0000 UTC m=+2122.718936728" watchObservedRunningTime="2026-01-08 
23:50:52.412733854 +0000 UTC m=+2122.723892800" Jan 08 23:50:53 crc kubenswrapper[4945]: I0108 23:50:53.382884 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tdng" event={"ID":"64cbbf1a-592e-4018-a4e3-78023ab25ef5","Type":"ContainerStarted","Data":"ee4bd89a7ecfaa0140fed7906083134d0b46b5f9d8bcb96135ee24bfc8367aa2"} Jan 08 23:50:54 crc kubenswrapper[4945]: I0108 23:50:54.403737 4945 generic.go:334] "Generic (PLEG): container finished" podID="64cbbf1a-592e-4018-a4e3-78023ab25ef5" containerID="ee4bd89a7ecfaa0140fed7906083134d0b46b5f9d8bcb96135ee24bfc8367aa2" exitCode=0 Jan 08 23:50:54 crc kubenswrapper[4945]: I0108 23:50:54.403968 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tdng" event={"ID":"64cbbf1a-592e-4018-a4e3-78023ab25ef5","Type":"ContainerDied","Data":"ee4bd89a7ecfaa0140fed7906083134d0b46b5f9d8bcb96135ee24bfc8367aa2"} Jan 08 23:50:55 crc kubenswrapper[4945]: I0108 23:50:55.415634 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tdng" event={"ID":"64cbbf1a-592e-4018-a4e3-78023ab25ef5","Type":"ContainerStarted","Data":"d0572da5cd794b0ba6123a5422f5d44be098f58b63468fe2c468261fb7e6d920"} Jan 08 23:50:55 crc kubenswrapper[4945]: I0108 23:50:55.448226 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4tdng" podStartSLOduration=2.876073543 podStartE2EDuration="5.448204449s" podCreationTimestamp="2026-01-08 23:50:50 +0000 UTC" firstStartedPulling="2026-01-08 23:50:52.374225199 +0000 UTC m=+2122.685384145" lastFinishedPulling="2026-01-08 23:50:54.946356105 +0000 UTC m=+2125.257515051" observedRunningTime="2026-01-08 23:50:55.444628381 +0000 UTC m=+2125.755787337" watchObservedRunningTime="2026-01-08 23:50:55.448204449 +0000 UTC m=+2125.759363405" Jan 08 23:50:58 crc kubenswrapper[4945]: I0108 23:50:58.993582 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5ks8n" Jan 08 23:50:58 crc kubenswrapper[4945]: I0108 23:50:58.994027 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5ks8n" Jan 08 23:50:59 crc kubenswrapper[4945]: I0108 23:50:59.041341 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5ks8n" Jan 08 23:50:59 crc kubenswrapper[4945]: I0108 23:50:59.498593 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5ks8n" Jan 08 23:50:59 crc kubenswrapper[4945]: I0108 23:50:59.555417 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ks8n"] Jan 08 23:51:01 crc kubenswrapper[4945]: I0108 23:51:01.203212 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4tdng" Jan 08 23:51:01 crc kubenswrapper[4945]: I0108 23:51:01.203752 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4tdng" Jan 08 23:51:01 crc kubenswrapper[4945]: I0108 23:51:01.264690 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4tdng" Jan 08 23:51:01 crc kubenswrapper[4945]: I0108 23:51:01.454619 4945 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-5ks8n" podUID="63ec2f41-5c86-4984-b5e1-931048bef5c1" containerName="registry-server" containerID="cri-o://50a713f03ede5277fa07b7e8e39183c17eaeee421d7afa1e986867ff77f8c5a3" gracePeriod=2 Jan 08 23:51:01 crc kubenswrapper[4945]: I0108 23:51:01.498887 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4tdng" Jan 08 23:51:01 crc kubenswrapper[4945]: I0108 23:51:01.859132 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4tdng"] Jan 08 23:51:02 crc kubenswrapper[4945]: I0108 23:51:02.963109 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5ks8n" Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.073261 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ec2f41-5c86-4984-b5e1-931048bef5c1-utilities\") pod \"63ec2f41-5c86-4984-b5e1-931048bef5c1\" (UID: \"63ec2f41-5c86-4984-b5e1-931048bef5c1\") " Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.073477 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ec2f41-5c86-4984-b5e1-931048bef5c1-catalog-content\") pod \"63ec2f41-5c86-4984-b5e1-931048bef5c1\" (UID: \"63ec2f41-5c86-4984-b5e1-931048bef5c1\") " Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.073539 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rxck\" (UniqueName: \"kubernetes.io/projected/63ec2f41-5c86-4984-b5e1-931048bef5c1-kube-api-access-4rxck\") pod \"63ec2f41-5c86-4984-b5e1-931048bef5c1\" (UID: \"63ec2f41-5c86-4984-b5e1-931048bef5c1\") " Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.074134 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63ec2f41-5c86-4984-b5e1-931048bef5c1-utilities" (OuterVolumeSpecName: "utilities") pod "63ec2f41-5c86-4984-b5e1-931048bef5c1" (UID: "63ec2f41-5c86-4984-b5e1-931048bef5c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.078980 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ec2f41-5c86-4984-b5e1-931048bef5c1-kube-api-access-4rxck" (OuterVolumeSpecName: "kube-api-access-4rxck") pod "63ec2f41-5c86-4984-b5e1-931048bef5c1" (UID: "63ec2f41-5c86-4984-b5e1-931048bef5c1"). InnerVolumeSpecName "kube-api-access-4rxck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.101929 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63ec2f41-5c86-4984-b5e1-931048bef5c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63ec2f41-5c86-4984-b5e1-931048bef5c1" (UID: "63ec2f41-5c86-4984-b5e1-931048bef5c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.175868 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ec2f41-5c86-4984-b5e1-931048bef5c1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.176160 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rxck\" (UniqueName: \"kubernetes.io/projected/63ec2f41-5c86-4984-b5e1-931048bef5c1-kube-api-access-4rxck\") on node \"crc\" DevicePath \"\"" Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.176266 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ec2f41-5c86-4984-b5e1-931048bef5c1-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.472131 4945 generic.go:334] "Generic (PLEG): container finished" podID="63ec2f41-5c86-4984-b5e1-931048bef5c1" containerID="50a713f03ede5277fa07b7e8e39183c17eaeee421d7afa1e986867ff77f8c5a3" exitCode=0 Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.472288 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ks8n" event={"ID":"63ec2f41-5c86-4984-b5e1-931048bef5c1","Type":"ContainerDied","Data":"50a713f03ede5277fa07b7e8e39183c17eaeee421d7afa1e986867ff77f8c5a3"} Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.472352 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ks8n" event={"ID":"63ec2f41-5c86-4984-b5e1-931048bef5c1","Type":"ContainerDied","Data":"0aa424a8b92dd79b15616cee5b6a1593dfb96910bbcc9fb7fedfad65ddba2e8f"} Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.472382 4945 scope.go:117] "RemoveContainer" containerID="50a713f03ede5277fa07b7e8e39183c17eaeee421d7afa1e986867ff77f8c5a3" Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.472418 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4tdng" podUID="64cbbf1a-592e-4018-a4e3-78023ab25ef5" containerName="registry-server" containerID="cri-o://d0572da5cd794b0ba6123a5422f5d44be098f58b63468fe2c468261fb7e6d920" gracePeriod=2 Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.472929 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5ks8n" Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.490760 4945 scope.go:117] "RemoveContainer" containerID="93d4baec0831d4c145c690eb7c2d6e655a4967c669762a46cc07aef1ba43593b" Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.504454 4945 scope.go:117] "RemoveContainer" containerID="6f19da382af8842b809bff61d4f77bea5748b3b03981a0459db1d2d860a9b51b" Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.627856 4945 scope.go:117] "RemoveContainer" containerID="50a713f03ede5277fa07b7e8e39183c17eaeee421d7afa1e986867ff77f8c5a3" Jan 08 23:51:03 crc kubenswrapper[4945]: E0108 23:51:03.633562 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50a713f03ede5277fa07b7e8e39183c17eaeee421d7afa1e986867ff77f8c5a3\": container with ID starting with 50a713f03ede5277fa07b7e8e39183c17eaeee421d7afa1e986867ff77f8c5a3 not found: ID does not exist" containerID="50a713f03ede5277fa07b7e8e39183c17eaeee421d7afa1e986867ff77f8c5a3" Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.633837 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a713f03ede5277fa07b7e8e39183c17eaeee421d7afa1e986867ff77f8c5a3"} err="failed to get container status \"50a713f03ede5277fa07b7e8e39183c17eaeee421d7afa1e986867ff77f8c5a3\": rpc error: code = NotFound desc = could not find container \"50a713f03ede5277fa07b7e8e39183c17eaeee421d7afa1e986867ff77f8c5a3\": container with ID starting with 50a713f03ede5277fa07b7e8e39183c17eaeee421d7afa1e986867ff77f8c5a3 not found: ID does not exist" Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.633930 4945 scope.go:117] "RemoveContainer" containerID="93d4baec0831d4c145c690eb7c2d6e655a4967c669762a46cc07aef1ba43593b" Jan 08 23:51:03 crc kubenswrapper[4945]: E0108 23:51:03.634516 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93d4baec0831d4c145c690eb7c2d6e655a4967c669762a46cc07aef1ba43593b\": container with ID starting with 93d4baec0831d4c145c690eb7c2d6e655a4967c669762a46cc07aef1ba43593b not found: ID does not exist" containerID="93d4baec0831d4c145c690eb7c2d6e655a4967c669762a46cc07aef1ba43593b" Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.634542 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d4baec0831d4c145c690eb7c2d6e655a4967c669762a46cc07aef1ba43593b"} err="failed to get container status \"93d4baec0831d4c145c690eb7c2d6e655a4967c669762a46cc07aef1ba43593b\": rpc error: code = NotFound desc = could not find container \"93d4baec0831d4c145c690eb7c2d6e655a4967c669762a46cc07aef1ba43593b\": container with ID starting with 93d4baec0831d4c145c690eb7c2d6e655a4967c669762a46cc07aef1ba43593b not found: ID does not exist" Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.634558 4945 scope.go:117] "RemoveContainer" containerID="6f19da382af8842b809bff61d4f77bea5748b3b03981a0459db1d2d860a9b51b" Jan 08 23:51:03 crc kubenswrapper[4945]: E0108 23:51:03.634803 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f19da382af8842b809bff61d4f77bea5748b3b03981a0459db1d2d860a9b51b\": container with ID starting with 6f19da382af8842b809bff61d4f77bea5748b3b03981a0459db1d2d860a9b51b not found: ID does not exist" containerID="6f19da382af8842b809bff61d4f77bea5748b3b03981a0459db1d2d860a9b51b" 
Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.634839 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f19da382af8842b809bff61d4f77bea5748b3b03981a0459db1d2d860a9b51b"} err="failed to get container status \"6f19da382af8842b809bff61d4f77bea5748b3b03981a0459db1d2d860a9b51b\": rpc error: code = NotFound desc = could not find container \"6f19da382af8842b809bff61d4f77bea5748b3b03981a0459db1d2d860a9b51b\": container with ID starting with 6f19da382af8842b809bff61d4f77bea5748b3b03981a0459db1d2d860a9b51b not found: ID does not exist"
Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.636778 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ks8n"]
Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.641743 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ks8n"]
Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.898225 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4tdng"
Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.994621 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64cbbf1a-592e-4018-a4e3-78023ab25ef5-catalog-content\") pod \"64cbbf1a-592e-4018-a4e3-78023ab25ef5\" (UID: \"64cbbf1a-592e-4018-a4e3-78023ab25ef5\") "
Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.994703 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64cbbf1a-592e-4018-a4e3-78023ab25ef5-utilities\") pod \"64cbbf1a-592e-4018-a4e3-78023ab25ef5\" (UID: \"64cbbf1a-592e-4018-a4e3-78023ab25ef5\") "
Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.994735 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjnx6\" (UniqueName: \"kubernetes.io/projected/64cbbf1a-592e-4018-a4e3-78023ab25ef5-kube-api-access-pjnx6\") pod \"64cbbf1a-592e-4018-a4e3-78023ab25ef5\" (UID: \"64cbbf1a-592e-4018-a4e3-78023ab25ef5\") "
Jan 08 23:51:03 crc kubenswrapper[4945]: I0108 23:51:03.995770 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64cbbf1a-592e-4018-a4e3-78023ab25ef5-utilities" (OuterVolumeSpecName: "utilities") pod "64cbbf1a-592e-4018-a4e3-78023ab25ef5" (UID: "64cbbf1a-592e-4018-a4e3-78023ab25ef5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:51:04 crc kubenswrapper[4945]: I0108 23:51:04.002084 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64cbbf1a-592e-4018-a4e3-78023ab25ef5-kube-api-access-pjnx6" (OuterVolumeSpecName: "kube-api-access-pjnx6") pod "64cbbf1a-592e-4018-a4e3-78023ab25ef5" (UID: "64cbbf1a-592e-4018-a4e3-78023ab25ef5"). InnerVolumeSpecName "kube-api-access-pjnx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 08 23:51:04 crc kubenswrapper[4945]: I0108 23:51:04.011740 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63ec2f41-5c86-4984-b5e1-931048bef5c1" path="/var/lib/kubelet/pods/63ec2f41-5c86-4984-b5e1-931048bef5c1/volumes"
Jan 08 23:51:04 crc kubenswrapper[4945]: I0108 23:51:04.097064 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64cbbf1a-592e-4018-a4e3-78023ab25ef5-utilities\") on node \"crc\" DevicePath \"\""
Jan 08 23:51:04 crc kubenswrapper[4945]: I0108 23:51:04.097108 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjnx6\" (UniqueName: \"kubernetes.io/projected/64cbbf1a-592e-4018-a4e3-78023ab25ef5-kube-api-access-pjnx6\") on node \"crc\" DevicePath \"\""
Jan 08 23:51:04 crc kubenswrapper[4945]: I0108 23:51:04.481851 4945 generic.go:334] "Generic (PLEG): container finished" podID="64cbbf1a-592e-4018-a4e3-78023ab25ef5" containerID="d0572da5cd794b0ba6123a5422f5d44be098f58b63468fe2c468261fb7e6d920" exitCode=0
Jan 08 23:51:04 crc kubenswrapper[4945]: I0108 23:51:04.481923 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4tdng"
Jan 08 23:51:04 crc kubenswrapper[4945]: I0108 23:51:04.481948 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tdng" event={"ID":"64cbbf1a-592e-4018-a4e3-78023ab25ef5","Type":"ContainerDied","Data":"d0572da5cd794b0ba6123a5422f5d44be098f58b63468fe2c468261fb7e6d920"}
Jan 08 23:51:04 crc kubenswrapper[4945]: I0108 23:51:04.482530 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4tdng" event={"ID":"64cbbf1a-592e-4018-a4e3-78023ab25ef5","Type":"ContainerDied","Data":"589a907b8f634a1185bd5f8a1a3b19e79dba68ced2b5f50d3473f78c3486abd7"}
Jan 08 23:51:04 crc kubenswrapper[4945]: I0108 23:51:04.482564 4945 scope.go:117] "RemoveContainer" containerID="d0572da5cd794b0ba6123a5422f5d44be098f58b63468fe2c468261fb7e6d920"
Jan 08 23:51:04 crc kubenswrapper[4945]: I0108 23:51:04.506665 4945 scope.go:117] "RemoveContainer" containerID="ee4bd89a7ecfaa0140fed7906083134d0b46b5f9d8bcb96135ee24bfc8367aa2"
Jan 08 23:51:04 crc kubenswrapper[4945]: I0108 23:51:04.532568 4945 scope.go:117] "RemoveContainer" containerID="8ede9f727ffd6b917f60ff04b557df6cae5c1fd4a6eeeada7e335aae1be31ed9"
Jan 08 23:51:04 crc kubenswrapper[4945]: I0108 23:51:04.551538 4945 scope.go:117] "RemoveContainer" containerID="d0572da5cd794b0ba6123a5422f5d44be098f58b63468fe2c468261fb7e6d920"
Jan 08 23:51:04 crc kubenswrapper[4945]: E0108 23:51:04.552027 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0572da5cd794b0ba6123a5422f5d44be098f58b63468fe2c468261fb7e6d920\": container with ID starting with d0572da5cd794b0ba6123a5422f5d44be098f58b63468fe2c468261fb7e6d920 not found: ID does not exist" containerID="d0572da5cd794b0ba6123a5422f5d44be098f58b63468fe2c468261fb7e6d920"
Jan 08 23:51:04 crc kubenswrapper[4945]: I0108 23:51:04.552065 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0572da5cd794b0ba6123a5422f5d44be098f58b63468fe2c468261fb7e6d920"} err="failed to get container status \"d0572da5cd794b0ba6123a5422f5d44be098f58b63468fe2c468261fb7e6d920\": rpc error: code = NotFound desc = could not find container \"d0572da5cd794b0ba6123a5422f5d44be098f58b63468fe2c468261fb7e6d920\": container with ID starting with d0572da5cd794b0ba6123a5422f5d44be098f58b63468fe2c468261fb7e6d920 not found: ID does not exist"
Jan 08 23:51:04 crc kubenswrapper[4945]: I0108 23:51:04.552249 4945 scope.go:117] "RemoveContainer" containerID="ee4bd89a7ecfaa0140fed7906083134d0b46b5f9d8bcb96135ee24bfc8367aa2"
Jan 08 23:51:04 crc kubenswrapper[4945]: E0108 23:51:04.552710 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee4bd89a7ecfaa0140fed7906083134d0b46b5f9d8bcb96135ee24bfc8367aa2\": container with ID starting with ee4bd89a7ecfaa0140fed7906083134d0b46b5f9d8bcb96135ee24bfc8367aa2 not found: ID does not exist" containerID="ee4bd89a7ecfaa0140fed7906083134d0b46b5f9d8bcb96135ee24bfc8367aa2"
Jan 08 23:51:04 crc kubenswrapper[4945]: I0108 23:51:04.552763 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee4bd89a7ecfaa0140fed7906083134d0b46b5f9d8bcb96135ee24bfc8367aa2"} err="failed to get container status \"ee4bd89a7ecfaa0140fed7906083134d0b46b5f9d8bcb96135ee24bfc8367aa2\": rpc error: code = NotFound desc = could not find container \"ee4bd89a7ecfaa0140fed7906083134d0b46b5f9d8bcb96135ee24bfc8367aa2\": container with ID starting with ee4bd89a7ecfaa0140fed7906083134d0b46b5f9d8bcb96135ee24bfc8367aa2 not found: ID does not exist"
Jan 08 23:51:04 crc kubenswrapper[4945]: I0108 23:51:04.552800 4945 scope.go:117] "RemoveContainer" containerID="8ede9f727ffd6b917f60ff04b557df6cae5c1fd4a6eeeada7e335aae1be31ed9"
Jan 08 23:51:04 crc kubenswrapper[4945]: E0108 23:51:04.553331 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ede9f727ffd6b917f60ff04b557df6cae5c1fd4a6eeeada7e335aae1be31ed9\": container with ID starting with 8ede9f727ffd6b917f60ff04b557df6cae5c1fd4a6eeeada7e335aae1be31ed9 not found: ID does not exist" containerID="8ede9f727ffd6b917f60ff04b557df6cae5c1fd4a6eeeada7e335aae1be31ed9"
Jan 08 23:51:04 crc kubenswrapper[4945]: I0108 23:51:04.553387 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ede9f727ffd6b917f60ff04b557df6cae5c1fd4a6eeeada7e335aae1be31ed9"} err="failed to get container status \"8ede9f727ffd6b917f60ff04b557df6cae5c1fd4a6eeeada7e335aae1be31ed9\": rpc error: code = NotFound desc = could not find container \"8ede9f727ffd6b917f60ff04b557df6cae5c1fd4a6eeeada7e335aae1be31ed9\": container with ID starting with 8ede9f727ffd6b917f60ff04b557df6cae5c1fd4a6eeeada7e335aae1be31ed9 not found: ID does not exist"
Jan 08 23:51:05 crc kubenswrapper[4945]: I0108 23:51:05.135150 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64cbbf1a-592e-4018-a4e3-78023ab25ef5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64cbbf1a-592e-4018-a4e3-78023ab25ef5" (UID: "64cbbf1a-592e-4018-a4e3-78023ab25ef5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 08 23:51:05 crc kubenswrapper[4945]: I0108 23:51:05.213916 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64cbbf1a-592e-4018-a4e3-78023ab25ef5-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 08 23:51:05 crc kubenswrapper[4945]: I0108 23:51:05.422131 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4tdng"]
Jan 08 23:51:05 crc kubenswrapper[4945]: I0108 23:51:05.429643 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4tdng"]
Jan 08 23:51:06 crc kubenswrapper[4945]: I0108 23:51:06.010850 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64cbbf1a-592e-4018-a4e3-78023ab25ef5" path="/var/lib/kubelet/pods/64cbbf1a-592e-4018-a4e3-78023ab25ef5/volumes"
Jan 08 23:51:43 crc kubenswrapper[4945]: I0108 23:51:43.578856 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 08 23:51:43 crc kubenswrapper[4945]: I0108 23:51:43.579672 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 08 23:52:13 crc kubenswrapper[4945]: I0108 23:52:13.578199 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 08 23:52:13 crc kubenswrapper[4945]: I0108 23:52:13.579310 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 08 23:52:43 crc kubenswrapper[4945]: I0108 23:52:43.578208 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 08 23:52:43 crc kubenswrapper[4945]: I0108 23:52:43.579120 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 08 23:52:43 crc kubenswrapper[4945]: I0108 23:52:43.579210 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95"
Jan 08 23:52:43 crc kubenswrapper[4945]: I0108 23:52:43.580297 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 08 23:52:43 crc kubenswrapper[4945]: I0108 23:52:43.580418 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" gracePeriod=600
Jan 08 23:52:43 crc kubenswrapper[4945]: E0108 23:52:43.705282 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 08 23:52:44 crc kubenswrapper[4945]: I0108 23:52:44.313292 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" exitCode=0
Jan 08 23:52:44 crc kubenswrapper[4945]: I0108 23:52:44.313368 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0"}
Jan 08 23:52:44 crc kubenswrapper[4945]: I0108 23:52:44.313776 4945 scope.go:117] "RemoveContainer" containerID="2aa3b6ab2306a98ace51571c2901bf3e4aeb0a371e3372d9716e50ba1bf0a86a"
Jan 08 23:52:44 crc kubenswrapper[4945]: I0108 23:52:44.314446 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0"
Jan 08 23:52:44 crc kubenswrapper[4945]: E0108 23:52:44.314806 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 08 23:52:58 crc kubenswrapper[4945]: I0108 23:52:58.000783 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0"
Jan 08 23:52:58 crc kubenswrapper[4945]: E0108 23:52:58.001633 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 08 23:53:09 crc kubenswrapper[4945]: I0108 23:53:09.000617 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0"
Jan 08 23:53:09 crc kubenswrapper[4945]: E0108 23:53:09.001453 4945 pod_workers.go:1301] "Error syncing pod,
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:53:21 crc kubenswrapper[4945]: I0108 23:53:21.000044 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:53:21 crc kubenswrapper[4945]: E0108 23:53:21.000607 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:53:36 crc kubenswrapper[4945]: I0108 23:53:36.001502 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:53:36 crc kubenswrapper[4945]: E0108 23:53:36.002305 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:53:49 crc kubenswrapper[4945]: I0108 23:53:49.001701 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:53:49 crc kubenswrapper[4945]: E0108 23:53:49.002396 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:54:04 crc kubenswrapper[4945]: I0108 23:54:04.001135 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:54:04 crc kubenswrapper[4945]: E0108 23:54:04.002451 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:54:19 crc kubenswrapper[4945]: I0108 23:54:19.000662 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:54:19 crc kubenswrapper[4945]: E0108 23:54:19.001445 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:54:32 crc kubenswrapper[4945]: I0108 23:54:32.001082 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:54:32 crc kubenswrapper[4945]: E0108 23:54:32.002090 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:54:46 crc kubenswrapper[4945]: I0108 23:54:46.000605 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:54:46 crc kubenswrapper[4945]: E0108 23:54:46.001374 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:54:59 crc kubenswrapper[4945]: I0108 23:54:58.999983 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:54:59 crc kubenswrapper[4945]: E0108 23:54:59.000796 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:55:13 crc kubenswrapper[4945]: I0108 23:55:12.999809 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:55:13 crc kubenswrapper[4945]: E0108 23:55:13.000551 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:55:24 crc kubenswrapper[4945]: I0108 23:55:24.000698 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:55:24 crc kubenswrapper[4945]: E0108 23:55:24.001437 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" 
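
[editor's note] The failure that put this pod into back-off in the first place is the repeating liveness probe above: a plain HTTP GET against 127.0.0.1:8798/health that dies at the TCP dial while the daemon is down. A self-contained sketch of one probe round (URL and failure mode taken from the log; probeOnce is an illustrative helper, not the kubelet prober itself):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce runs one HTTP liveness check: any transport error (such as
// "dial tcp ... connect: connection refused") or a non-2xx status is a failure.
func probeOnce(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probe failed: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// With nothing listening on 8798 this prints the same class of error
	// recorded in the log above.
	if err := probeOnce("http://127.0.0.1:8798/health", time.Second); err != nil {
		fmt.Println(err)
	}
}
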
podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:55:37 crc kubenswrapper[4945]: I0108 23:55:37.000715 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:55:37 crc kubenswrapper[4945]: E0108 23:55:37.001429 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:55:50 crc kubenswrapper[4945]: I0108 23:55:50.005408 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:55:50 crc kubenswrapper[4945]: E0108 23:55:50.006005 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:56:03 crc kubenswrapper[4945]: I0108 23:56:03.000733 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:56:03 crc kubenswrapper[4945]: E0108 23:56:03.002597 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:56:14 crc kubenswrapper[4945]: I0108 23:56:14.001396 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:56:14 crc kubenswrapper[4945]: E0108 23:56:14.002267 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:56:26 crc kubenswrapper[4945]: I0108 23:56:26.001073 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:56:26 crc kubenswrapper[4945]: E0108 23:56:26.002214 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:56:38 crc kubenswrapper[4945]: I0108 23:56:38.000292 4945 scope.go:117] "RemoveContainer" 
containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:56:38 crc kubenswrapper[4945]: E0108 23:56:38.001256 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:56:50 crc kubenswrapper[4945]: I0108 23:56:50.003244 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:56:50 crc kubenswrapper[4945]: E0108 23:56:50.004278 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:57:03 crc kubenswrapper[4945]: I0108 23:57:03.000783 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:57:03 crc kubenswrapper[4945]: E0108 23:57:03.001594 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:57:15 crc kubenswrapper[4945]: I0108 23:57:15.000376 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:57:15 crc kubenswrapper[4945]: E0108 23:57:15.003243 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:57:28 crc kubenswrapper[4945]: I0108 23:57:28.001217 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:57:28 crc kubenswrapper[4945]: E0108 23:57:28.002754 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:57:41 crc kubenswrapper[4945]: I0108 23:57:41.000126 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:57:41 crc kubenswrapper[4945]: E0108 23:57:41.000900 4945 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 08 23:57:53 crc kubenswrapper[4945]: I0108 23:57:53.000517 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 08 23:57:53 crc kubenswrapper[4945]: I0108 23:57:53.801585 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"1eb0488c81865add13522725ec7e9dfbdde53b657c212e13eb652906573fce3b"} Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.721577 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qqq9g"] Jan 08 23:59:41 crc kubenswrapper[4945]: E0108 23:59:41.723822 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64cbbf1a-592e-4018-a4e3-78023ab25ef5" containerName="extract-utilities" Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.723905 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="64cbbf1a-592e-4018-a4e3-78023ab25ef5" containerName="extract-utilities" Jan 08 23:59:41 crc kubenswrapper[4945]: E0108 23:59:41.723967 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64cbbf1a-592e-4018-a4e3-78023ab25ef5" containerName="registry-server" Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.724050 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="64cbbf1a-592e-4018-a4e3-78023ab25ef5" containerName="registry-server" Jan 08 23:59:41 crc kubenswrapper[4945]: E0108 23:59:41.724111 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ec2f41-5c86-4984-b5e1-931048bef5c1" containerName="extract-utilities" Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.724166 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ec2f41-5c86-4984-b5e1-931048bef5c1" containerName="extract-utilities" Jan 08 23:59:41 crc kubenswrapper[4945]: E0108 23:59:41.724225 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64cbbf1a-592e-4018-a4e3-78023ab25ef5" containerName="extract-content" Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.724281 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="64cbbf1a-592e-4018-a4e3-78023ab25ef5" containerName="extract-content" Jan 08 23:59:41 crc kubenswrapper[4945]: E0108 23:59:41.724338 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ec2f41-5c86-4984-b5e1-931048bef5c1" containerName="extract-content" Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.724397 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ec2f41-5c86-4984-b5e1-931048bef5c1" containerName="extract-content" Jan 08 23:59:41 crc kubenswrapper[4945]: E0108 23:59:41.724451 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ec2f41-5c86-4984-b5e1-931048bef5c1" containerName="registry-server" Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.724505 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ec2f41-5c86-4984-b5e1-931048bef5c1" containerName="registry-server" Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.724727 4945 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="63ec2f41-5c86-4984-b5e1-931048bef5c1" containerName="registry-server" Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.724823 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="64cbbf1a-592e-4018-a4e3-78023ab25ef5" containerName="registry-server" Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.726038 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qqq9g" Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.728941 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qqq9g"] Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.876473 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18114b94-0719-4ba2-ad45-319155634167-catalog-content\") pod \"community-operators-qqq9g\" (UID: \"18114b94-0719-4ba2-ad45-319155634167\") " pod="openshift-marketplace/community-operators-qqq9g" Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.876536 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg9kp\" (UniqueName: \"kubernetes.io/projected/18114b94-0719-4ba2-ad45-319155634167-kube-api-access-mg9kp\") pod \"community-operators-qqq9g\" (UID: \"18114b94-0719-4ba2-ad45-319155634167\") " pod="openshift-marketplace/community-operators-qqq9g" Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.876590 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18114b94-0719-4ba2-ad45-319155634167-utilities\") pod \"community-operators-qqq9g\" (UID: \"18114b94-0719-4ba2-ad45-319155634167\") " pod="openshift-marketplace/community-operators-qqq9g" Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.977431 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg9kp\" (UniqueName: \"kubernetes.io/projected/18114b94-0719-4ba2-ad45-319155634167-kube-api-access-mg9kp\") pod \"community-operators-qqq9g\" (UID: \"18114b94-0719-4ba2-ad45-319155634167\") " pod="openshift-marketplace/community-operators-qqq9g" Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.977517 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18114b94-0719-4ba2-ad45-319155634167-utilities\") pod \"community-operators-qqq9g\" (UID: \"18114b94-0719-4ba2-ad45-319155634167\") " pod="openshift-marketplace/community-operators-qqq9g" Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.977571 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18114b94-0719-4ba2-ad45-319155634167-catalog-content\") pod \"community-operators-qqq9g\" (UID: \"18114b94-0719-4ba2-ad45-319155634167\") " pod="openshift-marketplace/community-operators-qqq9g" Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.978091 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18114b94-0719-4ba2-ad45-319155634167-catalog-content\") pod \"community-operators-qqq9g\" (UID: \"18114b94-0719-4ba2-ad45-319155634167\") " pod="openshift-marketplace/community-operators-qqq9g" Jan 08 23:59:41 crc kubenswrapper[4945]: 
I0108 23:59:41.978383 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18114b94-0719-4ba2-ad45-319155634167-utilities\") pod \"community-operators-qqq9g\" (UID: \"18114b94-0719-4ba2-ad45-319155634167\") " pod="openshift-marketplace/community-operators-qqq9g" Jan 08 23:59:41 crc kubenswrapper[4945]: I0108 23:59:41.994623 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg9kp\" (UniqueName: \"kubernetes.io/projected/18114b94-0719-4ba2-ad45-319155634167-kube-api-access-mg9kp\") pod \"community-operators-qqq9g\" (UID: \"18114b94-0719-4ba2-ad45-319155634167\") " pod="openshift-marketplace/community-operators-qqq9g" Jan 08 23:59:42 crc kubenswrapper[4945]: I0108 23:59:42.066221 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qqq9g" Jan 08 23:59:42 crc kubenswrapper[4945]: I0108 23:59:42.599708 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qqq9g"] Jan 08 23:59:42 crc kubenswrapper[4945]: I0108 23:59:42.636967 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqq9g" event={"ID":"18114b94-0719-4ba2-ad45-319155634167","Type":"ContainerStarted","Data":"1f8a0b693936f69e7579b31fde70cdce12c5b3e7785c9cebe8aa986b90dba224"} Jan 08 23:59:43 crc kubenswrapper[4945]: I0108 23:59:43.645249 4945 generic.go:334] "Generic (PLEG): container finished" podID="18114b94-0719-4ba2-ad45-319155634167" containerID="bd3eb1ffa8c4f00ccb586627d14162ea5ac96d3dbf1547a98e7cfed226ca1bf6" exitCode=0 Jan 08 23:59:43 crc kubenswrapper[4945]: I0108 23:59:43.645349 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqq9g" event={"ID":"18114b94-0719-4ba2-ad45-319155634167","Type":"ContainerDied","Data":"bd3eb1ffa8c4f00ccb586627d14162ea5ac96d3dbf1547a98e7cfed226ca1bf6"} Jan 08 23:59:43 crc kubenswrapper[4945]: I0108 23:59:43.647013 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 08 23:59:44 crc kubenswrapper[4945]: I0108 23:59:44.654291 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqq9g" event={"ID":"18114b94-0719-4ba2-ad45-319155634167","Type":"ContainerStarted","Data":"b4f326324cb5f0dad79335136cff1c89cdf16d3f9737c6d834a14778d1bc3a31"} Jan 08 23:59:45 crc kubenswrapper[4945]: I0108 23:59:45.666427 4945 generic.go:334] "Generic (PLEG): container finished" podID="18114b94-0719-4ba2-ad45-319155634167" containerID="b4f326324cb5f0dad79335136cff1c89cdf16d3f9737c6d834a14778d1bc3a31" exitCode=0 Jan 08 23:59:45 crc kubenswrapper[4945]: I0108 23:59:45.666501 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqq9g" event={"ID":"18114b94-0719-4ba2-ad45-319155634167","Type":"ContainerDied","Data":"b4f326324cb5f0dad79335136cff1c89cdf16d3f9737c6d834a14778d1bc3a31"} Jan 08 23:59:47 crc kubenswrapper[4945]: I0108 23:59:47.684117 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqq9g" event={"ID":"18114b94-0719-4ba2-ad45-319155634167","Type":"ContainerStarted","Data":"37aae0d9075d2bce1f1b8b4a7b2263c3573d4a6ade0caef1bc5276c2dc0dbf2a"} Jan 08 23:59:47 crc kubenswrapper[4945]: I0108 23:59:47.711924 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
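
[editor's note] The "Observed pod startup duration" record that completes just below is worth decoding: podStartE2EDuration is observed-running time minus pod creation, while podStartSLOduration additionally excludes time spent pulling images (lastFinishedPulling minus firstStartedPulling, about 2.84s here). Redoing the arithmetic with the logged timestamps:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the pod_startup_latency_tracker record below.
	created := mustParse("2026-01-08T23:59:41Z")
	firstPull := mustParse("2026-01-08T23:59:43.64676608Z")
	lastPull := mustParse("2026-01-08T23:59:46.490331464Z")
	running := mustParse("2026-01-08T23:59:47.711903734Z")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // image-pull time excluded
	// Prints 6.711903734s 3.86833835s, matching the logged durations
	// up to last-digit rounding.
	fmt.Println(e2e, slo)
}
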
pod="openshift-marketplace/community-operators-qqq9g" podStartSLOduration=3.86833834 podStartE2EDuration="6.711903734s" podCreationTimestamp="2026-01-08 23:59:41 +0000 UTC" firstStartedPulling="2026-01-08 23:59:43.64676608 +0000 UTC m=+2653.957925016" lastFinishedPulling="2026-01-08 23:59:46.490331464 +0000 UTC m=+2656.801490410" observedRunningTime="2026-01-08 23:59:47.707758812 +0000 UTC m=+2658.018917768" watchObservedRunningTime="2026-01-08 23:59:47.711903734 +0000 UTC m=+2658.023062680" Jan 08 23:59:52 crc kubenswrapper[4945]: I0108 23:59:52.066507 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qqq9g" Jan 08 23:59:52 crc kubenswrapper[4945]: I0108 23:59:52.067081 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qqq9g" Jan 08 23:59:52 crc kubenswrapper[4945]: I0108 23:59:52.109185 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qqq9g" Jan 08 23:59:52 crc kubenswrapper[4945]: I0108 23:59:52.759207 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qqq9g" Jan 08 23:59:52 crc kubenswrapper[4945]: I0108 23:59:52.800692 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qqq9g"] Jan 08 23:59:54 crc kubenswrapper[4945]: I0108 23:59:54.725808 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qqq9g" podUID="18114b94-0719-4ba2-ad45-319155634167" containerName="registry-server" containerID="cri-o://37aae0d9075d2bce1f1b8b4a7b2263c3573d4a6ade0caef1bc5276c2dc0dbf2a" gracePeriod=2 Jan 08 23:59:55 crc kubenswrapper[4945]: I0108 23:59:55.736104 4945 generic.go:334] "Generic (PLEG): container finished" podID="18114b94-0719-4ba2-ad45-319155634167" containerID="37aae0d9075d2bce1f1b8b4a7b2263c3573d4a6ade0caef1bc5276c2dc0dbf2a" exitCode=0 Jan 08 23:59:55 crc kubenswrapper[4945]: I0108 23:59:55.736299 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqq9g" event={"ID":"18114b94-0719-4ba2-ad45-319155634167","Type":"ContainerDied","Data":"37aae0d9075d2bce1f1b8b4a7b2263c3573d4a6ade0caef1bc5276c2dc0dbf2a"} Jan 08 23:59:55 crc kubenswrapper[4945]: I0108 23:59:55.875320 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qqq9g" Jan 08 23:59:55 crc kubenswrapper[4945]: I0108 23:59:55.984308 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg9kp\" (UniqueName: \"kubernetes.io/projected/18114b94-0719-4ba2-ad45-319155634167-kube-api-access-mg9kp\") pod \"18114b94-0719-4ba2-ad45-319155634167\" (UID: \"18114b94-0719-4ba2-ad45-319155634167\") " Jan 08 23:59:55 crc kubenswrapper[4945]: I0108 23:59:55.984437 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18114b94-0719-4ba2-ad45-319155634167-catalog-content\") pod \"18114b94-0719-4ba2-ad45-319155634167\" (UID: \"18114b94-0719-4ba2-ad45-319155634167\") " Jan 08 23:59:55 crc kubenswrapper[4945]: I0108 23:59:55.984532 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18114b94-0719-4ba2-ad45-319155634167-utilities\") pod \"18114b94-0719-4ba2-ad45-319155634167\" (UID: \"18114b94-0719-4ba2-ad45-319155634167\") " Jan 08 23:59:55 crc kubenswrapper[4945]: I0108 23:59:55.985441 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18114b94-0719-4ba2-ad45-319155634167-utilities" (OuterVolumeSpecName: "utilities") pod "18114b94-0719-4ba2-ad45-319155634167" (UID: "18114b94-0719-4ba2-ad45-319155634167"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:59:55 crc kubenswrapper[4945]: I0108 23:59:55.989928 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18114b94-0719-4ba2-ad45-319155634167-kube-api-access-mg9kp" (OuterVolumeSpecName: "kube-api-access-mg9kp") pod "18114b94-0719-4ba2-ad45-319155634167" (UID: "18114b94-0719-4ba2-ad45-319155634167"). InnerVolumeSpecName "kube-api-access-mg9kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 08 23:59:56 crc kubenswrapper[4945]: I0108 23:59:56.037195 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18114b94-0719-4ba2-ad45-319155634167-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18114b94-0719-4ba2-ad45-319155634167" (UID: "18114b94-0719-4ba2-ad45-319155634167"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 08 23:59:56 crc kubenswrapper[4945]: I0108 23:59:56.086483 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18114b94-0719-4ba2-ad45-319155634167-utilities\") on node \"crc\" DevicePath \"\"" Jan 08 23:59:56 crc kubenswrapper[4945]: I0108 23:59:56.086515 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg9kp\" (UniqueName: \"kubernetes.io/projected/18114b94-0719-4ba2-ad45-319155634167-kube-api-access-mg9kp\") on node \"crc\" DevicePath \"\"" Jan 08 23:59:56 crc kubenswrapper[4945]: I0108 23:59:56.086525 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18114b94-0719-4ba2-ad45-319155634167-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 08 23:59:56 crc kubenswrapper[4945]: I0108 23:59:56.745573 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqq9g" event={"ID":"18114b94-0719-4ba2-ad45-319155634167","Type":"ContainerDied","Data":"1f8a0b693936f69e7579b31fde70cdce12c5b3e7785c9cebe8aa986b90dba224"} Jan 08 23:59:56 crc kubenswrapper[4945]: I0108 23:59:56.745988 4945 scope.go:117] "RemoveContainer" containerID="37aae0d9075d2bce1f1b8b4a7b2263c3573d4a6ade0caef1bc5276c2dc0dbf2a" Jan 08 23:59:56 crc kubenswrapper[4945]: I0108 23:59:56.745620 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qqq9g" Jan 08 23:59:56 crc kubenswrapper[4945]: I0108 23:59:56.772139 4945 scope.go:117] "RemoveContainer" containerID="b4f326324cb5f0dad79335136cff1c89cdf16d3f9737c6d834a14778d1bc3a31" Jan 08 23:59:56 crc kubenswrapper[4945]: I0108 23:59:56.776116 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qqq9g"] Jan 08 23:59:56 crc kubenswrapper[4945]: I0108 23:59:56.782029 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qqq9g"] Jan 08 23:59:56 crc kubenswrapper[4945]: I0108 23:59:56.797247 4945 scope.go:117] "RemoveContainer" containerID="bd3eb1ffa8c4f00ccb586627d14162ea5ac96d3dbf1547a98e7cfed226ca1bf6" Jan 08 23:59:58 crc kubenswrapper[4945]: I0108 23:59:58.010483 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18114b94-0719-4ba2-ad45-319155634167" path="/var/lib/kubelet/pods/18114b94-0719-4ba2-ad45-319155634167/volumes" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.160419 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599"] Jan 09 00:00:00 crc kubenswrapper[4945]: E0109 00:00:00.161432 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18114b94-0719-4ba2-ad45-319155634167" containerName="extract-content" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.161451 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="18114b94-0719-4ba2-ad45-319155634167" containerName="extract-content" Jan 09 00:00:00 crc kubenswrapper[4945]: E0109 00:00:00.161477 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18114b94-0719-4ba2-ad45-319155634167" containerName="registry-server" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.161489 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="18114b94-0719-4ba2-ad45-319155634167" containerName="registry-server" Jan 09 00:00:00 crc kubenswrapper[4945]: E0109 
00:00:00.161512 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18114b94-0719-4ba2-ad45-319155634167" containerName="extract-utilities" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.161520 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="18114b94-0719-4ba2-ad45-319155634167" containerName="extract-utilities" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.161697 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="18114b94-0719-4ba2-ad45-319155634167" containerName="registry-server" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.162350 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.166644 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.166912 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.167721 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29465280-tl4mb"] Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.169248 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29465280-tl4mb" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.170817 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.171475 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.172868 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599"] Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.189366 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29465280-tl4mb"] Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.246913 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b57b79d3-0b89-4f67-afd3-024709104516-config-volume\") pod \"collect-profiles-29465280-g2599\" (UID: \"b57b79d3-0b89-4f67-afd3-024709104516\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.246967 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4de27da5-ddc5-40f7-bd70-b4eb64cd0857-serviceca\") pod \"image-pruner-29465280-tl4mb\" (UID: \"4de27da5-ddc5-40f7-bd70-b4eb64cd0857\") " pod="openshift-image-registry/image-pruner-29465280-tl4mb" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.247005 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b57b79d3-0b89-4f67-afd3-024709104516-secret-volume\") pod \"collect-profiles-29465280-g2599\" (UID: \"b57b79d3-0b89-4f67-afd3-024709104516\") " 
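
[editor's note] The RemoveStaleState / "Deleted CPUSet assignment" burst above is admission-time housekeeping: when a new pod arrives, the CPU and memory managers drop per-container resource assignments still recorded for pods that no longer exist (here the community-operators pod deleted moments earlier). A toy version of that cleanup, assuming assignments keyed by pod UID and container name (the types are illustrative):

package main

import "fmt"

// assignments maps podUID -> containerName -> assigned CPU set (a plain
// string here; the real managers keep richer state).
type assignments map[string]map[string]string

// removeStaleState drops every assignment whose pod is no longer live,
// mirroring the cpu_manager/memory_manager lines above.
func removeStaleState(a assignments, live map[string]bool) {
	for podUID, containers := range a {
		if live[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("removing stale state podUID=%q containerName=%q\n", podUID, name)
		}
		delete(a, podUID)
	}
}

func main() {
	// State left over from the deleted community-operators pod...
	a := assignments{
		"18114b94-0719-4ba2-ad45-319155634167": {"registry-server": "0-3"},
	}
	// ...is purged when only the newly admitted collect-profiles pod is live.
	removeStaleState(a, map[string]bool{"b57b79d3-0b89-4f67-afd3-024709104516": true})
}
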
pod="openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.247025 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74fgn\" (UniqueName: \"kubernetes.io/projected/b57b79d3-0b89-4f67-afd3-024709104516-kube-api-access-74fgn\") pod \"collect-profiles-29465280-g2599\" (UID: \"b57b79d3-0b89-4f67-afd3-024709104516\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.247081 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7zh9\" (UniqueName: \"kubernetes.io/projected/4de27da5-ddc5-40f7-bd70-b4eb64cd0857-kube-api-access-p7zh9\") pod \"image-pruner-29465280-tl4mb\" (UID: \"4de27da5-ddc5-40f7-bd70-b4eb64cd0857\") " pod="openshift-image-registry/image-pruner-29465280-tl4mb" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.348320 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b57b79d3-0b89-4f67-afd3-024709104516-secret-volume\") pod \"collect-profiles-29465280-g2599\" (UID: \"b57b79d3-0b89-4f67-afd3-024709104516\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.348374 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74fgn\" (UniqueName: \"kubernetes.io/projected/b57b79d3-0b89-4f67-afd3-024709104516-kube-api-access-74fgn\") pod \"collect-profiles-29465280-g2599\" (UID: \"b57b79d3-0b89-4f67-afd3-024709104516\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.348439 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7zh9\" (UniqueName: \"kubernetes.io/projected/4de27da5-ddc5-40f7-bd70-b4eb64cd0857-kube-api-access-p7zh9\") pod \"image-pruner-29465280-tl4mb\" (UID: \"4de27da5-ddc5-40f7-bd70-b4eb64cd0857\") " pod="openshift-image-registry/image-pruner-29465280-tl4mb" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.348491 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b57b79d3-0b89-4f67-afd3-024709104516-config-volume\") pod \"collect-profiles-29465280-g2599\" (UID: \"b57b79d3-0b89-4f67-afd3-024709104516\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.348510 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4de27da5-ddc5-40f7-bd70-b4eb64cd0857-serviceca\") pod \"image-pruner-29465280-tl4mb\" (UID: \"4de27da5-ddc5-40f7-bd70-b4eb64cd0857\") " pod="openshift-image-registry/image-pruner-29465280-tl4mb" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.349703 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4de27da5-ddc5-40f7-bd70-b4eb64cd0857-serviceca\") pod \"image-pruner-29465280-tl4mb\" (UID: \"4de27da5-ddc5-40f7-bd70-b4eb64cd0857\") " pod="openshift-image-registry/image-pruner-29465280-tl4mb" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.350444 4945 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b57b79d3-0b89-4f67-afd3-024709104516-config-volume\") pod \"collect-profiles-29465280-g2599\" (UID: \"b57b79d3-0b89-4f67-afd3-024709104516\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.355361 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b57b79d3-0b89-4f67-afd3-024709104516-secret-volume\") pod \"collect-profiles-29465280-g2599\" (UID: \"b57b79d3-0b89-4f67-afd3-024709104516\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.366191 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7zh9\" (UniqueName: \"kubernetes.io/projected/4de27da5-ddc5-40f7-bd70-b4eb64cd0857-kube-api-access-p7zh9\") pod \"image-pruner-29465280-tl4mb\" (UID: \"4de27da5-ddc5-40f7-bd70-b4eb64cd0857\") " pod="openshift-image-registry/image-pruner-29465280-tl4mb" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.368518 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74fgn\" (UniqueName: \"kubernetes.io/projected/b57b79d3-0b89-4f67-afd3-024709104516-kube-api-access-74fgn\") pod \"collect-profiles-29465280-g2599\" (UID: \"b57b79d3-0b89-4f67-afd3-024709104516\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.515197 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.529133 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29465280-tl4mb" Jan 09 00:00:00 crc kubenswrapper[4945]: I0109 00:00:00.976058 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29465280-tl4mb"] Jan 09 00:00:01 crc kubenswrapper[4945]: I0109 00:00:01.020614 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599"] Jan 09 00:00:01 crc kubenswrapper[4945]: W0109 00:00:01.024110 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb57b79d3_0b89_4f67_afd3_024709104516.slice/crio-09bdefaa126ea45c5aac9a38eec02bdf49a19e4ce10edd55b90c6bc16dc2965b WatchSource:0}: Error finding container 09bdefaa126ea45c5aac9a38eec02bdf49a19e4ce10edd55b90c6bc16dc2965b: Status 404 returned error can't find the container with id 09bdefaa126ea45c5aac9a38eec02bdf49a19e4ce10edd55b90c6bc16dc2965b Jan 09 00:00:01 crc kubenswrapper[4945]: I0109 00:00:01.789675 4945 generic.go:334] "Generic (PLEG): container finished" podID="b57b79d3-0b89-4f67-afd3-024709104516" containerID="c8d0dcc849b3ea8c67e9cfb11238b55ab0ae5786f6b26b8dd8a1999f60c26686" exitCode=0 Jan 09 00:00:01 crc kubenswrapper[4945]: I0109 00:00:01.789788 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599" event={"ID":"b57b79d3-0b89-4f67-afd3-024709104516","Type":"ContainerDied","Data":"c8d0dcc849b3ea8c67e9cfb11238b55ab0ae5786f6b26b8dd8a1999f60c26686"} Jan 09 00:00:01 crc kubenswrapper[4945]: I0109 00:00:01.790120 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599" event={"ID":"b57b79d3-0b89-4f67-afd3-024709104516","Type":"ContainerStarted","Data":"09bdefaa126ea45c5aac9a38eec02bdf49a19e4ce10edd55b90c6bc16dc2965b"} Jan 09 00:00:01 crc kubenswrapper[4945]: I0109 00:00:01.791726 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29465280-tl4mb" event={"ID":"4de27da5-ddc5-40f7-bd70-b4eb64cd0857","Type":"ContainerStarted","Data":"bfad0400fae9f90b19fdc370ea777ad823abea01af89d9203d9de7498ca859a4"} Jan 09 00:00:01 crc kubenswrapper[4945]: I0109 00:00:01.791761 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29465280-tl4mb" event={"ID":"4de27da5-ddc5-40f7-bd70-b4eb64cd0857","Type":"ContainerStarted","Data":"cf9e1b6fb89f4a9d785b95dafbc769dcae85a3ca40a35ceca37ab32e2b6fec19"} Jan 09 00:00:01 crc kubenswrapper[4945]: I0109 00:00:01.828698 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29465280-tl4mb" podStartSLOduration=1.828680174 podStartE2EDuration="1.828680174s" podCreationTimestamp="2026-01-09 00:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:00:01.824092021 +0000 UTC m=+2672.135250967" watchObservedRunningTime="2026-01-09 00:00:01.828680174 +0000 UTC m=+2672.139839120" Jan 09 00:00:02 crc kubenswrapper[4945]: I0109 00:00:02.799177 4945 generic.go:334] "Generic (PLEG): container finished" podID="4de27da5-ddc5-40f7-bd70-b4eb64cd0857" containerID="bfad0400fae9f90b19fdc370ea777ad823abea01af89d9203d9de7498ca859a4" exitCode=0 Jan 09 00:00:02 crc kubenswrapper[4945]: I0109 00:00:02.799275 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-pruner-29465280-tl4mb" event={"ID":"4de27da5-ddc5-40f7-bd70-b4eb64cd0857","Type":"ContainerDied","Data":"bfad0400fae9f90b19fdc370ea777ad823abea01af89d9203d9de7498ca859a4"} Jan 09 00:00:03 crc kubenswrapper[4945]: I0109 00:00:03.068197 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599" Jan 09 00:00:03 crc kubenswrapper[4945]: I0109 00:00:03.182741 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b57b79d3-0b89-4f67-afd3-024709104516-config-volume\") pod \"b57b79d3-0b89-4f67-afd3-024709104516\" (UID: \"b57b79d3-0b89-4f67-afd3-024709104516\") " Jan 09 00:00:03 crc kubenswrapper[4945]: I0109 00:00:03.182874 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b57b79d3-0b89-4f67-afd3-024709104516-secret-volume\") pod \"b57b79d3-0b89-4f67-afd3-024709104516\" (UID: \"b57b79d3-0b89-4f67-afd3-024709104516\") " Jan 09 00:00:03 crc kubenswrapper[4945]: I0109 00:00:03.183077 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74fgn\" (UniqueName: \"kubernetes.io/projected/b57b79d3-0b89-4f67-afd3-024709104516-kube-api-access-74fgn\") pod \"b57b79d3-0b89-4f67-afd3-024709104516\" (UID: \"b57b79d3-0b89-4f67-afd3-024709104516\") " Jan 09 00:00:03 crc kubenswrapper[4945]: I0109 00:00:03.183829 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b57b79d3-0b89-4f67-afd3-024709104516-config-volume" (OuterVolumeSpecName: "config-volume") pod "b57b79d3-0b89-4f67-afd3-024709104516" (UID: "b57b79d3-0b89-4f67-afd3-024709104516"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:00:03 crc kubenswrapper[4945]: I0109 00:00:03.188557 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b57b79d3-0b89-4f67-afd3-024709104516-kube-api-access-74fgn" (OuterVolumeSpecName: "kube-api-access-74fgn") pod "b57b79d3-0b89-4f67-afd3-024709104516" (UID: "b57b79d3-0b89-4f67-afd3-024709104516"). InnerVolumeSpecName "kube-api-access-74fgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:00:03 crc kubenswrapper[4945]: I0109 00:00:03.192121 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b57b79d3-0b89-4f67-afd3-024709104516-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b57b79d3-0b89-4f67-afd3-024709104516" (UID: "b57b79d3-0b89-4f67-afd3-024709104516"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:00:03 crc kubenswrapper[4945]: I0109 00:00:03.284844 4945 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b57b79d3-0b89-4f67-afd3-024709104516-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 00:00:03 crc kubenswrapper[4945]: I0109 00:00:03.284902 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74fgn\" (UniqueName: \"kubernetes.io/projected/b57b79d3-0b89-4f67-afd3-024709104516-kube-api-access-74fgn\") on node \"crc\" DevicePath \"\"" Jan 09 00:00:03 crc kubenswrapper[4945]: I0109 00:00:03.284923 4945 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b57b79d3-0b89-4f67-afd3-024709104516-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 00:00:03 crc kubenswrapper[4945]: I0109 00:00:03.808310 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599" event={"ID":"b57b79d3-0b89-4f67-afd3-024709104516","Type":"ContainerDied","Data":"09bdefaa126ea45c5aac9a38eec02bdf49a19e4ce10edd55b90c6bc16dc2965b"} Jan 09 00:00:03 crc kubenswrapper[4945]: I0109 00:00:03.808366 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09bdefaa126ea45c5aac9a38eec02bdf49a19e4ce10edd55b90c6bc16dc2965b" Jan 09 00:00:03 crc kubenswrapper[4945]: I0109 00:00:03.808369 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599" Jan 09 00:00:04 crc kubenswrapper[4945]: I0109 00:00:04.039085 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29465280-tl4mb" Jan 09 00:00:04 crc kubenswrapper[4945]: I0109 00:00:04.094807 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4de27da5-ddc5-40f7-bd70-b4eb64cd0857-serviceca\") pod \"4de27da5-ddc5-40f7-bd70-b4eb64cd0857\" (UID: \"4de27da5-ddc5-40f7-bd70-b4eb64cd0857\") " Jan 09 00:00:04 crc kubenswrapper[4945]: I0109 00:00:04.095323 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7zh9\" (UniqueName: \"kubernetes.io/projected/4de27da5-ddc5-40f7-bd70-b4eb64cd0857-kube-api-access-p7zh9\") pod \"4de27da5-ddc5-40f7-bd70-b4eb64cd0857\" (UID: \"4de27da5-ddc5-40f7-bd70-b4eb64cd0857\") " Jan 09 00:00:04 crc kubenswrapper[4945]: I0109 00:00:04.096486 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de27da5-ddc5-40f7-bd70-b4eb64cd0857-serviceca" (OuterVolumeSpecName: "serviceca") pod "4de27da5-ddc5-40f7-bd70-b4eb64cd0857" (UID: "4de27da5-ddc5-40f7-bd70-b4eb64cd0857"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:00:04 crc kubenswrapper[4945]: I0109 00:00:04.099227 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de27da5-ddc5-40f7-bd70-b4eb64cd0857-kube-api-access-p7zh9" (OuterVolumeSpecName: "kube-api-access-p7zh9") pod "4de27da5-ddc5-40f7-bd70-b4eb64cd0857" (UID: "4de27da5-ddc5-40f7-bd70-b4eb64cd0857"). InnerVolumeSpecName "kube-api-access-p7zh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:00:04 crc kubenswrapper[4945]: I0109 00:00:04.144717 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt"] Jan 09 00:00:04 crc kubenswrapper[4945]: I0109 00:00:04.149563 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465235-sg2kt"] Jan 09 00:00:04 crc kubenswrapper[4945]: I0109 00:00:04.197139 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7zh9\" (UniqueName: \"kubernetes.io/projected/4de27da5-ddc5-40f7-bd70-b4eb64cd0857-kube-api-access-p7zh9\") on node \"crc\" DevicePath \"\"" Jan 09 00:00:04 crc kubenswrapper[4945]: I0109 00:00:04.197167 4945 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4de27da5-ddc5-40f7-bd70-b4eb64cd0857-serviceca\") on node \"crc\" DevicePath \"\"" Jan 09 00:00:04 crc kubenswrapper[4945]: I0109 00:00:04.817372 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29465280-tl4mb" event={"ID":"4de27da5-ddc5-40f7-bd70-b4eb64cd0857","Type":"ContainerDied","Data":"cf9e1b6fb89f4a9d785b95dafbc769dcae85a3ca40a35ceca37ab32e2b6fec19"} Jan 09 00:00:04 crc kubenswrapper[4945]: I0109 00:00:04.817412 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf9e1b6fb89f4a9d785b95dafbc769dcae85a3ca40a35ceca37ab32e2b6fec19" Jan 09 00:00:04 crc kubenswrapper[4945]: I0109 00:00:04.817434 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29465280-tl4mb" Jan 09 00:00:06 crc kubenswrapper[4945]: I0109 00:00:06.008947 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27ab75bd-1a05-42c5-b6ee-0917bdc88c6b" path="/var/lib/kubelet/pods/27ab75bd-1a05-42c5-b6ee-0917bdc88c6b/volumes" Jan 09 00:00:13 crc kubenswrapper[4945]: I0109 00:00:13.578030 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:00:13 crc kubenswrapper[4945]: I0109 00:00:13.578615 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:00:34 crc kubenswrapper[4945]: I0109 00:00:34.874507 4945 scope.go:117] "RemoveContainer" containerID="f2d5d787ee2fbce1c779457fb156c668199fad436028b53078a552096f42dadb" Jan 09 00:00:43 crc kubenswrapper[4945]: I0109 00:00:43.578644 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:00:43 crc kubenswrapper[4945]: I0109 00:00:43.579272 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:01:13 crc kubenswrapper[4945]: I0109 00:01:13.578663 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:01:13 crc kubenswrapper[4945]: I0109 00:01:13.579297 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:01:13 crc kubenswrapper[4945]: I0109 00:01:13.579346 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 09 00:01:13 crc kubenswrapper[4945]: I0109 00:01:13.579829 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1eb0488c81865add13522725ec7e9dfbdde53b657c212e13eb652906573fce3b"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 00:01:13 crc kubenswrapper[4945]: I0109 00:01:13.579883 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://1eb0488c81865add13522725ec7e9dfbdde53b657c212e13eb652906573fce3b" gracePeriod=600 Jan 09 00:01:14 crc kubenswrapper[4945]: I0109 00:01:14.279401 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="1eb0488c81865add13522725ec7e9dfbdde53b657c212e13eb652906573fce3b" exitCode=0 Jan 09 00:01:14 crc kubenswrapper[4945]: I0109 00:01:14.279468 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"1eb0488c81865add13522725ec7e9dfbdde53b657c212e13eb652906573fce3b"} Jan 09 00:01:14 crc kubenswrapper[4945]: I0109 00:01:14.279705 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024"} Jan 09 00:01:14 crc kubenswrapper[4945]: I0109 00:01:14.279728 4945 scope.go:117] "RemoveContainer" containerID="b0340e246f0b27dedb4b590a7b8e96541ebedbd15fc91538cba0299e2244cfa0" Jan 09 00:01:25 crc kubenswrapper[4945]: I0109 00:01:25.309760 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g94jw"] Jan 09 00:01:25 crc kubenswrapper[4945]: E0109 00:01:25.310754 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de27da5-ddc5-40f7-bd70-b4eb64cd0857" containerName="image-pruner" Jan 09 00:01:25 crc kubenswrapper[4945]: I0109 00:01:25.310768 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de27da5-ddc5-40f7-bd70-b4eb64cd0857" containerName="image-pruner" Jan 
09 00:01:25 crc kubenswrapper[4945]: E0109 00:01:25.310787 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57b79d3-0b89-4f67-afd3-024709104516" containerName="collect-profiles" Jan 09 00:01:25 crc kubenswrapper[4945]: I0109 00:01:25.310793 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57b79d3-0b89-4f67-afd3-024709104516" containerName="collect-profiles" Jan 09 00:01:25 crc kubenswrapper[4945]: I0109 00:01:25.310947 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de27da5-ddc5-40f7-bd70-b4eb64cd0857" containerName="image-pruner" Jan 09 00:01:25 crc kubenswrapper[4945]: I0109 00:01:25.310961 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="b57b79d3-0b89-4f67-afd3-024709104516" containerName="collect-profiles" Jan 09 00:01:25 crc kubenswrapper[4945]: I0109 00:01:25.311938 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g94jw" Jan 09 00:01:25 crc kubenswrapper[4945]: I0109 00:01:25.335080 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g94jw"] Jan 09 00:01:25 crc kubenswrapper[4945]: I0109 00:01:25.442803 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sz5g\" (UniqueName: \"kubernetes.io/projected/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-kube-api-access-4sz5g\") pod \"redhat-marketplace-g94jw\" (UID: \"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1\") " pod="openshift-marketplace/redhat-marketplace-g94jw" Jan 09 00:01:25 crc kubenswrapper[4945]: I0109 00:01:25.443133 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-catalog-content\") pod \"redhat-marketplace-g94jw\" (UID: \"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1\") " pod="openshift-marketplace/redhat-marketplace-g94jw" Jan 09 00:01:25 crc kubenswrapper[4945]: I0109 00:01:25.443236 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-utilities\") pod \"redhat-marketplace-g94jw\" (UID: \"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1\") " pod="openshift-marketplace/redhat-marketplace-g94jw" Jan 09 00:01:25 crc kubenswrapper[4945]: I0109 00:01:25.544840 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-catalog-content\") pod \"redhat-marketplace-g94jw\" (UID: \"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1\") " pod="openshift-marketplace/redhat-marketplace-g94jw" Jan 09 00:01:25 crc kubenswrapper[4945]: I0109 00:01:25.545508 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-utilities\") pod \"redhat-marketplace-g94jw\" (UID: \"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1\") " pod="openshift-marketplace/redhat-marketplace-g94jw" Jan 09 00:01:25 crc kubenswrapper[4945]: I0109 00:01:25.545700 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sz5g\" (UniqueName: \"kubernetes.io/projected/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-kube-api-access-4sz5g\") pod \"redhat-marketplace-g94jw\" (UID: \"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1\") " 
pod="openshift-marketplace/redhat-marketplace-g94jw" Jan 09 00:01:25 crc kubenswrapper[4945]: I0109 00:01:25.546057 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-catalog-content\") pod \"redhat-marketplace-g94jw\" (UID: \"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1\") " pod="openshift-marketplace/redhat-marketplace-g94jw" Jan 09 00:01:25 crc kubenswrapper[4945]: I0109 00:01:25.546088 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-utilities\") pod \"redhat-marketplace-g94jw\" (UID: \"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1\") " pod="openshift-marketplace/redhat-marketplace-g94jw" Jan 09 00:01:25 crc kubenswrapper[4945]: I0109 00:01:25.575157 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sz5g\" (UniqueName: \"kubernetes.io/projected/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-kube-api-access-4sz5g\") pod \"redhat-marketplace-g94jw\" (UID: \"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1\") " pod="openshift-marketplace/redhat-marketplace-g94jw" Jan 09 00:01:25 crc kubenswrapper[4945]: I0109 00:01:25.629320 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g94jw" Jan 09 00:01:26 crc kubenswrapper[4945]: I0109 00:01:26.042381 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g94jw"] Jan 09 00:01:26 crc kubenswrapper[4945]: I0109 00:01:26.368832 4945 generic.go:334] "Generic (PLEG): container finished" podID="409c2ce4-7cb1-4e78-88ba-d6968f0c92a1" containerID="a558dd96ef29890e91a02a1b8a88a2d747f9df459c9bf8f34d9e9473b02b9315" exitCode=0 Jan 09 00:01:26 crc kubenswrapper[4945]: I0109 00:01:26.368909 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g94jw" event={"ID":"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1","Type":"ContainerDied","Data":"a558dd96ef29890e91a02a1b8a88a2d747f9df459c9bf8f34d9e9473b02b9315"} Jan 09 00:01:26 crc kubenswrapper[4945]: I0109 00:01:26.369154 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g94jw" event={"ID":"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1","Type":"ContainerStarted","Data":"69551dd529731a58c220cd1667c79af129015bf3a6944f7bece8a79bf5384984"} Jan 09 00:01:28 crc kubenswrapper[4945]: I0109 00:01:28.384333 4945 generic.go:334] "Generic (PLEG): container finished" podID="409c2ce4-7cb1-4e78-88ba-d6968f0c92a1" containerID="a1e7f179d87b53ee4ce8fb28593a6d90988d6f8d64d86964067dacf9da9b6d78" exitCode=0 Jan 09 00:01:28 crc kubenswrapper[4945]: I0109 00:01:28.384410 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g94jw" event={"ID":"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1","Type":"ContainerDied","Data":"a1e7f179d87b53ee4ce8fb28593a6d90988d6f8d64d86964067dacf9da9b6d78"} Jan 09 00:01:29 crc kubenswrapper[4945]: I0109 00:01:29.393140 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g94jw" event={"ID":"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1","Type":"ContainerStarted","Data":"e89cbfdeb45178a3ece3b2370125984a4015e88afbd50b925dab9632794a329a"} Jan 09 00:01:29 crc kubenswrapper[4945]: I0109 00:01:29.415017 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-g94jw" podStartSLOduration=1.908901342 podStartE2EDuration="4.414983965s" podCreationTimestamp="2026-01-09 00:01:25 +0000 UTC" firstStartedPulling="2026-01-09 00:01:26.37038408 +0000 UTC m=+2756.681543026" lastFinishedPulling="2026-01-09 00:01:28.876466703 +0000 UTC m=+2759.187625649" observedRunningTime="2026-01-09 00:01:29.41112524 +0000 UTC m=+2759.722284206" watchObservedRunningTime="2026-01-09 00:01:29.414983965 +0000 UTC m=+2759.726142911" Jan 09 00:01:35 crc kubenswrapper[4945]: I0109 00:01:35.629649 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g94jw" Jan 09 00:01:35 crc kubenswrapper[4945]: I0109 00:01:35.630268 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g94jw" Jan 09 00:01:35 crc kubenswrapper[4945]: I0109 00:01:35.677307 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g94jw" Jan 09 00:01:36 crc kubenswrapper[4945]: I0109 00:01:36.494958 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g94jw" Jan 09 00:01:36 crc kubenswrapper[4945]: I0109 00:01:36.540619 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g94jw"] Jan 09 00:01:38 crc kubenswrapper[4945]: I0109 00:01:38.461220 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g94jw" podUID="409c2ce4-7cb1-4e78-88ba-d6968f0c92a1" containerName="registry-server" containerID="cri-o://e89cbfdeb45178a3ece3b2370125984a4015e88afbd50b925dab9632794a329a" gracePeriod=2 Jan 09 00:01:39 crc kubenswrapper[4945]: I0109 00:01:39.506472 4945 generic.go:334] "Generic (PLEG): container finished" podID="409c2ce4-7cb1-4e78-88ba-d6968f0c92a1" containerID="e89cbfdeb45178a3ece3b2370125984a4015e88afbd50b925dab9632794a329a" exitCode=0 Jan 09 00:01:39 crc kubenswrapper[4945]: I0109 00:01:39.506519 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g94jw" event={"ID":"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1","Type":"ContainerDied","Data":"e89cbfdeb45178a3ece3b2370125984a4015e88afbd50b925dab9632794a329a"} Jan 09 00:01:39 crc kubenswrapper[4945]: I0109 00:01:39.815986 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g94jw" Jan 09 00:01:39 crc kubenswrapper[4945]: I0109 00:01:39.998071 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sz5g\" (UniqueName: \"kubernetes.io/projected/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-kube-api-access-4sz5g\") pod \"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1\" (UID: \"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1\") " Jan 09 00:01:39 crc kubenswrapper[4945]: I0109 00:01:39.998165 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-catalog-content\") pod \"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1\" (UID: \"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1\") " Jan 09 00:01:39 crc kubenswrapper[4945]: I0109 00:01:39.998249 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-utilities\") pod \"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1\" (UID: \"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1\") " Jan 09 00:01:40 crc kubenswrapper[4945]: I0109 00:01:39.999538 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-utilities" (OuterVolumeSpecName: "utilities") pod "409c2ce4-7cb1-4e78-88ba-d6968f0c92a1" (UID: "409c2ce4-7cb1-4e78-88ba-d6968f0c92a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:01:40 crc kubenswrapper[4945]: I0109 00:01:40.013220 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-kube-api-access-4sz5g" (OuterVolumeSpecName: "kube-api-access-4sz5g") pod "409c2ce4-7cb1-4e78-88ba-d6968f0c92a1" (UID: "409c2ce4-7cb1-4e78-88ba-d6968f0c92a1"). InnerVolumeSpecName "kube-api-access-4sz5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:01:40 crc kubenswrapper[4945]: I0109 00:01:40.026312 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "409c2ce4-7cb1-4e78-88ba-d6968f0c92a1" (UID: "409c2ce4-7cb1-4e78-88ba-d6968f0c92a1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:01:40 crc kubenswrapper[4945]: I0109 00:01:40.100143 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 00:01:40 crc kubenswrapper[4945]: I0109 00:01:40.100181 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sz5g\" (UniqueName: \"kubernetes.io/projected/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-kube-api-access-4sz5g\") on node \"crc\" DevicePath \"\"" Jan 09 00:01:40 crc kubenswrapper[4945]: I0109 00:01:40.100191 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 00:01:40 crc kubenswrapper[4945]: I0109 00:01:40.515403 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g94jw" event={"ID":"409c2ce4-7cb1-4e78-88ba-d6968f0c92a1","Type":"ContainerDied","Data":"69551dd529731a58c220cd1667c79af129015bf3a6944f7bece8a79bf5384984"} Jan 09 00:01:40 crc kubenswrapper[4945]: I0109 00:01:40.515673 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g94jw" Jan 09 00:01:40 crc kubenswrapper[4945]: I0109 00:01:40.515794 4945 scope.go:117] "RemoveContainer" containerID="e89cbfdeb45178a3ece3b2370125984a4015e88afbd50b925dab9632794a329a" Jan 09 00:01:40 crc kubenswrapper[4945]: I0109 00:01:40.543765 4945 scope.go:117] "RemoveContainer" containerID="a1e7f179d87b53ee4ce8fb28593a6d90988d6f8d64d86964067dacf9da9b6d78" Jan 09 00:01:40 crc kubenswrapper[4945]: I0109 00:01:40.544139 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g94jw"] Jan 09 00:01:40 crc kubenswrapper[4945]: I0109 00:01:40.549949 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g94jw"] Jan 09 00:01:40 crc kubenswrapper[4945]: I0109 00:01:40.580384 4945 scope.go:117] "RemoveContainer" containerID="a558dd96ef29890e91a02a1b8a88a2d747f9df459c9bf8f34d9e9473b02b9315" Jan 09 00:01:42 crc kubenswrapper[4945]: I0109 00:01:42.021605 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="409c2ce4-7cb1-4e78-88ba-d6968f0c92a1" path="/var/lib/kubelet/pods/409c2ce4-7cb1-4e78-88ba-d6968f0c92a1/volumes" Jan 09 00:03:13 crc kubenswrapper[4945]: I0109 00:03:13.577974 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:03:13 crc kubenswrapper[4945]: I0109 00:03:13.578617 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:03:43 crc kubenswrapper[4945]: I0109 00:03:43.578388 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 00:03:43 crc kubenswrapper[4945]: I0109 00:03:43.579066 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 00:04:13 crc kubenswrapper[4945]: I0109 00:04:13.578623 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 00:04:13 crc kubenswrapper[4945]: I0109 00:04:13.579324 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 00:04:13 crc kubenswrapper[4945]: I0109 00:04:13.579414 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95"
Jan 09 00:04:13 crc kubenswrapper[4945]: I0109 00:04:13.580179 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 00:04:13 crc kubenswrapper[4945]: I0109 00:04:13.580243 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" gracePeriod=600
Jan 09 00:04:13 crc kubenswrapper[4945]: E0109 00:04:13.717266 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:04:14 crc kubenswrapper[4945]: I0109 00:04:14.609549 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" exitCode=0
Jan 09 00:04:14 crc kubenswrapper[4945]: I0109 00:04:14.609613 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024"}
Jan 09 00:04:14 crc kubenswrapper[4945]: I0109 00:04:14.610110 4945 scope.go:117] "RemoveContainer" containerID="1eb0488c81865add13522725ec7e9dfbdde53b657c212e13eb652906573fce3b"
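
[annotation] From here the pod is in CrashLoopBackOff: every retry below is answered with `back-off 5m0s restarting failed container`, which is the ceiling of the kubelet's restart backoff. The schedule sketched here uses the documented defaults (an initial 10s delay that doubles per restart, capped at five minutes, and reset only after a container runs cleanly for ten minutes); those constants are assumptions about this cluster, not read from its config:

```go
// backoff.go - illustrates why the log settles on "back-off 5m0s".
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: 10s initial delay, doubling, 5m cap.
	const (
		initialDelay = 10 * time.Second
		maxDelay     = 5 * time.Minute
	)
	delay := initialDelay
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("restart %d: back-off %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, then 5m0s from the sixth
	// restart onward, matching the repeated pod_workers.go errors here.
}
```
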
Jan 09 00:04:14 crc kubenswrapper[4945]: I0109 00:04:14.612120 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024"
Jan 09 00:04:14 crc kubenswrapper[4945]: E0109 00:04:14.613296 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:04:29 crc kubenswrapper[4945]: I0109 00:04:29.000167 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024"
Jan 09 00:04:29 crc kubenswrapper[4945]: E0109 00:04:29.001014 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:04:44 crc kubenswrapper[4945]: I0109 00:04:44.001115 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024"
Jan 09 00:04:44 crc kubenswrapper[4945]: E0109 00:04:44.001922 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:04:53 crc kubenswrapper[4945]: I0109 00:04:53.692490 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9dt2b"]
Jan 09 00:04:53 crc kubenswrapper[4945]: E0109 00:04:53.693796 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="409c2ce4-7cb1-4e78-88ba-d6968f0c92a1" containerName="registry-server"
Jan 09 00:04:53 crc kubenswrapper[4945]: I0109 00:04:53.693810 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="409c2ce4-7cb1-4e78-88ba-d6968f0c92a1" containerName="registry-server"
Jan 09 00:04:53 crc kubenswrapper[4945]: E0109 00:04:53.693822 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="409c2ce4-7cb1-4e78-88ba-d6968f0c92a1" containerName="extract-utilities"
Jan 09 00:04:53 crc kubenswrapper[4945]: I0109 00:04:53.693881 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="409c2ce4-7cb1-4e78-88ba-d6968f0c92a1" containerName="extract-utilities"
Jan 09 00:04:53 crc kubenswrapper[4945]: E0109 00:04:53.693900 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="409c2ce4-7cb1-4e78-88ba-d6968f0c92a1" containerName="extract-content"
Jan 09 00:04:53 crc kubenswrapper[4945]: I0109 00:04:53.693925 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="409c2ce4-7cb1-4e78-88ba-d6968f0c92a1" containerName="extract-content"
Jan 09 00:04:53 crc kubenswrapper[4945]: I0109 00:04:53.694263 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="409c2ce4-7cb1-4e78-88ba-d6968f0c92a1" containerName="registry-server"
Jan 09 00:04:53 crc
kubenswrapper[4945]: I0109 00:04:53.697692 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9dt2b" Jan 09 00:04:53 crc kubenswrapper[4945]: I0109 00:04:53.699673 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9dt2b"] Jan 09 00:04:53 crc kubenswrapper[4945]: I0109 00:04:53.863086 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ae44fda-e446-4248-b308-f2028b8b7e89-utilities\") pod \"redhat-operators-9dt2b\" (UID: \"6ae44fda-e446-4248-b308-f2028b8b7e89\") " pod="openshift-marketplace/redhat-operators-9dt2b" Jan 09 00:04:53 crc kubenswrapper[4945]: I0109 00:04:53.863146 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ae44fda-e446-4248-b308-f2028b8b7e89-catalog-content\") pod \"redhat-operators-9dt2b\" (UID: \"6ae44fda-e446-4248-b308-f2028b8b7e89\") " pod="openshift-marketplace/redhat-operators-9dt2b" Jan 09 00:04:53 crc kubenswrapper[4945]: I0109 00:04:53.863262 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2srb\" (UniqueName: \"kubernetes.io/projected/6ae44fda-e446-4248-b308-f2028b8b7e89-kube-api-access-t2srb\") pod \"redhat-operators-9dt2b\" (UID: \"6ae44fda-e446-4248-b308-f2028b8b7e89\") " pod="openshift-marketplace/redhat-operators-9dt2b" Jan 09 00:04:53 crc kubenswrapper[4945]: I0109 00:04:53.964201 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2srb\" (UniqueName: \"kubernetes.io/projected/6ae44fda-e446-4248-b308-f2028b8b7e89-kube-api-access-t2srb\") pod \"redhat-operators-9dt2b\" (UID: \"6ae44fda-e446-4248-b308-f2028b8b7e89\") " pod="openshift-marketplace/redhat-operators-9dt2b" Jan 09 00:04:53 crc kubenswrapper[4945]: I0109 00:04:53.964282 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ae44fda-e446-4248-b308-f2028b8b7e89-utilities\") pod \"redhat-operators-9dt2b\" (UID: \"6ae44fda-e446-4248-b308-f2028b8b7e89\") " pod="openshift-marketplace/redhat-operators-9dt2b" Jan 09 00:04:53 crc kubenswrapper[4945]: I0109 00:04:53.964304 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ae44fda-e446-4248-b308-f2028b8b7e89-catalog-content\") pod \"redhat-operators-9dt2b\" (UID: \"6ae44fda-e446-4248-b308-f2028b8b7e89\") " pod="openshift-marketplace/redhat-operators-9dt2b" Jan 09 00:04:53 crc kubenswrapper[4945]: I0109 00:04:53.964830 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ae44fda-e446-4248-b308-f2028b8b7e89-catalog-content\") pod \"redhat-operators-9dt2b\" (UID: \"6ae44fda-e446-4248-b308-f2028b8b7e89\") " pod="openshift-marketplace/redhat-operators-9dt2b" Jan 09 00:04:53 crc kubenswrapper[4945]: I0109 00:04:53.964913 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ae44fda-e446-4248-b308-f2028b8b7e89-utilities\") pod \"redhat-operators-9dt2b\" (UID: \"6ae44fda-e446-4248-b308-f2028b8b7e89\") " pod="openshift-marketplace/redhat-operators-9dt2b" Jan 09 00:04:53 crc kubenswrapper[4945]: I0109 
00:04:53.988212 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2srb\" (UniqueName: \"kubernetes.io/projected/6ae44fda-e446-4248-b308-f2028b8b7e89-kube-api-access-t2srb\") pod \"redhat-operators-9dt2b\" (UID: \"6ae44fda-e446-4248-b308-f2028b8b7e89\") " pod="openshift-marketplace/redhat-operators-9dt2b" Jan 09 00:04:54 crc kubenswrapper[4945]: I0109 00:04:54.024828 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9dt2b" Jan 09 00:04:54 crc kubenswrapper[4945]: I0109 00:04:54.467059 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9dt2b"] Jan 09 00:04:54 crc kubenswrapper[4945]: I0109 00:04:54.881841 4945 generic.go:334] "Generic (PLEG): container finished" podID="6ae44fda-e446-4248-b308-f2028b8b7e89" containerID="d29119a4675d6af639d9c26284bea31fb2e3e520d83df7161a294d551df9ee8d" exitCode=0 Jan 09 00:04:54 crc kubenswrapper[4945]: I0109 00:04:54.881955 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9dt2b" event={"ID":"6ae44fda-e446-4248-b308-f2028b8b7e89","Type":"ContainerDied","Data":"d29119a4675d6af639d9c26284bea31fb2e3e520d83df7161a294d551df9ee8d"} Jan 09 00:04:54 crc kubenswrapper[4945]: I0109 00:04:54.882131 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9dt2b" event={"ID":"6ae44fda-e446-4248-b308-f2028b8b7e89","Type":"ContainerStarted","Data":"20aa2d47413111877fa3a6d3979584302682b38710bc320eecbe5d491a490781"} Jan 09 00:04:54 crc kubenswrapper[4945]: I0109 00:04:54.883357 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 00:04:55 crc kubenswrapper[4945]: I0109 00:04:55.845464 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zmzfx"] Jan 09 00:04:55 crc kubenswrapper[4945]: I0109 00:04:55.847291 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zmzfx" Jan 09 00:04:55 crc kubenswrapper[4945]: I0109 00:04:55.864095 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zmzfx"] Jan 09 00:04:55 crc kubenswrapper[4945]: I0109 00:04:55.892518 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ec6185-b271-4184-9814-568305d3fef0-utilities\") pod \"certified-operators-zmzfx\" (UID: \"b5ec6185-b271-4184-9814-568305d3fef0\") " pod="openshift-marketplace/certified-operators-zmzfx" Jan 09 00:04:55 crc kubenswrapper[4945]: I0109 00:04:55.893114 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5hjh\" (UniqueName: \"kubernetes.io/projected/b5ec6185-b271-4184-9814-568305d3fef0-kube-api-access-z5hjh\") pod \"certified-operators-zmzfx\" (UID: \"b5ec6185-b271-4184-9814-568305d3fef0\") " pod="openshift-marketplace/certified-operators-zmzfx" Jan 09 00:04:55 crc kubenswrapper[4945]: I0109 00:04:55.893143 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ec6185-b271-4184-9814-568305d3fef0-catalog-content\") pod \"certified-operators-zmzfx\" (UID: \"b5ec6185-b271-4184-9814-568305d3fef0\") " pod="openshift-marketplace/certified-operators-zmzfx" Jan 09 00:04:55 crc kubenswrapper[4945]: I0109 00:04:55.994098 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5hjh\" (UniqueName: \"kubernetes.io/projected/b5ec6185-b271-4184-9814-568305d3fef0-kube-api-access-z5hjh\") pod \"certified-operators-zmzfx\" (UID: \"b5ec6185-b271-4184-9814-568305d3fef0\") " pod="openshift-marketplace/certified-operators-zmzfx" Jan 09 00:04:55 crc kubenswrapper[4945]: I0109 00:04:55.994159 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ec6185-b271-4184-9814-568305d3fef0-catalog-content\") pod \"certified-operators-zmzfx\" (UID: \"b5ec6185-b271-4184-9814-568305d3fef0\") " pod="openshift-marketplace/certified-operators-zmzfx" Jan 09 00:04:55 crc kubenswrapper[4945]: I0109 00:04:55.994199 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ec6185-b271-4184-9814-568305d3fef0-utilities\") pod \"certified-operators-zmzfx\" (UID: \"b5ec6185-b271-4184-9814-568305d3fef0\") " pod="openshift-marketplace/certified-operators-zmzfx" Jan 09 00:04:55 crc kubenswrapper[4945]: I0109 00:04:55.994654 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ec6185-b271-4184-9814-568305d3fef0-catalog-content\") pod \"certified-operators-zmzfx\" (UID: \"b5ec6185-b271-4184-9814-568305d3fef0\") " pod="openshift-marketplace/certified-operators-zmzfx" Jan 09 00:04:55 crc kubenswrapper[4945]: I0109 00:04:55.994681 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ec6185-b271-4184-9814-568305d3fef0-utilities\") pod \"certified-operators-zmzfx\" (UID: \"b5ec6185-b271-4184-9814-568305d3fef0\") " pod="openshift-marketplace/certified-operators-zmzfx" Jan 09 00:04:56 crc kubenswrapper[4945]: I0109 00:04:56.017418 4945 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-z5hjh\" (UniqueName: \"kubernetes.io/projected/b5ec6185-b271-4184-9814-568305d3fef0-kube-api-access-z5hjh\") pod \"certified-operators-zmzfx\" (UID: \"b5ec6185-b271-4184-9814-568305d3fef0\") " pod="openshift-marketplace/certified-operators-zmzfx" Jan 09 00:04:56 crc kubenswrapper[4945]: I0109 00:04:56.172889 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zmzfx" Jan 09 00:04:56 crc kubenswrapper[4945]: I0109 00:04:56.645145 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zmzfx"] Jan 09 00:04:56 crc kubenswrapper[4945]: W0109 00:04:56.665116 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5ec6185_b271_4184_9814_568305d3fef0.slice/crio-ba16cf8c6db23c9b2d4792333091024d298578849249c44c0253fc6f69db0e67 WatchSource:0}: Error finding container ba16cf8c6db23c9b2d4792333091024d298578849249c44c0253fc6f69db0e67: Status 404 returned error can't find the container with id ba16cf8c6db23c9b2d4792333091024d298578849249c44c0253fc6f69db0e67 Jan 09 00:04:56 crc kubenswrapper[4945]: I0109 00:04:56.917851 4945 generic.go:334] "Generic (PLEG): container finished" podID="6ae44fda-e446-4248-b308-f2028b8b7e89" containerID="80973aa1da2454dd9c8368641b4d55f61d24523aca3b623b52e0b4eddc70a471" exitCode=0 Jan 09 00:04:56 crc kubenswrapper[4945]: I0109 00:04:56.917907 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9dt2b" event={"ID":"6ae44fda-e446-4248-b308-f2028b8b7e89","Type":"ContainerDied","Data":"80973aa1da2454dd9c8368641b4d55f61d24523aca3b623b52e0b4eddc70a471"} Jan 09 00:04:56 crc kubenswrapper[4945]: I0109 00:04:56.919295 4945 generic.go:334] "Generic (PLEG): container finished" podID="b5ec6185-b271-4184-9814-568305d3fef0" containerID="5d17e9496c18971f2bd8457d3c5da5b6fc0939397488de8478c4061ce134c85f" exitCode=0 Jan 09 00:04:56 crc kubenswrapper[4945]: I0109 00:04:56.919325 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmzfx" event={"ID":"b5ec6185-b271-4184-9814-568305d3fef0","Type":"ContainerDied","Data":"5d17e9496c18971f2bd8457d3c5da5b6fc0939397488de8478c4061ce134c85f"} Jan 09 00:04:56 crc kubenswrapper[4945]: I0109 00:04:56.919356 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmzfx" event={"ID":"b5ec6185-b271-4184-9814-568305d3fef0","Type":"ContainerStarted","Data":"ba16cf8c6db23c9b2d4792333091024d298578849249c44c0253fc6f69db0e67"} Jan 09 00:04:57 crc kubenswrapper[4945]: I0109 00:04:57.000653 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:04:57 crc kubenswrapper[4945]: E0109 00:04:57.001064 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:04:58 crc kubenswrapper[4945]: I0109 00:04:58.940874 4945 generic.go:334] "Generic (PLEG): container finished" podID="b5ec6185-b271-4184-9814-568305d3fef0" 
containerID="31a3f7aea30f6be08a2393e92b8f90cc48b11b3f83ffc09b0ed9bd8d30e3468b" exitCode=0 Jan 09 00:04:58 crc kubenswrapper[4945]: I0109 00:04:58.940933 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmzfx" event={"ID":"b5ec6185-b271-4184-9814-568305d3fef0","Type":"ContainerDied","Data":"31a3f7aea30f6be08a2393e92b8f90cc48b11b3f83ffc09b0ed9bd8d30e3468b"} Jan 09 00:04:58 crc kubenswrapper[4945]: I0109 00:04:58.944731 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9dt2b" event={"ID":"6ae44fda-e446-4248-b308-f2028b8b7e89","Type":"ContainerStarted","Data":"0104d582e7e05be06930a6fe3e59942b20d94b5ed9404dcd828942d9a50bd61a"} Jan 09 00:04:58 crc kubenswrapper[4945]: I0109 00:04:58.985854 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9dt2b" podStartSLOduration=2.983088862 podStartE2EDuration="5.985836038s" podCreationTimestamp="2026-01-09 00:04:53 +0000 UTC" firstStartedPulling="2026-01-09 00:04:54.883136319 +0000 UTC m=+2965.194295265" lastFinishedPulling="2026-01-09 00:04:57.885883495 +0000 UTC m=+2968.197042441" observedRunningTime="2026-01-09 00:04:58.985800017 +0000 UTC m=+2969.296958963" watchObservedRunningTime="2026-01-09 00:04:58.985836038 +0000 UTC m=+2969.296994984" Jan 09 00:04:59 crc kubenswrapper[4945]: I0109 00:04:59.953486 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmzfx" event={"ID":"b5ec6185-b271-4184-9814-568305d3fef0","Type":"ContainerStarted","Data":"999dce5624a491b927da294dd29b822f699f37bb8752e542f354e35f5818069c"} Jan 09 00:04:59 crc kubenswrapper[4945]: I0109 00:04:59.971470 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zmzfx" podStartSLOduration=2.438213172 podStartE2EDuration="4.971450718s" podCreationTimestamp="2026-01-09 00:04:55 +0000 UTC" firstStartedPulling="2026-01-09 00:04:56.920740347 +0000 UTC m=+2967.231899293" lastFinishedPulling="2026-01-09 00:04:59.453977893 +0000 UTC m=+2969.765136839" observedRunningTime="2026-01-09 00:04:59.968598108 +0000 UTC m=+2970.279757054" watchObservedRunningTime="2026-01-09 00:04:59.971450718 +0000 UTC m=+2970.282609664" Jan 09 00:05:04 crc kubenswrapper[4945]: I0109 00:05:04.025644 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9dt2b" Jan 09 00:05:04 crc kubenswrapper[4945]: I0109 00:05:04.025922 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9dt2b" Jan 09 00:05:04 crc kubenswrapper[4945]: I0109 00:05:04.068208 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9dt2b" Jan 09 00:05:05 crc kubenswrapper[4945]: I0109 00:05:05.035943 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9dt2b" Jan 09 00:05:05 crc kubenswrapper[4945]: I0109 00:05:05.076732 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9dt2b"] Jan 09 00:05:06 crc kubenswrapper[4945]: I0109 00:05:06.173571 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zmzfx" Jan 09 00:05:06 crc kubenswrapper[4945]: I0109 00:05:06.173634 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-zmzfx" Jan 09 00:05:06 crc kubenswrapper[4945]: I0109 00:05:06.216160 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zmzfx" Jan 09 00:05:07 crc kubenswrapper[4945]: I0109 00:05:07.003828 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9dt2b" podUID="6ae44fda-e446-4248-b308-f2028b8b7e89" containerName="registry-server" containerID="cri-o://0104d582e7e05be06930a6fe3e59942b20d94b5ed9404dcd828942d9a50bd61a" gracePeriod=2 Jan 09 00:05:07 crc kubenswrapper[4945]: I0109 00:05:07.047412 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zmzfx" Jan 09 00:05:08 crc kubenswrapper[4945]: I0109 00:05:08.840122 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zmzfx"] Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.020885 4945 generic.go:334] "Generic (PLEG): container finished" podID="6ae44fda-e446-4248-b308-f2028b8b7e89" containerID="0104d582e7e05be06930a6fe3e59942b20d94b5ed9404dcd828942d9a50bd61a" exitCode=0 Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.021013 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9dt2b" event={"ID":"6ae44fda-e446-4248-b308-f2028b8b7e89","Type":"ContainerDied","Data":"0104d582e7e05be06930a6fe3e59942b20d94b5ed9404dcd828942d9a50bd61a"} Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.021129 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zmzfx" podUID="b5ec6185-b271-4184-9814-568305d3fef0" containerName="registry-server" containerID="cri-o://999dce5624a491b927da294dd29b822f699f37bb8752e542f354e35f5818069c" gracePeriod=2 Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.303685 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9dt2b" Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.446552 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zmzfx" Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.497865 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ae44fda-e446-4248-b308-f2028b8b7e89-utilities\") pod \"6ae44fda-e446-4248-b308-f2028b8b7e89\" (UID: \"6ae44fda-e446-4248-b308-f2028b8b7e89\") " Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.497980 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2srb\" (UniqueName: \"kubernetes.io/projected/6ae44fda-e446-4248-b308-f2028b8b7e89-kube-api-access-t2srb\") pod \"6ae44fda-e446-4248-b308-f2028b8b7e89\" (UID: \"6ae44fda-e446-4248-b308-f2028b8b7e89\") " Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.498265 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ec6185-b271-4184-9814-568305d3fef0-catalog-content\") pod \"b5ec6185-b271-4184-9814-568305d3fef0\" (UID: \"b5ec6185-b271-4184-9814-568305d3fef0\") " Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.498351 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ae44fda-e446-4248-b308-f2028b8b7e89-catalog-content\") pod \"6ae44fda-e446-4248-b308-f2028b8b7e89\" (UID: \"6ae44fda-e446-4248-b308-f2028b8b7e89\") " Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.498431 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5hjh\" (UniqueName: \"kubernetes.io/projected/b5ec6185-b271-4184-9814-568305d3fef0-kube-api-access-z5hjh\") pod \"b5ec6185-b271-4184-9814-568305d3fef0\" (UID: \"b5ec6185-b271-4184-9814-568305d3fef0\") " Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.498875 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ae44fda-e446-4248-b308-f2028b8b7e89-utilities" (OuterVolumeSpecName: "utilities") pod "6ae44fda-e446-4248-b308-f2028b8b7e89" (UID: "6ae44fda-e446-4248-b308-f2028b8b7e89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.504503 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5ec6185-b271-4184-9814-568305d3fef0-kube-api-access-z5hjh" (OuterVolumeSpecName: "kube-api-access-z5hjh") pod "b5ec6185-b271-4184-9814-568305d3fef0" (UID: "b5ec6185-b271-4184-9814-568305d3fef0"). InnerVolumeSpecName "kube-api-access-z5hjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.505504 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ae44fda-e446-4248-b308-f2028b8b7e89-kube-api-access-t2srb" (OuterVolumeSpecName: "kube-api-access-t2srb") pod "6ae44fda-e446-4248-b308-f2028b8b7e89" (UID: "6ae44fda-e446-4248-b308-f2028b8b7e89"). InnerVolumeSpecName "kube-api-access-t2srb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.551253 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5ec6185-b271-4184-9814-568305d3fef0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5ec6185-b271-4184-9814-568305d3fef0" (UID: "b5ec6185-b271-4184-9814-568305d3fef0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.599365 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ec6185-b271-4184-9814-568305d3fef0-utilities\") pod \"b5ec6185-b271-4184-9814-568305d3fef0\" (UID: \"b5ec6185-b271-4184-9814-568305d3fef0\") " Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.599698 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5hjh\" (UniqueName: \"kubernetes.io/projected/b5ec6185-b271-4184-9814-568305d3fef0-kube-api-access-z5hjh\") on node \"crc\" DevicePath \"\"" Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.599721 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ae44fda-e446-4248-b308-f2028b8b7e89-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.599740 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2srb\" (UniqueName: \"kubernetes.io/projected/6ae44fda-e446-4248-b308-f2028b8b7e89-kube-api-access-t2srb\") on node \"crc\" DevicePath \"\"" Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.599783 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ec6185-b271-4184-9814-568305d3fef0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.600377 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5ec6185-b271-4184-9814-568305d3fef0-utilities" (OuterVolumeSpecName: "utilities") pod "b5ec6185-b271-4184-9814-568305d3fef0" (UID: "b5ec6185-b271-4184-9814-568305d3fef0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.639311 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ae44fda-e446-4248-b308-f2028b8b7e89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ae44fda-e446-4248-b308-f2028b8b7e89" (UID: "6ae44fda-e446-4248-b308-f2028b8b7e89"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.701080 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ae44fda-e446-4248-b308-f2028b8b7e89-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 00:05:09 crc kubenswrapper[4945]: I0109 00:05:09.701112 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ec6185-b271-4184-9814-568305d3fef0-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.032384 4945 generic.go:334] "Generic (PLEG): container finished" podID="b5ec6185-b271-4184-9814-568305d3fef0" containerID="999dce5624a491b927da294dd29b822f699f37bb8752e542f354e35f5818069c" exitCode=0 Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.032476 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmzfx" event={"ID":"b5ec6185-b271-4184-9814-568305d3fef0","Type":"ContainerDied","Data":"999dce5624a491b927da294dd29b822f699f37bb8752e542f354e35f5818069c"} Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.032512 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmzfx" event={"ID":"b5ec6185-b271-4184-9814-568305d3fef0","Type":"ContainerDied","Data":"ba16cf8c6db23c9b2d4792333091024d298578849249c44c0253fc6f69db0e67"} Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.032534 4945 scope.go:117] "RemoveContainer" containerID="999dce5624a491b927da294dd29b822f699f37bb8752e542f354e35f5818069c" Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.032528 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zmzfx" Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.035193 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9dt2b" event={"ID":"6ae44fda-e446-4248-b308-f2028b8b7e89","Type":"ContainerDied","Data":"20aa2d47413111877fa3a6d3979584302682b38710bc320eecbe5d491a490781"} Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.035283 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9dt2b" Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.086714 4945 scope.go:117] "RemoveContainer" containerID="31a3f7aea30f6be08a2393e92b8f90cc48b11b3f83ffc09b0ed9bd8d30e3468b" Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.087394 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zmzfx"] Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.094126 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zmzfx"] Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.100413 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9dt2b"] Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.105741 4945 scope.go:117] "RemoveContainer" containerID="5d17e9496c18971f2bd8457d3c5da5b6fc0939397488de8478c4061ce134c85f" Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.106020 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9dt2b"] Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.120700 4945 scope.go:117] "RemoveContainer" containerID="999dce5624a491b927da294dd29b822f699f37bb8752e542f354e35f5818069c" Jan 09 00:05:10 crc kubenswrapper[4945]: E0109 00:05:10.121278 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"999dce5624a491b927da294dd29b822f699f37bb8752e542f354e35f5818069c\": container with ID starting with 999dce5624a491b927da294dd29b822f699f37bb8752e542f354e35f5818069c not found: ID does not exist" containerID="999dce5624a491b927da294dd29b822f699f37bb8752e542f354e35f5818069c" Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.121346 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"999dce5624a491b927da294dd29b822f699f37bb8752e542f354e35f5818069c"} err="failed to get container status \"999dce5624a491b927da294dd29b822f699f37bb8752e542f354e35f5818069c\": rpc error: code = NotFound desc = could not find container \"999dce5624a491b927da294dd29b822f699f37bb8752e542f354e35f5818069c\": container with ID starting with 999dce5624a491b927da294dd29b822f699f37bb8752e542f354e35f5818069c not found: ID does not exist" Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.121389 4945 scope.go:117] "RemoveContainer" containerID="31a3f7aea30f6be08a2393e92b8f90cc48b11b3f83ffc09b0ed9bd8d30e3468b" Jan 09 00:05:10 crc kubenswrapper[4945]: E0109 00:05:10.121934 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31a3f7aea30f6be08a2393e92b8f90cc48b11b3f83ffc09b0ed9bd8d30e3468b\": container with ID starting with 31a3f7aea30f6be08a2393e92b8f90cc48b11b3f83ffc09b0ed9bd8d30e3468b not found: ID does not exist" containerID="31a3f7aea30f6be08a2393e92b8f90cc48b11b3f83ffc09b0ed9bd8d30e3468b" Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.121975 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31a3f7aea30f6be08a2393e92b8f90cc48b11b3f83ffc09b0ed9bd8d30e3468b"} err="failed to get container status \"31a3f7aea30f6be08a2393e92b8f90cc48b11b3f83ffc09b0ed9bd8d30e3468b\": rpc error: code = NotFound desc = could not find container \"31a3f7aea30f6be08a2393e92b8f90cc48b11b3f83ffc09b0ed9bd8d30e3468b\": container with ID starting with 
31a3f7aea30f6be08a2393e92b8f90cc48b11b3f83ffc09b0ed9bd8d30e3468b not found: ID does not exist" Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.122018 4945 scope.go:117] "RemoveContainer" containerID="5d17e9496c18971f2bd8457d3c5da5b6fc0939397488de8478c4061ce134c85f" Jan 09 00:05:10 crc kubenswrapper[4945]: E0109 00:05:10.122277 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d17e9496c18971f2bd8457d3c5da5b6fc0939397488de8478c4061ce134c85f\": container with ID starting with 5d17e9496c18971f2bd8457d3c5da5b6fc0939397488de8478c4061ce134c85f not found: ID does not exist" containerID="5d17e9496c18971f2bd8457d3c5da5b6fc0939397488de8478c4061ce134c85f" Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.122307 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d17e9496c18971f2bd8457d3c5da5b6fc0939397488de8478c4061ce134c85f"} err="failed to get container status \"5d17e9496c18971f2bd8457d3c5da5b6fc0939397488de8478c4061ce134c85f\": rpc error: code = NotFound desc = could not find container \"5d17e9496c18971f2bd8457d3c5da5b6fc0939397488de8478c4061ce134c85f\": container with ID starting with 5d17e9496c18971f2bd8457d3c5da5b6fc0939397488de8478c4061ce134c85f not found: ID does not exist" Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.122322 4945 scope.go:117] "RemoveContainer" containerID="0104d582e7e05be06930a6fe3e59942b20d94b5ed9404dcd828942d9a50bd61a" Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.137722 4945 scope.go:117] "RemoveContainer" containerID="80973aa1da2454dd9c8368641b4d55f61d24523aca3b623b52e0b4eddc70a471" Jan 09 00:05:10 crc kubenswrapper[4945]: I0109 00:05:10.168791 4945 scope.go:117] "RemoveContainer" containerID="d29119a4675d6af639d9c26284bea31fb2e3e520d83df7161a294d551df9ee8d" Jan 09 00:05:12 crc kubenswrapper[4945]: I0109 00:05:12.000839 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:05:12 crc kubenswrapper[4945]: E0109 00:05:12.001109 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:05:12 crc kubenswrapper[4945]: I0109 00:05:12.011552 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ae44fda-e446-4248-b308-f2028b8b7e89" path="/var/lib/kubelet/pods/6ae44fda-e446-4248-b308-f2028b8b7e89/volumes" Jan 09 00:05:12 crc kubenswrapper[4945]: I0109 00:05:12.012263 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5ec6185-b271-4184-9814-568305d3fef0" path="/var/lib/kubelet/pods/b5ec6185-b271-4184-9814-568305d3fef0/volumes" Jan 09 00:05:25 crc kubenswrapper[4945]: I0109 00:05:25.000423 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:05:25 crc kubenswrapper[4945]: E0109 00:05:25.001742 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:05:36 crc kubenswrapper[4945]: I0109 00:05:35.999842 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:05:36 crc kubenswrapper[4945]: E0109 00:05:36.000554 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:05:49 crc kubenswrapper[4945]: I0109 00:05:49.000520 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:05:49 crc kubenswrapper[4945]: E0109 00:05:49.001356 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:06:04 crc kubenswrapper[4945]: I0109 00:06:04.000510 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:06:04 crc kubenswrapper[4945]: E0109 00:06:04.001531 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:06:15 crc kubenswrapper[4945]: I0109 00:06:15.001152 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:06:15 crc kubenswrapper[4945]: E0109 00:06:15.002102 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:06:29 crc kubenswrapper[4945]: I0109 00:06:29.000100 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:06:29 crc kubenswrapper[4945]: E0109 00:06:29.000907 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" 
podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:06:41 crc kubenswrapper[4945]: I0109 00:06:41.000405 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:06:41 crc kubenswrapper[4945]: E0109 00:06:41.001070 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:06:53 crc kubenswrapper[4945]: I0109 00:06:53.000373 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:06:53 crc kubenswrapper[4945]: E0109 00:06:53.001107 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:07:04 crc kubenswrapper[4945]: I0109 00:07:04.001956 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:07:04 crc kubenswrapper[4945]: E0109 00:07:04.002749 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:07:15 crc kubenswrapper[4945]: I0109 00:07:15.001137 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:07:15 crc kubenswrapper[4945]: E0109 00:07:15.002177 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:07:30 crc kubenswrapper[4945]: I0109 00:07:30.005422 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:07:30 crc kubenswrapper[4945]: E0109 00:07:30.006203 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:07:44 crc kubenswrapper[4945]: I0109 00:07:44.000623 4945 scope.go:117] "RemoveContainer" 
containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:07:44 crc kubenswrapper[4945]: E0109 00:07:44.001455 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:07:58 crc kubenswrapper[4945]: I0109 00:07:58.000130 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:07:58 crc kubenswrapper[4945]: E0109 00:07:58.000906 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:08:11 crc kubenswrapper[4945]: I0109 00:08:11.000221 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:08:11 crc kubenswrapper[4945]: E0109 00:08:11.001169 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:08:24 crc kubenswrapper[4945]: I0109 00:08:24.000584 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:08:24 crc kubenswrapper[4945]: E0109 00:08:24.001598 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:08:37 crc kubenswrapper[4945]: I0109 00:08:37.000252 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:08:37 crc kubenswrapper[4945]: E0109 00:08:37.001278 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:08:51 crc kubenswrapper[4945]: I0109 00:08:51.000276 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:08:51 crc kubenswrapper[4945]: E0109 00:08:51.001074 4945 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:09:04 crc kubenswrapper[4945]: I0109 00:09:04.000764 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:09:04 crc kubenswrapper[4945]: E0109 00:09:04.001438 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:09:16 crc kubenswrapper[4945]: I0109 00:09:16.001269 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:09:16 crc kubenswrapper[4945]: I0109 00:09:16.835692 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"bc7dbad432cc69c90fc36491c945fff809ce4ae3f09b13f067bede41382c6e03"} Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.205410 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tzfrf"] Jan 09 00:09:55 crc kubenswrapper[4945]: E0109 00:09:55.206297 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae44fda-e446-4248-b308-f2028b8b7e89" containerName="extract-content" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.206312 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae44fda-e446-4248-b308-f2028b8b7e89" containerName="extract-content" Jan 09 00:09:55 crc kubenswrapper[4945]: E0109 00:09:55.206329 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae44fda-e446-4248-b308-f2028b8b7e89" containerName="registry-server" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.206337 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae44fda-e446-4248-b308-f2028b8b7e89" containerName="registry-server" Jan 09 00:09:55 crc kubenswrapper[4945]: E0109 00:09:55.206356 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae44fda-e446-4248-b308-f2028b8b7e89" containerName="extract-utilities" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.206364 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae44fda-e446-4248-b308-f2028b8b7e89" containerName="extract-utilities" Jan 09 00:09:55 crc kubenswrapper[4945]: E0109 00:09:55.206370 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ec6185-b271-4184-9814-568305d3fef0" containerName="extract-content" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.206376 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ec6185-b271-4184-9814-568305d3fef0" containerName="extract-content" Jan 09 00:09:55 crc kubenswrapper[4945]: E0109 00:09:55.206385 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ec6185-b271-4184-9814-568305d3fef0" containerName="registry-server" Jan 09 
00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.206391 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ec6185-b271-4184-9814-568305d3fef0" containerName="registry-server" Jan 09 00:09:55 crc kubenswrapper[4945]: E0109 00:09:55.206405 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ec6185-b271-4184-9814-568305d3fef0" containerName="extract-utilities" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.206411 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ec6185-b271-4184-9814-568305d3fef0" containerName="extract-utilities" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.206558 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ae44fda-e446-4248-b308-f2028b8b7e89" containerName="registry-server" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.206573 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5ec6185-b271-4184-9814-568305d3fef0" containerName="registry-server" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.207797 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tzfrf" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.226811 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tzfrf"] Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.284073 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-catalog-content\") pod \"community-operators-tzfrf\" (UID: \"f1caf04d-dc9f-4207-a8d2-a47faa2620f5\") " pod="openshift-marketplace/community-operators-tzfrf" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.284219 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-utilities\") pod \"community-operators-tzfrf\" (UID: \"f1caf04d-dc9f-4207-a8d2-a47faa2620f5\") " pod="openshift-marketplace/community-operators-tzfrf" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.284306 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcg7v\" (UniqueName: \"kubernetes.io/projected/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-kube-api-access-dcg7v\") pod \"community-operators-tzfrf\" (UID: \"f1caf04d-dc9f-4207-a8d2-a47faa2620f5\") " pod="openshift-marketplace/community-operators-tzfrf" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.386312 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-catalog-content\") pod \"community-operators-tzfrf\" (UID: \"f1caf04d-dc9f-4207-a8d2-a47faa2620f5\") " pod="openshift-marketplace/community-operators-tzfrf" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.386704 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-utilities\") pod \"community-operators-tzfrf\" (UID: \"f1caf04d-dc9f-4207-a8d2-a47faa2620f5\") " pod="openshift-marketplace/community-operators-tzfrf" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.386825 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dcg7v\" (UniqueName: \"kubernetes.io/projected/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-kube-api-access-dcg7v\") pod \"community-operators-tzfrf\" (UID: \"f1caf04d-dc9f-4207-a8d2-a47faa2620f5\") " pod="openshift-marketplace/community-operators-tzfrf" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.386889 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-catalog-content\") pod \"community-operators-tzfrf\" (UID: \"f1caf04d-dc9f-4207-a8d2-a47faa2620f5\") " pod="openshift-marketplace/community-operators-tzfrf" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.387206 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-utilities\") pod \"community-operators-tzfrf\" (UID: \"f1caf04d-dc9f-4207-a8d2-a47faa2620f5\") " pod="openshift-marketplace/community-operators-tzfrf" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.417094 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcg7v\" (UniqueName: \"kubernetes.io/projected/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-kube-api-access-dcg7v\") pod \"community-operators-tzfrf\" (UID: \"f1caf04d-dc9f-4207-a8d2-a47faa2620f5\") " pod="openshift-marketplace/community-operators-tzfrf" Jan 09 00:09:55 crc kubenswrapper[4945]: I0109 00:09:55.548208 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tzfrf" Jan 09 00:09:56 crc kubenswrapper[4945]: I0109 00:09:56.022604 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tzfrf"] Jan 09 00:09:56 crc kubenswrapper[4945]: I0109 00:09:56.409135 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzfrf" event={"ID":"f1caf04d-dc9f-4207-a8d2-a47faa2620f5","Type":"ContainerStarted","Data":"b4380f75c9fcfc0a9a848bdcc847b06bccf61f8fac95293032d242d5227396c3"} Jan 09 00:09:57 crc kubenswrapper[4945]: I0109 00:09:57.417159 4945 generic.go:334] "Generic (PLEG): container finished" podID="f1caf04d-dc9f-4207-a8d2-a47faa2620f5" containerID="ac17086a999cb4af2db9baf60869dc360d4b761bae9b4b1ca4a5a6e623c65268" exitCode=0 Jan 09 00:09:57 crc kubenswrapper[4945]: I0109 00:09:57.417304 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzfrf" event={"ID":"f1caf04d-dc9f-4207-a8d2-a47faa2620f5","Type":"ContainerDied","Data":"ac17086a999cb4af2db9baf60869dc360d4b761bae9b4b1ca4a5a6e623c65268"} Jan 09 00:09:57 crc kubenswrapper[4945]: I0109 00:09:57.419569 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 00:10:02 crc kubenswrapper[4945]: I0109 00:10:02.455289 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzfrf" event={"ID":"f1caf04d-dc9f-4207-a8d2-a47faa2620f5","Type":"ContainerStarted","Data":"012c8f653fb4b8fcf419427629140512850aa94825a0cbd7066df7e3fdb552ea"} Jan 09 00:10:03 crc kubenswrapper[4945]: I0109 00:10:03.463421 4945 generic.go:334] "Generic (PLEG): container finished" podID="f1caf04d-dc9f-4207-a8d2-a47faa2620f5" containerID="012c8f653fb4b8fcf419427629140512850aa94825a0cbd7066df7e3fdb552ea" exitCode=0 Jan 09 00:10:03 crc kubenswrapper[4945]: I0109 00:10:03.463465 4945 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-tzfrf" event={"ID":"f1caf04d-dc9f-4207-a8d2-a47faa2620f5","Type":"ContainerDied","Data":"012c8f653fb4b8fcf419427629140512850aa94825a0cbd7066df7e3fdb552ea"} Jan 09 00:10:04 crc kubenswrapper[4945]: I0109 00:10:04.472160 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzfrf" event={"ID":"f1caf04d-dc9f-4207-a8d2-a47faa2620f5","Type":"ContainerStarted","Data":"b334092ef1c769fdfda0923d4214e75aa87efb9a18e85fa2b915be7bab9536ba"} Jan 09 00:10:04 crc kubenswrapper[4945]: I0109 00:10:04.514103 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tzfrf" podStartSLOduration=2.84626407 podStartE2EDuration="9.514078847s" podCreationTimestamp="2026-01-09 00:09:55 +0000 UTC" firstStartedPulling="2026-01-09 00:09:57.419228162 +0000 UTC m=+3267.730387098" lastFinishedPulling="2026-01-09 00:10:04.087042919 +0000 UTC m=+3274.398201875" observedRunningTime="2026-01-09 00:10:04.497378627 +0000 UTC m=+3274.808537593" watchObservedRunningTime="2026-01-09 00:10:04.514078847 +0000 UTC m=+3274.825237793" Jan 09 00:10:05 crc kubenswrapper[4945]: I0109 00:10:05.549978 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tzfrf" Jan 09 00:10:05 crc kubenswrapper[4945]: I0109 00:10:05.550048 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tzfrf" Jan 09 00:10:06 crc kubenswrapper[4945]: I0109 00:10:06.598004 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tzfrf" podUID="f1caf04d-dc9f-4207-a8d2-a47faa2620f5" containerName="registry-server" probeResult="failure" output=< Jan 09 00:10:06 crc kubenswrapper[4945]: timeout: failed to connect service ":50051" within 1s Jan 09 00:10:06 crc kubenswrapper[4945]: > Jan 09 00:10:15 crc kubenswrapper[4945]: I0109 00:10:15.598632 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tzfrf" Jan 09 00:10:15 crc kubenswrapper[4945]: I0109 00:10:15.656584 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tzfrf" Jan 09 00:10:15 crc kubenswrapper[4945]: I0109 00:10:15.725027 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tzfrf"] Jan 09 00:10:15 crc kubenswrapper[4945]: I0109 00:10:15.841044 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q99xk"] Jan 09 00:10:15 crc kubenswrapper[4945]: I0109 00:10:15.841281 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q99xk" podUID="035037f1-e099-4416-a125-177d9aeef29f" containerName="registry-server" containerID="cri-o://a2103614b7d5e1394ff4423fad6e43d92a213554a8562c93e1d33516510a0f3b" gracePeriod=2 Jan 09 00:10:16 crc kubenswrapper[4945]: E0109 00:10:16.015655 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a2103614b7d5e1394ff4423fad6e43d92a213554a8562c93e1d33516510a0f3b is running failed: container process not found" containerID="a2103614b7d5e1394ff4423fad6e43d92a213554a8562c93e1d33516510a0f3b" cmd=["grpc_health_probe","-addr=:50051"] Jan 09 00:10:16 crc 
kubenswrapper[4945]: E0109 00:10:16.016301 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a2103614b7d5e1394ff4423fad6e43d92a213554a8562c93e1d33516510a0f3b is running failed: container process not found" containerID="a2103614b7d5e1394ff4423fad6e43d92a213554a8562c93e1d33516510a0f3b" cmd=["grpc_health_probe","-addr=:50051"] Jan 09 00:10:16 crc kubenswrapper[4945]: E0109 00:10:16.016630 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a2103614b7d5e1394ff4423fad6e43d92a213554a8562c93e1d33516510a0f3b is running failed: container process not found" containerID="a2103614b7d5e1394ff4423fad6e43d92a213554a8562c93e1d33516510a0f3b" cmd=["grpc_health_probe","-addr=:50051"] Jan 09 00:10:16 crc kubenswrapper[4945]: E0109 00:10:16.016659 4945 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a2103614b7d5e1394ff4423fad6e43d92a213554a8562c93e1d33516510a0f3b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-q99xk" podUID="035037f1-e099-4416-a125-177d9aeef29f" containerName="registry-server" Jan 09 00:10:18 crc kubenswrapper[4945]: I0109 00:10:18.562907 4945 generic.go:334] "Generic (PLEG): container finished" podID="035037f1-e099-4416-a125-177d9aeef29f" containerID="a2103614b7d5e1394ff4423fad6e43d92a213554a8562c93e1d33516510a0f3b" exitCode=0 Jan 09 00:10:18 crc kubenswrapper[4945]: I0109 00:10:18.563011 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q99xk" event={"ID":"035037f1-e099-4416-a125-177d9aeef29f","Type":"ContainerDied","Data":"a2103614b7d5e1394ff4423fad6e43d92a213554a8562c93e1d33516510a0f3b"} Jan 09 00:10:19 crc kubenswrapper[4945]: I0109 00:10:19.134194 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q99xk" Jan 09 00:10:19 crc kubenswrapper[4945]: I0109 00:10:19.267379 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/035037f1-e099-4416-a125-177d9aeef29f-utilities\") pod \"035037f1-e099-4416-a125-177d9aeef29f\" (UID: \"035037f1-e099-4416-a125-177d9aeef29f\") " Jan 09 00:10:19 crc kubenswrapper[4945]: I0109 00:10:19.267503 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mhhk\" (UniqueName: \"kubernetes.io/projected/035037f1-e099-4416-a125-177d9aeef29f-kube-api-access-9mhhk\") pod \"035037f1-e099-4416-a125-177d9aeef29f\" (UID: \"035037f1-e099-4416-a125-177d9aeef29f\") " Jan 09 00:10:19 crc kubenswrapper[4945]: I0109 00:10:19.267628 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/035037f1-e099-4416-a125-177d9aeef29f-catalog-content\") pod \"035037f1-e099-4416-a125-177d9aeef29f\" (UID: \"035037f1-e099-4416-a125-177d9aeef29f\") " Jan 09 00:10:19 crc kubenswrapper[4945]: I0109 00:10:19.268143 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/035037f1-e099-4416-a125-177d9aeef29f-utilities" (OuterVolumeSpecName: "utilities") pod "035037f1-e099-4416-a125-177d9aeef29f" (UID: "035037f1-e099-4416-a125-177d9aeef29f"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:10:19 crc kubenswrapper[4945]: I0109 00:10:19.276652 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/035037f1-e099-4416-a125-177d9aeef29f-kube-api-access-9mhhk" (OuterVolumeSpecName: "kube-api-access-9mhhk") pod "035037f1-e099-4416-a125-177d9aeef29f" (UID: "035037f1-e099-4416-a125-177d9aeef29f"). InnerVolumeSpecName "kube-api-access-9mhhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:10:19 crc kubenswrapper[4945]: I0109 00:10:19.318981 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/035037f1-e099-4416-a125-177d9aeef29f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "035037f1-e099-4416-a125-177d9aeef29f" (UID: "035037f1-e099-4416-a125-177d9aeef29f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:10:19 crc kubenswrapper[4945]: I0109 00:10:19.369219 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mhhk\" (UniqueName: \"kubernetes.io/projected/035037f1-e099-4416-a125-177d9aeef29f-kube-api-access-9mhhk\") on node \"crc\" DevicePath \"\"" Jan 09 00:10:19 crc kubenswrapper[4945]: I0109 00:10:19.369289 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/035037f1-e099-4416-a125-177d9aeef29f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 00:10:19 crc kubenswrapper[4945]: I0109 00:10:19.369301 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/035037f1-e099-4416-a125-177d9aeef29f-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 00:10:19 crc kubenswrapper[4945]: I0109 00:10:19.572694 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q99xk" event={"ID":"035037f1-e099-4416-a125-177d9aeef29f","Type":"ContainerDied","Data":"b6de9ec0e07072b014171a1a4415fb63682fa5a60f99a345d9f60f5bf8464fba"} Jan 09 00:10:19 crc kubenswrapper[4945]: I0109 00:10:19.572746 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q99xk" Jan 09 00:10:19 crc kubenswrapper[4945]: I0109 00:10:19.572762 4945 scope.go:117] "RemoveContainer" containerID="a2103614b7d5e1394ff4423fad6e43d92a213554a8562c93e1d33516510a0f3b" Jan 09 00:10:19 crc kubenswrapper[4945]: I0109 00:10:19.607171 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q99xk"] Jan 09 00:10:19 crc kubenswrapper[4945]: I0109 00:10:19.614163 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q99xk"] Jan 09 00:10:19 crc kubenswrapper[4945]: I0109 00:10:19.663869 4945 scope.go:117] "RemoveContainer" containerID="d9a1cc05795b2ce2b04a6d19fd727cda07b7e9254fd6c42fdb1fe8a913299836" Jan 09 00:10:19 crc kubenswrapper[4945]: I0109 00:10:19.684802 4945 scope.go:117] "RemoveContainer" containerID="0dd1d08d974b90e2be829c9061dfaa074e7a175f5a4827fdf64858fd9f7fd718" Jan 09 00:10:20 crc kubenswrapper[4945]: I0109 00:10:20.009720 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="035037f1-e099-4416-a125-177d9aeef29f" path="/var/lib/kubelet/pods/035037f1-e099-4416-a125-177d9aeef29f/volumes" Jan 09 00:11:43 crc kubenswrapper[4945]: I0109 00:11:43.578142 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:11:43 crc kubenswrapper[4945]: I0109 00:11:43.578781 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:12:13 crc kubenswrapper[4945]: I0109 00:12:13.578829 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:12:13 crc kubenswrapper[4945]: I0109 00:12:13.579263 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:12:43 crc kubenswrapper[4945]: I0109 00:12:43.577930 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:12:43 crc kubenswrapper[4945]: I0109 00:12:43.578592 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:12:43 crc kubenswrapper[4945]: I0109 00:12:43.578647 4945 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 09 00:12:43 crc kubenswrapper[4945]: I0109 00:12:43.579344 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc7dbad432cc69c90fc36491c945fff809ce4ae3f09b13f067bede41382c6e03"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 00:12:43 crc kubenswrapper[4945]: I0109 00:12:43.579397 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://bc7dbad432cc69c90fc36491c945fff809ce4ae3f09b13f067bede41382c6e03" gracePeriod=600 Jan 09 00:12:44 crc kubenswrapper[4945]: I0109 00:12:44.541726 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="bc7dbad432cc69c90fc36491c945fff809ce4ae3f09b13f067bede41382c6e03" exitCode=0 Jan 09 00:12:44 crc kubenswrapper[4945]: I0109 00:12:44.541805 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"bc7dbad432cc69c90fc36491c945fff809ce4ae3f09b13f067bede41382c6e03"} Jan 09 00:12:44 crc kubenswrapper[4945]: I0109 00:12:44.542284 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab"} Jan 09 00:12:44 crc kubenswrapper[4945]: I0109 00:12:44.542307 4945 scope.go:117] "RemoveContainer" containerID="7f9ea96331a098e282420ff4111bb07530805b5df0b45aadc173f1b9c3a4c024" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.154701 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5"] Jan 09 00:15:00 crc kubenswrapper[4945]: E0109 00:15:00.155759 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035037f1-e099-4416-a125-177d9aeef29f" containerName="extract-content" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.155777 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="035037f1-e099-4416-a125-177d9aeef29f" containerName="extract-content" Jan 09 00:15:00 crc kubenswrapper[4945]: E0109 00:15:00.155799 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035037f1-e099-4416-a125-177d9aeef29f" containerName="registry-server" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.155807 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="035037f1-e099-4416-a125-177d9aeef29f" containerName="registry-server" Jan 09 00:15:00 crc kubenswrapper[4945]: E0109 00:15:00.155828 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035037f1-e099-4416-a125-177d9aeef29f" containerName="extract-utilities" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.155837 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="035037f1-e099-4416-a125-177d9aeef29f" containerName="extract-utilities" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.156066 4945 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="035037f1-e099-4416-a125-177d9aeef29f" containerName="registry-server" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.156678 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.166436 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.169327 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5"] Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.172164 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.344949 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pql4c\" (UniqueName: \"kubernetes.io/projected/e2b4e739-91f2-48c0-a139-209dddd53a22-kube-api-access-pql4c\") pod \"collect-profiles-29465295-xq7r5\" (UID: \"e2b4e739-91f2-48c0-a139-209dddd53a22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.345055 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2b4e739-91f2-48c0-a139-209dddd53a22-config-volume\") pod \"collect-profiles-29465295-xq7r5\" (UID: \"e2b4e739-91f2-48c0-a139-209dddd53a22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.345089 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2b4e739-91f2-48c0-a139-209dddd53a22-secret-volume\") pod \"collect-profiles-29465295-xq7r5\" (UID: \"e2b4e739-91f2-48c0-a139-209dddd53a22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.447263 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2b4e739-91f2-48c0-a139-209dddd53a22-config-volume\") pod \"collect-profiles-29465295-xq7r5\" (UID: \"e2b4e739-91f2-48c0-a139-209dddd53a22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.447328 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2b4e739-91f2-48c0-a139-209dddd53a22-secret-volume\") pod \"collect-profiles-29465295-xq7r5\" (UID: \"e2b4e739-91f2-48c0-a139-209dddd53a22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.447426 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pql4c\" (UniqueName: \"kubernetes.io/projected/e2b4e739-91f2-48c0-a139-209dddd53a22-kube-api-access-pql4c\") pod \"collect-profiles-29465295-xq7r5\" (UID: \"e2b4e739-91f2-48c0-a139-209dddd53a22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 
00:15:00.448387 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2b4e739-91f2-48c0-a139-209dddd53a22-config-volume\") pod \"collect-profiles-29465295-xq7r5\" (UID: \"e2b4e739-91f2-48c0-a139-209dddd53a22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.453789 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2b4e739-91f2-48c0-a139-209dddd53a22-secret-volume\") pod \"collect-profiles-29465295-xq7r5\" (UID: \"e2b4e739-91f2-48c0-a139-209dddd53a22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.468118 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pql4c\" (UniqueName: \"kubernetes.io/projected/e2b4e739-91f2-48c0-a139-209dddd53a22-kube-api-access-pql4c\") pod \"collect-profiles-29465295-xq7r5\" (UID: \"e2b4e739-91f2-48c0-a139-209dddd53a22\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.485525 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5" Jan 09 00:15:00 crc kubenswrapper[4945]: I0109 00:15:00.904435 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5"] Jan 09 00:15:01 crc kubenswrapper[4945]: I0109 00:15:01.928188 4945 generic.go:334] "Generic (PLEG): container finished" podID="e2b4e739-91f2-48c0-a139-209dddd53a22" containerID="61715a64ea6f47dc27d07dd5b66ac0e8b674db25a5b3d2b41f21ba9bdb47234f" exitCode=0 Jan 09 00:15:01 crc kubenswrapper[4945]: I0109 00:15:01.928239 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5" event={"ID":"e2b4e739-91f2-48c0-a139-209dddd53a22","Type":"ContainerDied","Data":"61715a64ea6f47dc27d07dd5b66ac0e8b674db25a5b3d2b41f21ba9bdb47234f"} Jan 09 00:15:01 crc kubenswrapper[4945]: I0109 00:15:01.928309 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5" event={"ID":"e2b4e739-91f2-48c0-a139-209dddd53a22","Type":"ContainerStarted","Data":"5d268c8b4741006180067d10fdbdf09599c2ed8e7a6812c07b29f193bca728b7"} Jan 09 00:15:03 crc kubenswrapper[4945]: I0109 00:15:03.196570 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5" Jan 09 00:15:03 crc kubenswrapper[4945]: I0109 00:15:03.392773 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2b4e739-91f2-48c0-a139-209dddd53a22-config-volume\") pod \"e2b4e739-91f2-48c0-a139-209dddd53a22\" (UID: \"e2b4e739-91f2-48c0-a139-209dddd53a22\") " Jan 09 00:15:03 crc kubenswrapper[4945]: I0109 00:15:03.392832 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pql4c\" (UniqueName: \"kubernetes.io/projected/e2b4e739-91f2-48c0-a139-209dddd53a22-kube-api-access-pql4c\") pod \"e2b4e739-91f2-48c0-a139-209dddd53a22\" (UID: \"e2b4e739-91f2-48c0-a139-209dddd53a22\") " Jan 09 00:15:03 crc kubenswrapper[4945]: I0109 00:15:03.392875 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2b4e739-91f2-48c0-a139-209dddd53a22-secret-volume\") pod \"e2b4e739-91f2-48c0-a139-209dddd53a22\" (UID: \"e2b4e739-91f2-48c0-a139-209dddd53a22\") " Jan 09 00:15:03 crc kubenswrapper[4945]: I0109 00:15:03.393705 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2b4e739-91f2-48c0-a139-209dddd53a22-config-volume" (OuterVolumeSpecName: "config-volume") pod "e2b4e739-91f2-48c0-a139-209dddd53a22" (UID: "e2b4e739-91f2-48c0-a139-209dddd53a22"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:15:03 crc kubenswrapper[4945]: I0109 00:15:03.398378 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b4e739-91f2-48c0-a139-209dddd53a22-kube-api-access-pql4c" (OuterVolumeSpecName: "kube-api-access-pql4c") pod "e2b4e739-91f2-48c0-a139-209dddd53a22" (UID: "e2b4e739-91f2-48c0-a139-209dddd53a22"). InnerVolumeSpecName "kube-api-access-pql4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:15:03 crc kubenswrapper[4945]: I0109 00:15:03.398479 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b4e739-91f2-48c0-a139-209dddd53a22-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e2b4e739-91f2-48c0-a139-209dddd53a22" (UID: "e2b4e739-91f2-48c0-a139-209dddd53a22"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:15:03 crc kubenswrapper[4945]: I0109 00:15:03.494119 4945 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2b4e739-91f2-48c0-a139-209dddd53a22-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 00:15:03 crc kubenswrapper[4945]: I0109 00:15:03.494152 4945 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2b4e739-91f2-48c0-a139-209dddd53a22-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 00:15:03 crc kubenswrapper[4945]: I0109 00:15:03.494165 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pql4c\" (UniqueName: \"kubernetes.io/projected/e2b4e739-91f2-48c0-a139-209dddd53a22-kube-api-access-pql4c\") on node \"crc\" DevicePath \"\"" Jan 09 00:15:03 crc kubenswrapper[4945]: I0109 00:15:03.947812 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5" event={"ID":"e2b4e739-91f2-48c0-a139-209dddd53a22","Type":"ContainerDied","Data":"5d268c8b4741006180067d10fdbdf09599c2ed8e7a6812c07b29f193bca728b7"} Jan 09 00:15:03 crc kubenswrapper[4945]: I0109 00:15:03.947861 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d268c8b4741006180067d10fdbdf09599c2ed8e7a6812c07b29f193bca728b7" Jan 09 00:15:03 crc kubenswrapper[4945]: I0109 00:15:03.949232 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5" Jan 09 00:15:04 crc kubenswrapper[4945]: I0109 00:15:04.270083 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g"] Jan 09 00:15:04 crc kubenswrapper[4945]: I0109 00:15:04.276010 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465250-5jt8g"] Jan 09 00:15:06 crc kubenswrapper[4945]: I0109 00:15:06.009381 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9f9421f-a7ab-44ad-b6f9-e77a2982552d" path="/var/lib/kubelet/pods/d9f9421f-a7ab-44ad-b6f9-e77a2982552d/volumes" Jan 09 00:15:13 crc kubenswrapper[4945]: I0109 00:15:13.578629 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:15:13 crc kubenswrapper[4945]: I0109 00:15:13.579052 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:15:35 crc kubenswrapper[4945]: I0109 00:15:35.167494 4945 scope.go:117] "RemoveContainer" containerID="6c851d206f5fde290eca6ae1e5aeb87ffac1d7da437b2933bc164b60babb85b4" Jan 09 00:15:43 crc kubenswrapper[4945]: I0109 00:15:43.578072 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 09 00:15:43 crc kubenswrapper[4945]: I0109 00:15:43.578672 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:16:13 crc kubenswrapper[4945]: I0109 00:16:13.578648 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:16:13 crc kubenswrapper[4945]: I0109 00:16:13.579290 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:16:13 crc kubenswrapper[4945]: I0109 00:16:13.579345 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 09 00:16:13 crc kubenswrapper[4945]: I0109 00:16:13.580132 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 00:16:13 crc kubenswrapper[4945]: I0109 00:16:13.580207 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" gracePeriod=600 Jan 09 00:16:13 crc kubenswrapper[4945]: E0109 00:16:13.704938 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:16:14 crc kubenswrapper[4945]: I0109 00:16:14.438182 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" exitCode=0 Jan 09 00:16:14 crc kubenswrapper[4945]: I0109 00:16:14.438240 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab"} Jan 09 00:16:14 crc kubenswrapper[4945]: I0109 00:16:14.438285 4945 scope.go:117] "RemoveContainer" containerID="bc7dbad432cc69c90fc36491c945fff809ce4ae3f09b13f067bede41382c6e03" Jan 09 00:16:14 crc kubenswrapper[4945]: I0109 00:16:14.439052 4945 scope.go:117] "RemoveContainer" 
containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:16:14 crc kubenswrapper[4945]: E0109 00:16:14.439450 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:16:28 crc kubenswrapper[4945]: I0109 00:16:28.000390 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:16:28 crc kubenswrapper[4945]: E0109 00:16:28.002287 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:16:40 crc kubenswrapper[4945]: I0109 00:16:40.003670 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:16:40 crc kubenswrapper[4945]: E0109 00:16:40.004352 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:16:51 crc kubenswrapper[4945]: I0109 00:16:51.001077 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:16:51 crc kubenswrapper[4945]: E0109 00:16:51.001934 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:17:04 crc kubenswrapper[4945]: I0109 00:17:04.000450 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:17:04 crc kubenswrapper[4945]: E0109 00:17:04.001307 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:17:19 crc kubenswrapper[4945]: I0109 00:17:19.000943 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:17:19 crc kubenswrapper[4945]: E0109 00:17:19.002743 4945 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:17:32 crc kubenswrapper[4945]: I0109 00:17:32.000089 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:17:32 crc kubenswrapper[4945]: E0109 00:17:32.000758 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:17:43 crc kubenswrapper[4945]: I0109 00:17:43.001167 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:17:43 crc kubenswrapper[4945]: E0109 00:17:43.002279 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:17:54 crc kubenswrapper[4945]: I0109 00:17:54.000971 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:17:54 crc kubenswrapper[4945]: E0109 00:17:54.001865 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:18:06 crc kubenswrapper[4945]: I0109 00:18:06.001057 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:18:06 crc kubenswrapper[4945]: E0109 00:18:06.001732 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:18:17 crc kubenswrapper[4945]: I0109 00:18:16.999639 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:18:17 crc kubenswrapper[4945]: E0109 00:18:17.000294 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:18:31 crc kubenswrapper[4945]: I0109 00:18:31.000794 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:18:31 crc kubenswrapper[4945]: E0109 00:18:31.001546 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:18:43 crc kubenswrapper[4945]: I0109 00:18:43.001240 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:18:43 crc kubenswrapper[4945]: E0109 00:18:43.006079 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:18:54 crc kubenswrapper[4945]: I0109 00:18:54.001026 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:18:54 crc kubenswrapper[4945]: E0109 00:18:54.002307 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:19:09 crc kubenswrapper[4945]: I0109 00:19:08.999902 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:19:09 crc kubenswrapper[4945]: E0109 00:19:09.000679 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:19:23 crc kubenswrapper[4945]: I0109 00:19:23.000138 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:19:23 crc kubenswrapper[4945]: E0109 00:19:23.000975 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" 
podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:19:35 crc kubenswrapper[4945]: I0109 00:19:35.000678 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:19:35 crc kubenswrapper[4945]: E0109 00:19:35.001568 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:19:43 crc kubenswrapper[4945]: I0109 00:19:43.322049 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cp9j4"] Jan 09 00:19:43 crc kubenswrapper[4945]: E0109 00:19:43.325591 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2b4e739-91f2-48c0-a139-209dddd53a22" containerName="collect-profiles" Jan 09 00:19:43 crc kubenswrapper[4945]: I0109 00:19:43.325751 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b4e739-91f2-48c0-a139-209dddd53a22" containerName="collect-profiles" Jan 09 00:19:43 crc kubenswrapper[4945]: I0109 00:19:43.326227 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2b4e739-91f2-48c0-a139-209dddd53a22" containerName="collect-profiles" Jan 09 00:19:43 crc kubenswrapper[4945]: I0109 00:19:43.327731 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cp9j4" Jan 09 00:19:43 crc kubenswrapper[4945]: I0109 00:19:43.330580 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cp9j4"] Jan 09 00:19:43 crc kubenswrapper[4945]: I0109 00:19:43.446053 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fec3334-b6f2-450e-bb3a-39756cde743b-catalog-content\") pod \"certified-operators-cp9j4\" (UID: \"2fec3334-b6f2-450e-bb3a-39756cde743b\") " pod="openshift-marketplace/certified-operators-cp9j4" Jan 09 00:19:43 crc kubenswrapper[4945]: I0109 00:19:43.446120 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72wb5\" (UniqueName: \"kubernetes.io/projected/2fec3334-b6f2-450e-bb3a-39756cde743b-kube-api-access-72wb5\") pod \"certified-operators-cp9j4\" (UID: \"2fec3334-b6f2-450e-bb3a-39756cde743b\") " pod="openshift-marketplace/certified-operators-cp9j4" Jan 09 00:19:43 crc kubenswrapper[4945]: I0109 00:19:43.446142 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fec3334-b6f2-450e-bb3a-39756cde743b-utilities\") pod \"certified-operators-cp9j4\" (UID: \"2fec3334-b6f2-450e-bb3a-39756cde743b\") " pod="openshift-marketplace/certified-operators-cp9j4" Jan 09 00:19:43 crc kubenswrapper[4945]: I0109 00:19:43.548491 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fec3334-b6f2-450e-bb3a-39756cde743b-catalog-content\") pod \"certified-operators-cp9j4\" (UID: \"2fec3334-b6f2-450e-bb3a-39756cde743b\") " pod="openshift-marketplace/certified-operators-cp9j4" Jan 09 00:19:43 crc kubenswrapper[4945]: I0109 
Jan 09 00:19:43 crc kubenswrapper[4945]: I0109 00:19:43.549181 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72wb5\" (UniqueName: \"kubernetes.io/projected/2fec3334-b6f2-450e-bb3a-39756cde743b-kube-api-access-72wb5\") pod \"certified-operators-cp9j4\" (UID: \"2fec3334-b6f2-450e-bb3a-39756cde743b\") " pod="openshift-marketplace/certified-operators-cp9j4"
Jan 09 00:19:43 crc kubenswrapper[4945]: I0109 00:19:43.549141 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fec3334-b6f2-450e-bb3a-39756cde743b-catalog-content\") pod \"certified-operators-cp9j4\" (UID: \"2fec3334-b6f2-450e-bb3a-39756cde743b\") " pod="openshift-marketplace/certified-operators-cp9j4"
Jan 09 00:19:43 crc kubenswrapper[4945]: I0109 00:19:43.549577 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fec3334-b6f2-450e-bb3a-39756cde743b-utilities\") pod \"certified-operators-cp9j4\" (UID: \"2fec3334-b6f2-450e-bb3a-39756cde743b\") " pod="openshift-marketplace/certified-operators-cp9j4"
Jan 09 00:19:43 crc kubenswrapper[4945]: I0109 00:19:43.550039 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fec3334-b6f2-450e-bb3a-39756cde743b-utilities\") pod \"certified-operators-cp9j4\" (UID: \"2fec3334-b6f2-450e-bb3a-39756cde743b\") " pod="openshift-marketplace/certified-operators-cp9j4"
Jan 09 00:19:43 crc kubenswrapper[4945]: I0109 00:19:43.574126 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72wb5\" (UniqueName: \"kubernetes.io/projected/2fec3334-b6f2-450e-bb3a-39756cde743b-kube-api-access-72wb5\") pod \"certified-operators-cp9j4\" (UID: \"2fec3334-b6f2-450e-bb3a-39756cde743b\") " pod="openshift-marketplace/certified-operators-cp9j4"
Jan 09 00:19:43 crc kubenswrapper[4945]: I0109 00:19:43.659346 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cp9j4"
Jan 09 00:19:44 crc kubenswrapper[4945]: I0109 00:19:44.201816 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cp9j4"]
Jan 09 00:19:44 crc kubenswrapper[4945]: W0109 00:19:44.207910 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fec3334_b6f2_450e_bb3a_39756cde743b.slice/crio-19f650a648439b2765dff82adefee1940767032f73dc8bb8ffde0f8e9450e0e4 WatchSource:0}: Error finding container 19f650a648439b2765dff82adefee1940767032f73dc8bb8ffde0f8e9450e0e4: Status 404 returned error can't find the container with id 19f650a648439b2765dff82adefee1940767032f73dc8bb8ffde0f8e9450e0e4
Jan 09 00:19:44 crc kubenswrapper[4945]: I0109 00:19:44.713406 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2cdf6"]
Jan 09 00:19:44 crc kubenswrapper[4945]: I0109 00:19:44.714810 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2cdf6"
Jan 09 00:19:44 crc kubenswrapper[4945]: I0109 00:19:44.721501 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2cdf6"]
Jan 09 00:19:44 crc kubenswrapper[4945]: I0109 00:19:44.864650 4945 generic.go:334] "Generic (PLEG): container finished" podID="2fec3334-b6f2-450e-bb3a-39756cde743b" containerID="35366bb579039d1d3eac50764ced693422a1f64ed91a9095d391bf582118ded4" exitCode=0
Jan 09 00:19:44 crc kubenswrapper[4945]: I0109 00:19:44.864747 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp9j4" event={"ID":"2fec3334-b6f2-450e-bb3a-39756cde743b","Type":"ContainerDied","Data":"35366bb579039d1d3eac50764ced693422a1f64ed91a9095d391bf582118ded4"}
Jan 09 00:19:44 crc kubenswrapper[4945]: I0109 00:19:44.864785 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp9j4" event={"ID":"2fec3334-b6f2-450e-bb3a-39756cde743b","Type":"ContainerStarted","Data":"19f650a648439b2765dff82adefee1940767032f73dc8bb8ffde0f8e9450e0e4"}
Jan 09 00:19:44 crc kubenswrapper[4945]: I0109 00:19:44.867391 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c28594e0-0bec-4493-9c72-09cdfbdf0fae-catalog-content\") pod \"redhat-marketplace-2cdf6\" (UID: \"c28594e0-0bec-4493-9c72-09cdfbdf0fae\") " pod="openshift-marketplace/redhat-marketplace-2cdf6"
Jan 09 00:19:44 crc kubenswrapper[4945]: I0109 00:19:44.867433 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8tkd\" (UniqueName: \"kubernetes.io/projected/c28594e0-0bec-4493-9c72-09cdfbdf0fae-kube-api-access-v8tkd\") pod \"redhat-marketplace-2cdf6\" (UID: \"c28594e0-0bec-4493-9c72-09cdfbdf0fae\") " pod="openshift-marketplace/redhat-marketplace-2cdf6"
Jan 09 00:19:44 crc kubenswrapper[4945]: I0109 00:19:44.867508 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c28594e0-0bec-4493-9c72-09cdfbdf0fae-utilities\") pod \"redhat-marketplace-2cdf6\" (UID: \"c28594e0-0bec-4493-9c72-09cdfbdf0fae\") " pod="openshift-marketplace/redhat-marketplace-2cdf6"
Jan 09 00:19:44 crc kubenswrapper[4945]: I0109 00:19:44.868567 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 09 00:19:44 crc kubenswrapper[4945]: I0109 00:19:44.969127 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c28594e0-0bec-4493-9c72-09cdfbdf0fae-utilities\") pod \"redhat-marketplace-2cdf6\" (UID: \"c28594e0-0bec-4493-9c72-09cdfbdf0fae\") " pod="openshift-marketplace/redhat-marketplace-2cdf6"
Jan 09 00:19:44 crc kubenswrapper[4945]: I0109 00:19:44.969301 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c28594e0-0bec-4493-9c72-09cdfbdf0fae-catalog-content\") pod \"redhat-marketplace-2cdf6\" (UID: \"c28594e0-0bec-4493-9c72-09cdfbdf0fae\") " pod="openshift-marketplace/redhat-marketplace-2cdf6"
Jan 09 00:19:44 crc kubenswrapper[4945]: I0109 00:19:44.969350 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8tkd\" (UniqueName: \"kubernetes.io/projected/c28594e0-0bec-4493-9c72-09cdfbdf0fae-kube-api-access-v8tkd\") pod \"redhat-marketplace-2cdf6\" (UID: \"c28594e0-0bec-4493-9c72-09cdfbdf0fae\") " pod="openshift-marketplace/redhat-marketplace-2cdf6"
Jan 09 00:19:44 crc kubenswrapper[4945]: I0109 00:19:44.969950 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c28594e0-0bec-4493-9c72-09cdfbdf0fae-utilities\") pod \"redhat-marketplace-2cdf6\" (UID: \"c28594e0-0bec-4493-9c72-09cdfbdf0fae\") " pod="openshift-marketplace/redhat-marketplace-2cdf6"
Jan 09 00:19:44 crc kubenswrapper[4945]: I0109 00:19:44.970159 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c28594e0-0bec-4493-9c72-09cdfbdf0fae-catalog-content\") pod \"redhat-marketplace-2cdf6\" (UID: \"c28594e0-0bec-4493-9c72-09cdfbdf0fae\") " pod="openshift-marketplace/redhat-marketplace-2cdf6"
Jan 09 00:19:44 crc kubenswrapper[4945]: I0109 00:19:44.989566 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8tkd\" (UniqueName: \"kubernetes.io/projected/c28594e0-0bec-4493-9c72-09cdfbdf0fae-kube-api-access-v8tkd\") pod \"redhat-marketplace-2cdf6\" (UID: \"c28594e0-0bec-4493-9c72-09cdfbdf0fae\") " pod="openshift-marketplace/redhat-marketplace-2cdf6"
Jan 09 00:19:45 crc kubenswrapper[4945]: I0109 00:19:45.043238 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2cdf6"
Jan 09 00:19:45 crc kubenswrapper[4945]: I0109 00:19:45.473951 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2cdf6"]
Jan 09 00:19:45 crc kubenswrapper[4945]: W0109 00:19:45.475539 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc28594e0_0bec_4493_9c72_09cdfbdf0fae.slice/crio-321d4f5c24a5109d96323442db3c3d6878974a223155cce1fb734c5d64f97f4a WatchSource:0}: Error finding container 321d4f5c24a5109d96323442db3c3d6878974a223155cce1fb734c5d64f97f4a: Status 404 returned error can't find the container with id 321d4f5c24a5109d96323442db3c3d6878974a223155cce1fb734c5d64f97f4a
Jan 09 00:19:45 crc kubenswrapper[4945]: I0109 00:19:45.872462 4945 generic.go:334] "Generic (PLEG): container finished" podID="c28594e0-0bec-4493-9c72-09cdfbdf0fae" containerID="8b5613155677721c0a0c564ed8bf97fad7f830d93801da71c36fc65bdbfab4b7" exitCode=0
Jan 09 00:19:45 crc kubenswrapper[4945]: I0109 00:19:45.872509 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cdf6" event={"ID":"c28594e0-0bec-4493-9c72-09cdfbdf0fae","Type":"ContainerDied","Data":"8b5613155677721c0a0c564ed8bf97fad7f830d93801da71c36fc65bdbfab4b7"}
Jan 09 00:19:45 crc kubenswrapper[4945]: I0109 00:19:45.872544 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cdf6" event={"ID":"c28594e0-0bec-4493-9c72-09cdfbdf0fae","Type":"ContainerStarted","Data":"321d4f5c24a5109d96323442db3c3d6878974a223155cce1fb734c5d64f97f4a"}
Jan 09 00:19:46 crc kubenswrapper[4945]: I0109 00:19:46.508318 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sdfxm"]
Jan 09 00:19:46 crc kubenswrapper[4945]: I0109 00:19:46.510482 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdfxm"
Jan 09 00:19:46 crc kubenswrapper[4945]: I0109 00:19:46.528447 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sdfxm"]
Jan 09 00:19:46 crc kubenswrapper[4945]: I0109 00:19:46.699699 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-catalog-content\") pod \"redhat-operators-sdfxm\" (UID: \"156a96fe-4eb0-4bb1-bae5-dd93b28f1256\") " pod="openshift-marketplace/redhat-operators-sdfxm"
Jan 09 00:19:46 crc kubenswrapper[4945]: I0109 00:19:46.699821 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-utilities\") pod \"redhat-operators-sdfxm\" (UID: \"156a96fe-4eb0-4bb1-bae5-dd93b28f1256\") " pod="openshift-marketplace/redhat-operators-sdfxm"
Jan 09 00:19:46 crc kubenswrapper[4945]: I0109 00:19:46.699881 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52f6f\" (UniqueName: \"kubernetes.io/projected/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-kube-api-access-52f6f\") pod \"redhat-operators-sdfxm\" (UID: \"156a96fe-4eb0-4bb1-bae5-dd93b28f1256\") " pod="openshift-marketplace/redhat-operators-sdfxm"
Jan 09 00:19:46 crc kubenswrapper[4945]: I0109 00:19:46.800862 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-catalog-content\") pod \"redhat-operators-sdfxm\" (UID: \"156a96fe-4eb0-4bb1-bae5-dd93b28f1256\") " pod="openshift-marketplace/redhat-operators-sdfxm"
Jan 09 00:19:46 crc kubenswrapper[4945]: I0109 00:19:46.800948 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-utilities\") pod \"redhat-operators-sdfxm\" (UID: \"156a96fe-4eb0-4bb1-bae5-dd93b28f1256\") " pod="openshift-marketplace/redhat-operators-sdfxm"
Jan 09 00:19:46 crc kubenswrapper[4945]: I0109 00:19:46.801010 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52f6f\" (UniqueName: \"kubernetes.io/projected/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-kube-api-access-52f6f\") pod \"redhat-operators-sdfxm\" (UID: \"156a96fe-4eb0-4bb1-bae5-dd93b28f1256\") " pod="openshift-marketplace/redhat-operators-sdfxm"
Jan 09 00:19:46 crc kubenswrapper[4945]: I0109 00:19:46.801785 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-catalog-content\") pod \"redhat-operators-sdfxm\" (UID: \"156a96fe-4eb0-4bb1-bae5-dd93b28f1256\") " pod="openshift-marketplace/redhat-operators-sdfxm"
Jan 09 00:19:46 crc kubenswrapper[4945]: I0109 00:19:46.802062 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-utilities\") pod \"redhat-operators-sdfxm\" (UID: \"156a96fe-4eb0-4bb1-bae5-dd93b28f1256\") " pod="openshift-marketplace/redhat-operators-sdfxm"
Jan 09 00:19:46 crc kubenswrapper[4945]: I0109 00:19:46.825149 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52f6f\" (UniqueName: \"kubernetes.io/projected/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-kube-api-access-52f6f\") pod \"redhat-operators-sdfxm\" (UID: \"156a96fe-4eb0-4bb1-bae5-dd93b28f1256\") " pod="openshift-marketplace/redhat-operators-sdfxm"
Jan 09 00:19:46 crc kubenswrapper[4945]: I0109 00:19:46.834835 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdfxm"
Jan 09 00:19:46 crc kubenswrapper[4945]: I0109 00:19:46.882484 4945 generic.go:334] "Generic (PLEG): container finished" podID="2fec3334-b6f2-450e-bb3a-39756cde743b" containerID="c0654cf2482f7645bd19c3de13ab91adf74f888db511cb664ac1d45715492adc" exitCode=0
Jan 09 00:19:46 crc kubenswrapper[4945]: I0109 00:19:46.882563 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp9j4" event={"ID":"2fec3334-b6f2-450e-bb3a-39756cde743b","Type":"ContainerDied","Data":"c0654cf2482f7645bd19c3de13ab91adf74f888db511cb664ac1d45715492adc"}
Jan 09 00:19:47 crc kubenswrapper[4945]: I0109 00:19:47.274828 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sdfxm"]
Jan 09 00:19:47 crc kubenswrapper[4945]: W0109 00:19:47.370795 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod156a96fe_4eb0_4bb1_bae5_dd93b28f1256.slice/crio-7e0f77bdb5c8e295d09769e48a83046bbaf17e28536f2e231710b2365e664183 WatchSource:0}: Error finding container 7e0f77bdb5c8e295d09769e48a83046bbaf17e28536f2e231710b2365e664183: Status 404 returned error can't find the container with id 7e0f77bdb5c8e295d09769e48a83046bbaf17e28536f2e231710b2365e664183
Jan 09 00:19:47 crc kubenswrapper[4945]: I0109 00:19:47.896724 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdfxm" event={"ID":"156a96fe-4eb0-4bb1-bae5-dd93b28f1256","Type":"ContainerStarted","Data":"7e0f77bdb5c8e295d09769e48a83046bbaf17e28536f2e231710b2365e664183"}
Jan 09 00:19:48 crc kubenswrapper[4945]: I0109 00:19:48.908499 4945 generic.go:334] "Generic (PLEG): container finished" podID="156a96fe-4eb0-4bb1-bae5-dd93b28f1256" containerID="b4371db7e530ecfb721f176075127242233d14056f82fb572c6cba363113bbf3" exitCode=0
Jan 09 00:19:48 crc kubenswrapper[4945]: I0109 00:19:48.908639 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdfxm" event={"ID":"156a96fe-4eb0-4bb1-bae5-dd93b28f1256","Type":"ContainerDied","Data":"b4371db7e530ecfb721f176075127242233d14056f82fb572c6cba363113bbf3"}
Jan 09 00:19:49 crc kubenswrapper[4945]: I0109 00:19:49.000964 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab"
Jan 09 00:19:49 crc kubenswrapper[4945]: E0109 00:19:49.001322 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:19:49 crc kubenswrapper[4945]: I0109 00:19:49.921687 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp9j4" event={"ID":"2fec3334-b6f2-450e-bb3a-39756cde743b","Type":"ContainerStarted","Data":"6f11b1c70525b6d224e126e0427e840f836f11f33278fb8cd30fece4a3bfc73c"}
event={"ID":"2fec3334-b6f2-450e-bb3a-39756cde743b","Type":"ContainerStarted","Data":"6f11b1c70525b6d224e126e0427e840f836f11f33278fb8cd30fece4a3bfc73c"} Jan 09 00:19:49 crc kubenswrapper[4945]: I0109 00:19:49.924663 4945 generic.go:334] "Generic (PLEG): container finished" podID="c28594e0-0bec-4493-9c72-09cdfbdf0fae" containerID="0de3b09436245ed6702e6fe5f888a501fad4892b89fc28e6010dca5193b7c544" exitCode=0 Jan 09 00:19:49 crc kubenswrapper[4945]: I0109 00:19:49.924698 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cdf6" event={"ID":"c28594e0-0bec-4493-9c72-09cdfbdf0fae","Type":"ContainerDied","Data":"0de3b09436245ed6702e6fe5f888a501fad4892b89fc28e6010dca5193b7c544"} Jan 09 00:19:49 crc kubenswrapper[4945]: I0109 00:19:49.946668 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cp9j4" podStartSLOduration=3.02138431 podStartE2EDuration="6.94663162s" podCreationTimestamp="2026-01-09 00:19:43 +0000 UTC" firstStartedPulling="2026-01-09 00:19:44.868104924 +0000 UTC m=+3855.179263890" lastFinishedPulling="2026-01-09 00:19:48.793352254 +0000 UTC m=+3859.104511200" observedRunningTime="2026-01-09 00:19:49.940420798 +0000 UTC m=+3860.251579744" watchObservedRunningTime="2026-01-09 00:19:49.94663162 +0000 UTC m=+3860.257790576" Jan 09 00:19:50 crc kubenswrapper[4945]: I0109 00:19:50.942358 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdfxm" event={"ID":"156a96fe-4eb0-4bb1-bae5-dd93b28f1256","Type":"ContainerStarted","Data":"b00d56d2d4ad0fd7373c32ba2435b420231edbff63d8cd2926b0d387a5429bfd"} Jan 09 00:19:50 crc kubenswrapper[4945]: I0109 00:19:50.946288 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cdf6" event={"ID":"c28594e0-0bec-4493-9c72-09cdfbdf0fae","Type":"ContainerStarted","Data":"95647987c80ada246d1054a2573448ec812c56122b3d3279778e58615fc01f65"} Jan 09 00:19:50 crc kubenswrapper[4945]: I0109 00:19:50.985609 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2cdf6" podStartSLOduration=2.949098729 podStartE2EDuration="6.985590689s" podCreationTimestamp="2026-01-09 00:19:44 +0000 UTC" firstStartedPulling="2026-01-09 00:19:46.363744083 +0000 UTC m=+3856.674903029" lastFinishedPulling="2026-01-09 00:19:50.400236043 +0000 UTC m=+3860.711394989" observedRunningTime="2026-01-09 00:19:50.984075112 +0000 UTC m=+3861.295234078" watchObservedRunningTime="2026-01-09 00:19:50.985590689 +0000 UTC m=+3861.296749635" Jan 09 00:19:51 crc kubenswrapper[4945]: I0109 00:19:51.955111 4945 generic.go:334] "Generic (PLEG): container finished" podID="156a96fe-4eb0-4bb1-bae5-dd93b28f1256" containerID="b00d56d2d4ad0fd7373c32ba2435b420231edbff63d8cd2926b0d387a5429bfd" exitCode=0 Jan 09 00:19:51 crc kubenswrapper[4945]: I0109 00:19:51.955165 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdfxm" event={"ID":"156a96fe-4eb0-4bb1-bae5-dd93b28f1256","Type":"ContainerDied","Data":"b00d56d2d4ad0fd7373c32ba2435b420231edbff63d8cd2926b0d387a5429bfd"} Jan 09 00:19:52 crc kubenswrapper[4945]: I0109 00:19:52.965237 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdfxm" event={"ID":"156a96fe-4eb0-4bb1-bae5-dd93b28f1256","Type":"ContainerStarted","Data":"875cb9fe551024321b82d0c76ee8af9dccb575612b3446d0a0dc0aaad9e13b69"} Jan 09 00:19:52 crc 
kubenswrapper[4945]: I0109 00:19:52.987453 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sdfxm" podStartSLOduration=3.143356614 podStartE2EDuration="6.987430492s" podCreationTimestamp="2026-01-09 00:19:46 +0000 UTC" firstStartedPulling="2026-01-09 00:19:48.910415897 +0000 UTC m=+3859.221574843" lastFinishedPulling="2026-01-09 00:19:52.754489775 +0000 UTC m=+3863.065648721" observedRunningTime="2026-01-09 00:19:52.982765348 +0000 UTC m=+3863.293924314" watchObservedRunningTime="2026-01-09 00:19:52.987430492 +0000 UTC m=+3863.298589448" Jan 09 00:19:53 crc kubenswrapper[4945]: I0109 00:19:53.659746 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cp9j4" Jan 09 00:19:53 crc kubenswrapper[4945]: I0109 00:19:53.659807 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cp9j4" Jan 09 00:19:53 crc kubenswrapper[4945]: I0109 00:19:53.702580 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cp9j4" Jan 09 00:19:55 crc kubenswrapper[4945]: I0109 00:19:55.044322 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2cdf6" Jan 09 00:19:55 crc kubenswrapper[4945]: I0109 00:19:55.044393 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2cdf6" Jan 09 00:19:55 crc kubenswrapper[4945]: I0109 00:19:55.089616 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2cdf6" Jan 09 00:19:56 crc kubenswrapper[4945]: I0109 00:19:56.033946 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2cdf6" Jan 09 00:19:56 crc kubenswrapper[4945]: I0109 00:19:56.836218 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sdfxm" Jan 09 00:19:56 crc kubenswrapper[4945]: I0109 00:19:56.836368 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sdfxm" Jan 09 00:19:58 crc kubenswrapper[4945]: I0109 00:19:58.199819 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sdfxm" podUID="156a96fe-4eb0-4bb1-bae5-dd93b28f1256" containerName="registry-server" probeResult="failure" output=< Jan 09 00:19:58 crc kubenswrapper[4945]: timeout: failed to connect service ":50051" within 1s Jan 09 00:19:58 crc kubenswrapper[4945]: > Jan 09 00:19:58 crc kubenswrapper[4945]: I0109 00:19:58.903156 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2cdf6"] Jan 09 00:19:58 crc kubenswrapper[4945]: I0109 00:19:58.903800 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2cdf6" podUID="c28594e0-0bec-4493-9c72-09cdfbdf0fae" containerName="registry-server" containerID="cri-o://95647987c80ada246d1054a2573448ec812c56122b3d3279778e58615fc01f65" gracePeriod=2 Jan 09 00:20:01 crc kubenswrapper[4945]: I0109 00:20:01.019132 4945 generic.go:334] "Generic (PLEG): container finished" podID="c28594e0-0bec-4493-9c72-09cdfbdf0fae" containerID="95647987c80ada246d1054a2573448ec812c56122b3d3279778e58615fc01f65" exitCode=0 Jan 09 00:20:01 crc 
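The pod_startup_latency_tracker entries above carry a small piece of arithmetic worth making explicit: the SLO duration is the end-to-end startup time minus the image-pulling window. For certified-operators-cp9j4, (00:19:49.9466 - 00:19:43) - (00:19:48.7934 - 00:19:44.8681) ≈ 6.947s - 3.925s ≈ 3.021s, matching the logged podStartSLOduration. A sketch of that computation, using timestamps taken from the log:

    package main

    import (
        "fmt"
        "time"
    )

    // sloDuration reproduces the arithmetic behind the
    // pod_startup_latency_tracker entries: SLO duration is the end-to-end
    // startup time minus the image-pulling window.
    func sloDuration(created, running, pullStart, pullEnd time.Time) (e2e, slo time.Duration) {
        e2e = running.Sub(created)
        slo = e2e - pullEnd.Sub(pullStart)
        return
    }

    func main() {
        parse := func(s string) time.Time {
            t, _ := time.Parse(time.RFC3339Nano, s)
            return t
        }
        e2e, slo := sloDuration(
            parse("2026-01-09T00:19:43Z"),           // podCreationTimestamp
            parse("2026-01-09T00:19:49.94663162Z"),  // watchObservedRunningTime
            parse("2026-01-09T00:19:44.868104924Z"), // firstStartedPulling
            parse("2026-01-09T00:19:48.793352254Z"), // lastFinishedPulling
        )
        fmt.Println(e2e, slo) // ≈6.947s and ≈3.021s, matching the logged durations
    }
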
Jan 09 00:20:01 crc kubenswrapper[4945]: I0109 00:20:01.019566 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cdf6" event={"ID":"c28594e0-0bec-4493-9c72-09cdfbdf0fae","Type":"ContainerDied","Data":"95647987c80ada246d1054a2573448ec812c56122b3d3279778e58615fc01f65"}
Jan 09 00:20:01 crc kubenswrapper[4945]: I0109 00:20:01.096787 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2cdf6"
Jan 09 00:20:01 crc kubenswrapper[4945]: I0109 00:20:01.105041 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c28594e0-0bec-4493-9c72-09cdfbdf0fae-catalog-content\") pod \"c28594e0-0bec-4493-9c72-09cdfbdf0fae\" (UID: \"c28594e0-0bec-4493-9c72-09cdfbdf0fae\") "
Jan 09 00:20:01 crc kubenswrapper[4945]: I0109 00:20:01.105391 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8tkd\" (UniqueName: \"kubernetes.io/projected/c28594e0-0bec-4493-9c72-09cdfbdf0fae-kube-api-access-v8tkd\") pod \"c28594e0-0bec-4493-9c72-09cdfbdf0fae\" (UID: \"c28594e0-0bec-4493-9c72-09cdfbdf0fae\") "
Jan 09 00:20:01 crc kubenswrapper[4945]: I0109 00:20:01.105529 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c28594e0-0bec-4493-9c72-09cdfbdf0fae-utilities\") pod \"c28594e0-0bec-4493-9c72-09cdfbdf0fae\" (UID: \"c28594e0-0bec-4493-9c72-09cdfbdf0fae\") "
Jan 09 00:20:01 crc kubenswrapper[4945]: I0109 00:20:01.107199 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c28594e0-0bec-4493-9c72-09cdfbdf0fae-utilities" (OuterVolumeSpecName: "utilities") pod "c28594e0-0bec-4493-9c72-09cdfbdf0fae" (UID: "c28594e0-0bec-4493-9c72-09cdfbdf0fae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 00:20:01 crc kubenswrapper[4945]: I0109 00:20:01.115004 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c28594e0-0bec-4493-9c72-09cdfbdf0fae-kube-api-access-v8tkd" (OuterVolumeSpecName: "kube-api-access-v8tkd") pod "c28594e0-0bec-4493-9c72-09cdfbdf0fae" (UID: "c28594e0-0bec-4493-9c72-09cdfbdf0fae"). InnerVolumeSpecName "kube-api-access-v8tkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:20:01 crc kubenswrapper[4945]: I0109 00:20:01.136200 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c28594e0-0bec-4493-9c72-09cdfbdf0fae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c28594e0-0bec-4493-9c72-09cdfbdf0fae" (UID: "c28594e0-0bec-4493-9c72-09cdfbdf0fae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 00:20:01 crc kubenswrapper[4945]: I0109 00:20:01.207851 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c28594e0-0bec-4493-9c72-09cdfbdf0fae-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 00:20:01 crc kubenswrapper[4945]: I0109 00:20:01.208126 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8tkd\" (UniqueName: \"kubernetes.io/projected/c28594e0-0bec-4493-9c72-09cdfbdf0fae-kube-api-access-v8tkd\") on node \"crc\" DevicePath \"\""
Jan 09 00:20:01 crc kubenswrapper[4945]: I0109 00:20:01.208143 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c28594e0-0bec-4493-9c72-09cdfbdf0fae-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 00:20:02 crc kubenswrapper[4945]: I0109 00:20:02.000682 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab"
Jan 09 00:20:02 crc kubenswrapper[4945]: E0109 00:20:02.001286 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:20:02 crc kubenswrapper[4945]: I0109 00:20:02.032550 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cdf6" event={"ID":"c28594e0-0bec-4493-9c72-09cdfbdf0fae","Type":"ContainerDied","Data":"321d4f5c24a5109d96323442db3c3d6878974a223155cce1fb734c5d64f97f4a"}
Jan 09 00:20:02 crc kubenswrapper[4945]: I0109 00:20:02.034133 4945 scope.go:117] "RemoveContainer" containerID="95647987c80ada246d1054a2573448ec812c56122b3d3279778e58615fc01f65"
Jan 09 00:20:02 crc kubenswrapper[4945]: I0109 00:20:02.034267 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2cdf6"
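The reconciler_common entries above are the volume manager's reconcile pattern at work: compare the desired state of the world (pods that should have volumes) with the actual state (what is mounted), then mount or unmount the difference. A minimal sketch of that core loop, with illustrative names:

    package main

    import "fmt"

    // reconcile sketches the volume manager's loop as suggested by the
    // reconciler_common entries: unmount what is mounted but no longer
    // desired, mount what is desired but not yet mounted.
    func reconcile(desired, actual map[string]bool) (mounts, unmounts []string) {
        for v := range actual {
            if !desired[v] {
                unmounts = append(unmounts, v) // "operationExecutor.UnmountVolume started"
            }
        }
        for v := range desired {
            if !actual[v] {
                mounts = append(mounts, v) // "operationExecutor.MountVolume started"
            }
        }
        return
    }

    func main() {
        // After the DELETE of redhat-marketplace-2cdf6, its volumes drop out
        // of the desired state while still present in the actual state.
        actual := map[string]bool{"catalog-content": true, "utilities": true, "kube-api-access-v8tkd": true}
        desired := map[string]bool{}
        m, u := reconcile(desired, actual)
        fmt.Println("mount:", m, "unmount:", u)
    }
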
Jan 09 00:20:02 crc kubenswrapper[4945]: I0109 00:20:02.053695 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2cdf6"]
Jan 09 00:20:02 crc kubenswrapper[4945]: I0109 00:20:02.058507 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2cdf6"]
Jan 09 00:20:02 crc kubenswrapper[4945]: I0109 00:20:02.068089 4945 scope.go:117] "RemoveContainer" containerID="0de3b09436245ed6702e6fe5f888a501fad4892b89fc28e6010dca5193b7c544"
Jan 09 00:20:02 crc kubenswrapper[4945]: I0109 00:20:02.085289 4945 scope.go:117] "RemoveContainer" containerID="8b5613155677721c0a0c564ed8bf97fad7f830d93801da71c36fc65bdbfab4b7"
Jan 09 00:20:03 crc kubenswrapper[4945]: I0109 00:20:03.701311 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cp9j4"
Jan 09 00:20:04 crc kubenswrapper[4945]: I0109 00:20:04.009288 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c28594e0-0bec-4493-9c72-09cdfbdf0fae" path="/var/lib/kubelet/pods/c28594e0-0bec-4493-9c72-09cdfbdf0fae/volumes"
Jan 09 00:20:04 crc kubenswrapper[4945]: I0109 00:20:04.297950 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cp9j4"]
Jan 09 00:20:04 crc kubenswrapper[4945]: I0109 00:20:04.298465 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cp9j4" podUID="2fec3334-b6f2-450e-bb3a-39756cde743b" containerName="registry-server" containerID="cri-o://6f11b1c70525b6d224e126e0427e840f836f11f33278fb8cd30fece4a3bfc73c" gracePeriod=2
Jan 09 00:20:05 crc kubenswrapper[4945]: I0109 00:20:05.777891 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cp9j4"
Jan 09 00:20:05 crc kubenswrapper[4945]: I0109 00:20:05.973483 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72wb5\" (UniqueName: \"kubernetes.io/projected/2fec3334-b6f2-450e-bb3a-39756cde743b-kube-api-access-72wb5\") pod \"2fec3334-b6f2-450e-bb3a-39756cde743b\" (UID: \"2fec3334-b6f2-450e-bb3a-39756cde743b\") "
Jan 09 00:20:05 crc kubenswrapper[4945]: I0109 00:20:05.973866 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fec3334-b6f2-450e-bb3a-39756cde743b-catalog-content\") pod \"2fec3334-b6f2-450e-bb3a-39756cde743b\" (UID: \"2fec3334-b6f2-450e-bb3a-39756cde743b\") "
Jan 09 00:20:05 crc kubenswrapper[4945]: I0109 00:20:05.974068 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fec3334-b6f2-450e-bb3a-39756cde743b-utilities\") pod \"2fec3334-b6f2-450e-bb3a-39756cde743b\" (UID: \"2fec3334-b6f2-450e-bb3a-39756cde743b\") "
Jan 09 00:20:05 crc kubenswrapper[4945]: I0109 00:20:05.974759 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fec3334-b6f2-450e-bb3a-39756cde743b-utilities" (OuterVolumeSpecName: "utilities") pod "2fec3334-b6f2-450e-bb3a-39756cde743b" (UID: "2fec3334-b6f2-450e-bb3a-39756cde743b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 00:20:05 crc kubenswrapper[4945]: I0109 00:20:05.980021 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fec3334-b6f2-450e-bb3a-39756cde743b-kube-api-access-72wb5" (OuterVolumeSpecName: "kube-api-access-72wb5") pod "2fec3334-b6f2-450e-bb3a-39756cde743b" (UID: "2fec3334-b6f2-450e-bb3a-39756cde743b"). InnerVolumeSpecName "kube-api-access-72wb5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.023698 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fec3334-b6f2-450e-bb3a-39756cde743b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fec3334-b6f2-450e-bb3a-39756cde743b" (UID: "2fec3334-b6f2-450e-bb3a-39756cde743b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.064385 4945 generic.go:334] "Generic (PLEG): container finished" podID="2fec3334-b6f2-450e-bb3a-39756cde743b" containerID="6f11b1c70525b6d224e126e0427e840f836f11f33278fb8cd30fece4a3bfc73c" exitCode=0
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.064449 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp9j4" event={"ID":"2fec3334-b6f2-450e-bb3a-39756cde743b","Type":"ContainerDied","Data":"6f11b1c70525b6d224e126e0427e840f836f11f33278fb8cd30fece4a3bfc73c"}
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.064483 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp9j4" event={"ID":"2fec3334-b6f2-450e-bb3a-39756cde743b","Type":"ContainerDied","Data":"19f650a648439b2765dff82adefee1940767032f73dc8bb8ffde0f8e9450e0e4"}
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.064506 4945 scope.go:117] "RemoveContainer" containerID="6f11b1c70525b6d224e126e0427e840f836f11f33278fb8cd30fece4a3bfc73c"
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.064676 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cp9j4"
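"Killing container with a grace period ... gracePeriod=2" above names the standard termination sequence: ask the container to stop, wait up to the grace period, then force-kill it. The kubelet drives this through the CRI rather than by signalling processes directly; the sketch below only shows the shape of the logic at the process level (Unix signals assumed):

    package main

    import (
        "os/exec"
        "syscall"
        "time"
    )

    // killWithGrace sends SIGTERM, waits up to grace for the process to
    // exit, then escalates to SIGKILL -- the sequence behind gracePeriod=2.
    func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        cmd.Process.Signal(syscall.SIGTERM)
        select {
        case <-done: // exited within the grace period
        case <-time.After(grace):
            cmd.Process.Kill() // grace expired: SIGKILL
            <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        cmd.Start()
        killWithGrace(cmd, 2*time.Second)
    }
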
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.076428 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72wb5\" (UniqueName: \"kubernetes.io/projected/2fec3334-b6f2-450e-bb3a-39756cde743b-kube-api-access-72wb5\") on node \"crc\" DevicePath \"\""
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.076489 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fec3334-b6f2-450e-bb3a-39756cde743b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.076504 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fec3334-b6f2-450e-bb3a-39756cde743b-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.095607 4945 scope.go:117] "RemoveContainer" containerID="c0654cf2482f7645bd19c3de13ab91adf74f888db511cb664ac1d45715492adc"
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.099086 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cp9j4"]
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.104454 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cp9j4"]
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.129575 4945 scope.go:117] "RemoveContainer" containerID="35366bb579039d1d3eac50764ced693422a1f64ed91a9095d391bf582118ded4"
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.145833 4945 scope.go:117] "RemoveContainer" containerID="6f11b1c70525b6d224e126e0427e840f836f11f33278fb8cd30fece4a3bfc73c"
Jan 09 00:20:06 crc kubenswrapper[4945]: E0109 00:20:06.146403 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f11b1c70525b6d224e126e0427e840f836f11f33278fb8cd30fece4a3bfc73c\": container with ID starting with 6f11b1c70525b6d224e126e0427e840f836f11f33278fb8cd30fece4a3bfc73c not found: ID does not exist" containerID="6f11b1c70525b6d224e126e0427e840f836f11f33278fb8cd30fece4a3bfc73c"
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.146447 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f11b1c70525b6d224e126e0427e840f836f11f33278fb8cd30fece4a3bfc73c"} err="failed to get container status \"6f11b1c70525b6d224e126e0427e840f836f11f33278fb8cd30fece4a3bfc73c\": rpc error: code = NotFound desc = could not find container \"6f11b1c70525b6d224e126e0427e840f836f11f33278fb8cd30fece4a3bfc73c\": container with ID starting with 6f11b1c70525b6d224e126e0427e840f836f11f33278fb8cd30fece4a3bfc73c not found: ID does not exist"
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.146475 4945 scope.go:117] "RemoveContainer" containerID="c0654cf2482f7645bd19c3de13ab91adf74f888db511cb664ac1d45715492adc"
Jan 09 00:20:06 crc kubenswrapper[4945]: E0109 00:20:06.146828 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0654cf2482f7645bd19c3de13ab91adf74f888db511cb664ac1d45715492adc\": container with ID starting with c0654cf2482f7645bd19c3de13ab91adf74f888db511cb664ac1d45715492adc not found: ID does not exist" containerID="c0654cf2482f7645bd19c3de13ab91adf74f888db511cb664ac1d45715492adc"
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.146859 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0654cf2482f7645bd19c3de13ab91adf74f888db511cb664ac1d45715492adc"} err="failed to get container status \"c0654cf2482f7645bd19c3de13ab91adf74f888db511cb664ac1d45715492adc\": rpc error: code = NotFound desc = could not find container \"c0654cf2482f7645bd19c3de13ab91adf74f888db511cb664ac1d45715492adc\": container with ID starting with c0654cf2482f7645bd19c3de13ab91adf74f888db511cb664ac1d45715492adc not found: ID does not exist"
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.146873 4945 scope.go:117] "RemoveContainer" containerID="35366bb579039d1d3eac50764ced693422a1f64ed91a9095d391bf582118ded4"
Jan 09 00:20:06 crc kubenswrapper[4945]: E0109 00:20:06.147109 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35366bb579039d1d3eac50764ced693422a1f64ed91a9095d391bf582118ded4\": container with ID starting with 35366bb579039d1d3eac50764ced693422a1f64ed91a9095d391bf582118ded4 not found: ID does not exist" containerID="35366bb579039d1d3eac50764ced693422a1f64ed91a9095d391bf582118ded4"
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.147138 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35366bb579039d1d3eac50764ced693422a1f64ed91a9095d391bf582118ded4"} err="failed to get container status \"35366bb579039d1d3eac50764ced693422a1f64ed91a9095d391bf582118ded4\": rpc error: code = NotFound desc = could not find container \"35366bb579039d1d3eac50764ced693422a1f64ed91a9095d391bf582118ded4\": container with ID starting with 35366bb579039d1d3eac50764ced693422a1f64ed91a9095d391bf582118ded4 not found: ID does not exist"
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.876344 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sdfxm"
Jan 09 00:20:06 crc kubenswrapper[4945]: I0109 00:20:06.920237 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sdfxm"
Jan 09 00:20:08 crc kubenswrapper[4945]: I0109 00:20:08.009702 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fec3334-b6f2-450e-bb3a-39756cde743b" path="/var/lib/kubelet/pods/2fec3334-b6f2-450e-bb3a-39756cde743b/volumes"
Jan 09 00:20:08 crc kubenswrapper[4945]: I0109 00:20:08.701324 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sdfxm"]
Jan 09 00:20:08 crc kubenswrapper[4945]: I0109 00:20:08.701558 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sdfxm" podUID="156a96fe-4eb0-4bb1-bae5-dd93b28f1256" containerName="registry-server" containerID="cri-o://875cb9fe551024321b82d0c76ee8af9dccb575612b3446d0a0dc0aaad9e13b69" gracePeriod=2
Jan 09 00:20:09 crc kubenswrapper[4945]: I0109 00:20:09.089071 4945 generic.go:334] "Generic (PLEG): container finished" podID="156a96fe-4eb0-4bb1-bae5-dd93b28f1256" containerID="875cb9fe551024321b82d0c76ee8af9dccb575612b3446d0a0dc0aaad9e13b69" exitCode=0
Jan 09 00:20:09 crc kubenswrapper[4945]: I0109 00:20:09.089421 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdfxm" event={"ID":"156a96fe-4eb0-4bb1-bae5-dd93b28f1256","Type":"ContainerDied","Data":"875cb9fe551024321b82d0c76ee8af9dccb575612b3446d0a0dc0aaad9e13b69"}
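The "ContainerStatus from runtime service failed ... code = NotFound" pairs above are benign: a second RemoveContainer for an ID that was already deleted gets a gRPC NotFound from the runtime, which the kubelet logs and then moves past, treating deletion as idempotent. A sketch of that error classification (assuming the standard google.golang.org/grpc status and codes packages):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // alreadyGone reports whether a runtime error is the gRPC NotFound seen
    // in the log ("rpc error: code = NotFound desc = could not find
    // container ..."); such a deletion can be treated as already done.
    func alreadyGone(err error) bool {
        return status.Code(err) == codes.NotFound
    }

    func main() {
        err := status.Error(codes.NotFound, "could not find container")
        fmt.Println(alreadyGone(err)) // true: safe to ignore, as the kubelet does
    }
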
Jan 09 00:20:09 crc kubenswrapper[4945]: I0109 00:20:09.337510 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdfxm"
Jan 09 00:20:09 crc kubenswrapper[4945]: I0109 00:20:09.523269 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-catalog-content\") pod \"156a96fe-4eb0-4bb1-bae5-dd93b28f1256\" (UID: \"156a96fe-4eb0-4bb1-bae5-dd93b28f1256\") "
Jan 09 00:20:09 crc kubenswrapper[4945]: I0109 00:20:09.523398 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-utilities\") pod \"156a96fe-4eb0-4bb1-bae5-dd93b28f1256\" (UID: \"156a96fe-4eb0-4bb1-bae5-dd93b28f1256\") "
Jan 09 00:20:09 crc kubenswrapper[4945]: I0109 00:20:09.523492 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52f6f\" (UniqueName: \"kubernetes.io/projected/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-kube-api-access-52f6f\") pod \"156a96fe-4eb0-4bb1-bae5-dd93b28f1256\" (UID: \"156a96fe-4eb0-4bb1-bae5-dd93b28f1256\") "
Jan 09 00:20:09 crc kubenswrapper[4945]: I0109 00:20:09.524479 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-utilities" (OuterVolumeSpecName: "utilities") pod "156a96fe-4eb0-4bb1-bae5-dd93b28f1256" (UID: "156a96fe-4eb0-4bb1-bae5-dd93b28f1256"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 00:20:09 crc kubenswrapper[4945]: I0109 00:20:09.529188 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-kube-api-access-52f6f" (OuterVolumeSpecName: "kube-api-access-52f6f") pod "156a96fe-4eb0-4bb1-bae5-dd93b28f1256" (UID: "156a96fe-4eb0-4bb1-bae5-dd93b28f1256"). InnerVolumeSpecName "kube-api-access-52f6f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:20:09 crc kubenswrapper[4945]: I0109 00:20:09.624950 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 00:20:09 crc kubenswrapper[4945]: I0109 00:20:09.625018 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52f6f\" (UniqueName: \"kubernetes.io/projected/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-kube-api-access-52f6f\") on node \"crc\" DevicePath \"\""
Jan 09 00:20:09 crc kubenswrapper[4945]: I0109 00:20:09.654496 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "156a96fe-4eb0-4bb1-bae5-dd93b28f1256" (UID: "156a96fe-4eb0-4bb1-bae5-dd93b28f1256"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 00:20:09 crc kubenswrapper[4945]: I0109 00:20:09.726739 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/156a96fe-4eb0-4bb1-bae5-dd93b28f1256-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 00:20:10 crc kubenswrapper[4945]: I0109 00:20:10.099247 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sdfxm" event={"ID":"156a96fe-4eb0-4bb1-bae5-dd93b28f1256","Type":"ContainerDied","Data":"7e0f77bdb5c8e295d09769e48a83046bbaf17e28536f2e231710b2365e664183"}
Jan 09 00:20:10 crc kubenswrapper[4945]: I0109 00:20:10.099333 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sdfxm"
Jan 09 00:20:10 crc kubenswrapper[4945]: I0109 00:20:10.099617 4945 scope.go:117] "RemoveContainer" containerID="875cb9fe551024321b82d0c76ee8af9dccb575612b3446d0a0dc0aaad9e13b69"
Jan 09 00:20:10 crc kubenswrapper[4945]: I0109 00:20:10.126245 4945 scope.go:117] "RemoveContainer" containerID="b00d56d2d4ad0fd7373c32ba2435b420231edbff63d8cd2926b0d387a5429bfd"
Jan 09 00:20:10 crc kubenswrapper[4945]: I0109 00:20:10.126482 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sdfxm"]
Jan 09 00:20:10 crc kubenswrapper[4945]: I0109 00:20:10.135081 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sdfxm"]
Jan 09 00:20:10 crc kubenswrapper[4945]: I0109 00:20:10.147346 4945 scope.go:117] "RemoveContainer" containerID="b4371db7e530ecfb721f176075127242233d14056f82fb572c6cba363113bbf3"
Jan 09 00:20:12 crc kubenswrapper[4945]: I0109 00:20:12.009016 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="156a96fe-4eb0-4bb1-bae5-dd93b28f1256" path="/var/lib/kubelet/pods/156a96fe-4eb0-4bb1-bae5-dd93b28f1256/volumes"
Jan 09 00:20:13 crc kubenswrapper[4945]: I0109 00:20:13.000837 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab"
Jan 09 00:20:13 crc kubenswrapper[4945]: E0109 00:20:13.001131 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.273835 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d2l27"]
Jan 09 00:20:24 crc kubenswrapper[4945]: E0109 00:20:24.274809 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c28594e0-0bec-4493-9c72-09cdfbdf0fae" containerName="registry-server"
Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.274827 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c28594e0-0bec-4493-9c72-09cdfbdf0fae" containerName="registry-server"
Jan 09 00:20:24 crc kubenswrapper[4945]: E0109 00:20:24.274847 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="156a96fe-4eb0-4bb1-bae5-dd93b28f1256" containerName="extract-utilities"
Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.274855 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="156a96fe-4eb0-4bb1-bae5-dd93b28f1256" containerName="extract-utilities"
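The cpu_manager/state_mem/memory_manager triplets that begin here show per-container resource-manager state being garbage-collected once the owning pods are gone: each manager keeps assignments keyed by pod UID and container name and drops entries whose pod no longer exists. A sketch of that cleanup shape (names and state values illustrative):

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // removeStaleState sketches what cpu_manager/memory_manager do in the
    // entries below: drop any assignment whose pod is gone from the node.
    func removeStaleState(assignments map[key]string, livePods map[string]bool) {
        for k := range assignments {
            if !livePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
                    k.podUID, k.container)
                delete(assignments, k)
            }
        }
    }

    func main() {
        a := map[key]string{
            {"c28594e0-0bec-4493-9c72-09cdfbdf0fae", "registry-server"}: "cpuset=0-3",
        }
        removeStaleState(a, map[string]bool{}) // the pod was already deleted
        fmt.Println(len(a))                    // 0
    }
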
podUID="156a96fe-4eb0-4bb1-bae5-dd93b28f1256" containerName="extract-utilities" Jan 09 00:20:24 crc kubenswrapper[4945]: E0109 00:20:24.274866 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fec3334-b6f2-450e-bb3a-39756cde743b" containerName="registry-server" Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.274874 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fec3334-b6f2-450e-bb3a-39756cde743b" containerName="registry-server" Jan 09 00:20:24 crc kubenswrapper[4945]: E0109 00:20:24.274887 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fec3334-b6f2-450e-bb3a-39756cde743b" containerName="extract-content" Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.274896 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fec3334-b6f2-450e-bb3a-39756cde743b" containerName="extract-content" Jan 09 00:20:24 crc kubenswrapper[4945]: E0109 00:20:24.274912 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c28594e0-0bec-4493-9c72-09cdfbdf0fae" containerName="extract-utilities" Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.274919 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c28594e0-0bec-4493-9c72-09cdfbdf0fae" containerName="extract-utilities" Jan 09 00:20:24 crc kubenswrapper[4945]: E0109 00:20:24.274934 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="156a96fe-4eb0-4bb1-bae5-dd93b28f1256" containerName="extract-content" Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.274944 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="156a96fe-4eb0-4bb1-bae5-dd93b28f1256" containerName="extract-content" Jan 09 00:20:24 crc kubenswrapper[4945]: E0109 00:20:24.274960 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="156a96fe-4eb0-4bb1-bae5-dd93b28f1256" containerName="registry-server" Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.274967 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="156a96fe-4eb0-4bb1-bae5-dd93b28f1256" containerName="registry-server" Jan 09 00:20:24 crc kubenswrapper[4945]: E0109 00:20:24.274980 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fec3334-b6f2-450e-bb3a-39756cde743b" containerName="extract-utilities" Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.275005 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fec3334-b6f2-450e-bb3a-39756cde743b" containerName="extract-utilities" Jan 09 00:20:24 crc kubenswrapper[4945]: E0109 00:20:24.275022 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c28594e0-0bec-4493-9c72-09cdfbdf0fae" containerName="extract-content" Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.275031 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c28594e0-0bec-4493-9c72-09cdfbdf0fae" containerName="extract-content" Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.275197 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="c28594e0-0bec-4493-9c72-09cdfbdf0fae" containerName="registry-server" Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.275209 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fec3334-b6f2-450e-bb3a-39756cde743b" containerName="registry-server" Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.275221 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="156a96fe-4eb0-4bb1-bae5-dd93b28f1256" containerName="registry-server" Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.276621 4945 
Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.285327 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d2l27"]
Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.429472 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ce84867-ca14-4ef2-a44b-d6344eca1110-catalog-content\") pod \"community-operators-d2l27\" (UID: \"8ce84867-ca14-4ef2-a44b-d6344eca1110\") " pod="openshift-marketplace/community-operators-d2l27"
Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.429569 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsvtg\" (UniqueName: \"kubernetes.io/projected/8ce84867-ca14-4ef2-a44b-d6344eca1110-kube-api-access-qsvtg\") pod \"community-operators-d2l27\" (UID: \"8ce84867-ca14-4ef2-a44b-d6344eca1110\") " pod="openshift-marketplace/community-operators-d2l27"
Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.429615 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ce84867-ca14-4ef2-a44b-d6344eca1110-utilities\") pod \"community-operators-d2l27\" (UID: \"8ce84867-ca14-4ef2-a44b-d6344eca1110\") " pod="openshift-marketplace/community-operators-d2l27"
Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.530713 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ce84867-ca14-4ef2-a44b-d6344eca1110-utilities\") pod \"community-operators-d2l27\" (UID: \"8ce84867-ca14-4ef2-a44b-d6344eca1110\") " pod="openshift-marketplace/community-operators-d2l27"
Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.530783 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ce84867-ca14-4ef2-a44b-d6344eca1110-catalog-content\") pod \"community-operators-d2l27\" (UID: \"8ce84867-ca14-4ef2-a44b-d6344eca1110\") " pod="openshift-marketplace/community-operators-d2l27"
Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.530850 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsvtg\" (UniqueName: \"kubernetes.io/projected/8ce84867-ca14-4ef2-a44b-d6344eca1110-kube-api-access-qsvtg\") pod \"community-operators-d2l27\" (UID: \"8ce84867-ca14-4ef2-a44b-d6344eca1110\") " pod="openshift-marketplace/community-operators-d2l27"
Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.531325 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ce84867-ca14-4ef2-a44b-d6344eca1110-utilities\") pod \"community-operators-d2l27\" (UID: \"8ce84867-ca14-4ef2-a44b-d6344eca1110\") " pod="openshift-marketplace/community-operators-d2l27"
Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.531417 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ce84867-ca14-4ef2-a44b-d6344eca1110-catalog-content\") pod \"community-operators-d2l27\" (UID: \"8ce84867-ca14-4ef2-a44b-d6344eca1110\") " pod="openshift-marketplace/community-operators-d2l27"
Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.554167 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsvtg\" (UniqueName: \"kubernetes.io/projected/8ce84867-ca14-4ef2-a44b-d6344eca1110-kube-api-access-qsvtg\") pod \"community-operators-d2l27\" (UID: \"8ce84867-ca14-4ef2-a44b-d6344eca1110\") " pod="openshift-marketplace/community-operators-d2l27"
Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.603690 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d2l27"
Jan 09 00:20:24 crc kubenswrapper[4945]: I0109 00:20:24.901276 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d2l27"]
Jan 09 00:20:25 crc kubenswrapper[4945]: I0109 00:20:25.203323 4945 generic.go:334] "Generic (PLEG): container finished" podID="8ce84867-ca14-4ef2-a44b-d6344eca1110" containerID="db96f1c3c5fd0b502e97f45092607bafe8ab6ddea63d93b5650e4417bf2344a9" exitCode=0
Jan 09 00:20:25 crc kubenswrapper[4945]: I0109 00:20:25.203399 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2l27" event={"ID":"8ce84867-ca14-4ef2-a44b-d6344eca1110","Type":"ContainerDied","Data":"db96f1c3c5fd0b502e97f45092607bafe8ab6ddea63d93b5650e4417bf2344a9"}
Jan 09 00:20:25 crc kubenswrapper[4945]: I0109 00:20:25.203668 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2l27" event={"ID":"8ce84867-ca14-4ef2-a44b-d6344eca1110","Type":"ContainerStarted","Data":"ec5a646474ed3b5272633724848a3432c35127a49d85ea28bd889955dd2840d6"}
Jan 09 00:20:26 crc kubenswrapper[4945]: I0109 00:20:26.213088 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2l27" event={"ID":"8ce84867-ca14-4ef2-a44b-d6344eca1110","Type":"ContainerStarted","Data":"a6ad2f1c94a2aa06343fe012e0b196e5ab22b3a3afe2da5184210e344670524c"}
Jan 09 00:20:27 crc kubenswrapper[4945]: I0109 00:20:27.222216 4945 generic.go:334] "Generic (PLEG): container finished" podID="8ce84867-ca14-4ef2-a44b-d6344eca1110" containerID="a6ad2f1c94a2aa06343fe012e0b196e5ab22b3a3afe2da5184210e344670524c" exitCode=0
Jan 09 00:20:27 crc kubenswrapper[4945]: I0109 00:20:27.222267 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2l27" event={"ID":"8ce84867-ca14-4ef2-a44b-d6344eca1110","Type":"ContainerDied","Data":"a6ad2f1c94a2aa06343fe012e0b196e5ab22b3a3afe2da5184210e344670524c"}
Jan 09 00:20:28 crc kubenswrapper[4945]: I0109 00:20:28.002151 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab"
Jan 09 00:20:28 crc kubenswrapper[4945]: E0109 00:20:28.002394 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:20:28 crc kubenswrapper[4945]: I0109 00:20:28.233658 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2l27" event={"ID":"8ce84867-ca14-4ef2-a44b-d6344eca1110","Type":"ContainerStarted","Data":"91895038bb4e8af332e508c2067b33051809ea01e554e05076a88ccd81de2e5a"}
Jan 09 00:20:28 crc kubenswrapper[4945]: I0109 00:20:28.257278 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d2l27" podStartSLOduration=1.497290992 podStartE2EDuration="4.257251821s" podCreationTimestamp="2026-01-09 00:20:24 +0000 UTC" firstStartedPulling="2026-01-09 00:20:25.205447989 +0000 UTC m=+3895.516606935" lastFinishedPulling="2026-01-09 00:20:27.965408818 +0000 UTC m=+3898.276567764" observedRunningTime="2026-01-09 00:20:28.253016897 +0000 UTC m=+3898.564175843" watchObservedRunningTime="2026-01-09 00:20:28.257251821 +0000 UTC m=+3898.568410777"
Jan 09 00:20:34 crc kubenswrapper[4945]: I0109 00:20:34.604928 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d2l27"
Jan 09 00:20:34 crc kubenswrapper[4945]: I0109 00:20:34.605538 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d2l27"
Jan 09 00:20:34 crc kubenswrapper[4945]: I0109 00:20:34.648488 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d2l27"
Jan 09 00:20:35 crc kubenswrapper[4945]: I0109 00:20:35.320796 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d2l27"
Jan 09 00:20:36 crc kubenswrapper[4945]: I0109 00:20:36.060615 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d2l27"]
Jan 09 00:20:37 crc kubenswrapper[4945]: I0109 00:20:37.296474 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d2l27" podUID="8ce84867-ca14-4ef2-a44b-d6344eca1110" containerName="registry-server" containerID="cri-o://91895038bb4e8af332e508c2067b33051809ea01e554e05076a88ccd81de2e5a" gracePeriod=2
Jan 09 00:20:39 crc kubenswrapper[4945]: I0109 00:20:39.312579 4945 generic.go:334] "Generic (PLEG): container finished" podID="8ce84867-ca14-4ef2-a44b-d6344eca1110" containerID="91895038bb4e8af332e508c2067b33051809ea01e554e05076a88ccd81de2e5a" exitCode=0
Jan 09 00:20:39 crc kubenswrapper[4945]: I0109 00:20:39.312633 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2l27" event={"ID":"8ce84867-ca14-4ef2-a44b-d6344eca1110","Type":"ContainerDied","Data":"91895038bb4e8af332e508c2067b33051809ea01e554e05076a88ccd81de2e5a"}
Jan 09 00:20:39 crc kubenswrapper[4945]: I0109 00:20:39.625650 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d2l27"
Jan 09 00:20:39 crc kubenswrapper[4945]: I0109 00:20:39.654299 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsvtg\" (UniqueName: \"kubernetes.io/projected/8ce84867-ca14-4ef2-a44b-d6344eca1110-kube-api-access-qsvtg\") pod \"8ce84867-ca14-4ef2-a44b-d6344eca1110\" (UID: \"8ce84867-ca14-4ef2-a44b-d6344eca1110\") "
Jan 09 00:20:39 crc kubenswrapper[4945]: I0109 00:20:39.654579 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ce84867-ca14-4ef2-a44b-d6344eca1110-catalog-content\") pod \"8ce84867-ca14-4ef2-a44b-d6344eca1110\" (UID: \"8ce84867-ca14-4ef2-a44b-d6344eca1110\") "
Jan 09 00:20:39 crc kubenswrapper[4945]: I0109 00:20:39.654676 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ce84867-ca14-4ef2-a44b-d6344eca1110-utilities\") pod \"8ce84867-ca14-4ef2-a44b-d6344eca1110\" (UID: \"8ce84867-ca14-4ef2-a44b-d6344eca1110\") "
Jan 09 00:20:39 crc kubenswrapper[4945]: I0109 00:20:39.657549 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ce84867-ca14-4ef2-a44b-d6344eca1110-utilities" (OuterVolumeSpecName: "utilities") pod "8ce84867-ca14-4ef2-a44b-d6344eca1110" (UID: "8ce84867-ca14-4ef2-a44b-d6344eca1110"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 00:20:39 crc kubenswrapper[4945]: I0109 00:20:39.667365 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ce84867-ca14-4ef2-a44b-d6344eca1110-kube-api-access-qsvtg" (OuterVolumeSpecName: "kube-api-access-qsvtg") pod "8ce84867-ca14-4ef2-a44b-d6344eca1110" (UID: "8ce84867-ca14-4ef2-a44b-d6344eca1110"). InnerVolumeSpecName "kube-api-access-qsvtg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:20:39 crc kubenswrapper[4945]: I0109 00:20:39.717813 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ce84867-ca14-4ef2-a44b-d6344eca1110-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ce84867-ca14-4ef2-a44b-d6344eca1110" (UID: "8ce84867-ca14-4ef2-a44b-d6344eca1110"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 00:20:39 crc kubenswrapper[4945]: I0109 00:20:39.757403 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsvtg\" (UniqueName: \"kubernetes.io/projected/8ce84867-ca14-4ef2-a44b-d6344eca1110-kube-api-access-qsvtg\") on node \"crc\" DevicePath \"\""
Jan 09 00:20:39 crc kubenswrapper[4945]: I0109 00:20:39.757446 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ce84867-ca14-4ef2-a44b-d6344eca1110-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 00:20:39 crc kubenswrapper[4945]: I0109 00:20:39.757460 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ce84867-ca14-4ef2-a44b-d6344eca1110-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 00:20:40 crc kubenswrapper[4945]: I0109 00:20:40.320814 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d2l27" event={"ID":"8ce84867-ca14-4ef2-a44b-d6344eca1110","Type":"ContainerDied","Data":"ec5a646474ed3b5272633724848a3432c35127a49d85ea28bd889955dd2840d6"}
Jan 09 00:20:40 crc kubenswrapper[4945]: I0109 00:20:40.320877 4945 scope.go:117] "RemoveContainer" containerID="91895038bb4e8af332e508c2067b33051809ea01e554e05076a88ccd81de2e5a"
Jan 09 00:20:40 crc kubenswrapper[4945]: I0109 00:20:40.321038 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d2l27"
Jan 09 00:20:40 crc kubenswrapper[4945]: I0109 00:20:40.347447 4945 scope.go:117] "RemoveContainer" containerID="a6ad2f1c94a2aa06343fe012e0b196e5ab22b3a3afe2da5184210e344670524c"
Jan 09 00:20:40 crc kubenswrapper[4945]: I0109 00:20:40.356370 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d2l27"]
Jan 09 00:20:40 crc kubenswrapper[4945]: I0109 00:20:40.372982 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d2l27"]
Jan 09 00:20:40 crc kubenswrapper[4945]: I0109 00:20:40.379749 4945 scope.go:117] "RemoveContainer" containerID="db96f1c3c5fd0b502e97f45092607bafe8ab6ddea63d93b5650e4417bf2344a9"
Jan 09 00:20:42 crc kubenswrapper[4945]: I0109 00:20:42.009198 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ce84867-ca14-4ef2-a44b-d6344eca1110" path="/var/lib/kubelet/pods/8ce84867-ca14-4ef2-a44b-d6344eca1110/volumes"
Jan 09 00:20:43 crc kubenswrapper[4945]: I0109 00:20:43.000787 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab"
Jan 09 00:20:43 crc kubenswrapper[4945]: E0109 00:20:43.001424 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:20:57 crc kubenswrapper[4945]: I0109 00:20:57.000732 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab"
Jan 09 00:20:57 crc kubenswrapper[4945]: E0109 00:20:57.001538 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:21:12 crc kubenswrapper[4945]: I0109 00:21:11.999845 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab"
Jan 09 00:21:12 crc kubenswrapper[4945]: E0109 00:21:12.000600 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:21:27 crc kubenswrapper[4945]: I0109 00:21:27.000301 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab"
Jan 09 00:21:28 crc kubenswrapper[4945]: I0109 00:21:28.667664 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"a3e604c43721efbfde6d6500e129996d632d004cd6f757047759135a5471cc2f"}
Jan 09 00:23:43 crc kubenswrapper[4945]: I0109 00:23:43.578769 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 00:23:43 crc kubenswrapper[4945]: I0109 00:23:43.579971 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 00:24:13 crc kubenswrapper[4945]: I0109 00:24:13.578423 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 00:24:13 crc kubenswrapper[4945]: I0109 00:24:13.580137 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 00:24:43 crc kubenswrapper[4945]: I0109 00:24:43.578407 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 00:24:43 crc kubenswrapper[4945]: I0109 00:24:43.579463 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:24:43 crc kubenswrapper[4945]: I0109 00:24:43.579538 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 09 00:24:43 crc kubenswrapper[4945]: I0109 00:24:43.580655 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a3e604c43721efbfde6d6500e129996d632d004cd6f757047759135a5471cc2f"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 00:24:43 crc kubenswrapper[4945]: I0109 00:24:43.580736 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://a3e604c43721efbfde6d6500e129996d632d004cd6f757047759135a5471cc2f" gracePeriod=600 Jan 09 00:24:43 crc kubenswrapper[4945]: I0109 00:24:43.780288 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="a3e604c43721efbfde6d6500e129996d632d004cd6f757047759135a5471cc2f" exitCode=0 Jan 09 00:24:43 crc kubenswrapper[4945]: I0109 00:24:43.780370 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"a3e604c43721efbfde6d6500e129996d632d004cd6f757047759135a5471cc2f"} Jan 09 00:24:43 crc kubenswrapper[4945]: I0109 00:24:43.780696 4945 scope.go:117] "RemoveContainer" containerID="3d16d4e8e44c96623e59cfafb847565cc07a65cb4b4e259629d9dcafa57501ab" Jan 09 00:24:44 crc kubenswrapper[4945]: I0109 00:24:44.790429 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490"} Jan 09 00:26:43 crc kubenswrapper[4945]: I0109 00:26:43.578199 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:26:43 crc kubenswrapper[4945]: I0109 00:26:43.578744 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:27:13 crc kubenswrapper[4945]: I0109 00:27:13.578450 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:27:13 crc kubenswrapper[4945]: I0109 00:27:13.579098 4945 prober.go:107] "Probe failed" 
Jan 09 00:27:43 crc kubenswrapper[4945]: I0109 00:27:43.578290 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 00:27:43 crc kubenswrapper[4945]: I0109 00:27:43.578851 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 00:27:43 crc kubenswrapper[4945]: I0109 00:27:43.578897 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95"
Jan 09 00:27:43 crc kubenswrapper[4945]: I0109 00:27:43.579569 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 00:27:43 crc kubenswrapper[4945]: I0109 00:27:43.579625 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490" gracePeriod=600
Jan 09 00:27:43 crc kubenswrapper[4945]: E0109 00:27:43.711105 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:27:44 crc kubenswrapper[4945]: I0109 00:27:44.038469 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490" exitCode=0
Jan 09 00:27:44 crc kubenswrapper[4945]: I0109 00:27:44.038513 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490"}
Jan 09 00:27:44 crc kubenswrapper[4945]: I0109 00:27:44.038548 4945 scope.go:117] "RemoveContainer" containerID="a3e604c43721efbfde6d6500e129996d632d004cd6f757047759135a5471cc2f"
Jan 09 00:27:44 crc kubenswrapper[4945]: I0109 00:27:44.039203 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490"
Jan 09 00:27:44 crc kubenswrapper[4945]: E0109 00:27:44.039413 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:27:58 crc kubenswrapper[4945]: I0109 00:27:57.999931 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490"
Jan 09 00:27:58 crc kubenswrapper[4945]: E0109 00:27:58.001228 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:28:09 crc kubenswrapper[4945]: I0109 00:28:09.000439 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490"
Jan 09 00:28:09 crc kubenswrapper[4945]: E0109 00:28:09.001147 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:28:23 crc kubenswrapper[4945]: I0109 00:28:23.000058 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490"
Jan 09 00:28:23 crc kubenswrapper[4945]: E0109 00:28:23.000794 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:28:36 crc kubenswrapper[4945]: I0109 00:28:36.000519 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490"
Jan 09 00:28:36 crc kubenswrapper[4945]: E0109 00:28:36.001400 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:28:47 crc kubenswrapper[4945]: I0109 00:28:47.000368 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490"
Jan 09 00:28:47 crc kubenswrapper[4945]: E0109 00:28:47.000862 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:29:00 crc kubenswrapper[4945]: I0109 00:29:00.003659 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490" Jan 09 00:29:00 crc kubenswrapper[4945]: E0109 00:29:00.004464 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:29:11 crc kubenswrapper[4945]: I0109 00:29:11.000921 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490" Jan 09 00:29:11 crc kubenswrapper[4945]: E0109 00:29:11.001749 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:29:25 crc kubenswrapper[4945]: I0109 00:29:25.000762 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490" Jan 09 00:29:25 crc kubenswrapper[4945]: E0109 00:29:25.002016 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:29:38 crc kubenswrapper[4945]: I0109 00:29:38.000791 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490" Jan 09 00:29:38 crc kubenswrapper[4945]: E0109 00:29:38.001606 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:29:49 crc kubenswrapper[4945]: I0109 00:29:49.000797 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490" Jan 09 00:29:49 crc kubenswrapper[4945]: E0109 00:29:49.001588 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.002006 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fv5rk"] Jan 09 00:29:53 crc kubenswrapper[4945]: E0109 00:29:53.002930 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce84867-ca14-4ef2-a44b-d6344eca1110" containerName="registry-server" Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.002945 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce84867-ca14-4ef2-a44b-d6344eca1110" containerName="registry-server" Jan 09 00:29:53 crc kubenswrapper[4945]: E0109 00:29:53.002955 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce84867-ca14-4ef2-a44b-d6344eca1110" containerName="extract-content" Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.002962 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce84867-ca14-4ef2-a44b-d6344eca1110" containerName="extract-content" Jan 09 00:29:53 crc kubenswrapper[4945]: E0109 00:29:53.002969 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce84867-ca14-4ef2-a44b-d6344eca1110" containerName="extract-utilities" Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.002975 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce84867-ca14-4ef2-a44b-d6344eca1110" containerName="extract-utilities" Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.003131 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ce84867-ca14-4ef2-a44b-d6344eca1110" containerName="registry-server" Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.007515 4945 util.go:30] "No sandbox for pod can be found. 
Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.018783 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fv5rk"]
Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.132298 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f78n\" (UniqueName: \"kubernetes.io/projected/d35cc981-ca5a-44a4-9561-03264b3548df-kube-api-access-8f78n\") pod \"redhat-marketplace-fv5rk\" (UID: \"d35cc981-ca5a-44a4-9561-03264b3548df\") " pod="openshift-marketplace/redhat-marketplace-fv5rk"
Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.132373 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d35cc981-ca5a-44a4-9561-03264b3548df-catalog-content\") pod \"redhat-marketplace-fv5rk\" (UID: \"d35cc981-ca5a-44a4-9561-03264b3548df\") " pod="openshift-marketplace/redhat-marketplace-fv5rk"
Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.132467 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d35cc981-ca5a-44a4-9561-03264b3548df-utilities\") pod \"redhat-marketplace-fv5rk\" (UID: \"d35cc981-ca5a-44a4-9561-03264b3548df\") " pod="openshift-marketplace/redhat-marketplace-fv5rk"
Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.234228 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f78n\" (UniqueName: \"kubernetes.io/projected/d35cc981-ca5a-44a4-9561-03264b3548df-kube-api-access-8f78n\") pod \"redhat-marketplace-fv5rk\" (UID: \"d35cc981-ca5a-44a4-9561-03264b3548df\") " pod="openshift-marketplace/redhat-marketplace-fv5rk"
Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.234281 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d35cc981-ca5a-44a4-9561-03264b3548df-catalog-content\") pod \"redhat-marketplace-fv5rk\" (UID: \"d35cc981-ca5a-44a4-9561-03264b3548df\") " pod="openshift-marketplace/redhat-marketplace-fv5rk"
Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.234327 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d35cc981-ca5a-44a4-9561-03264b3548df-utilities\") pod \"redhat-marketplace-fv5rk\" (UID: \"d35cc981-ca5a-44a4-9561-03264b3548df\") " pod="openshift-marketplace/redhat-marketplace-fv5rk"
Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.234808 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d35cc981-ca5a-44a4-9561-03264b3548df-utilities\") pod \"redhat-marketplace-fv5rk\" (UID: \"d35cc981-ca5a-44a4-9561-03264b3548df\") " pod="openshift-marketplace/redhat-marketplace-fv5rk"
Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.234891 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d35cc981-ca5a-44a4-9561-03264b3548df-catalog-content\") pod \"redhat-marketplace-fv5rk\" (UID: \"d35cc981-ca5a-44a4-9561-03264b3548df\") " pod="openshift-marketplace/redhat-marketplace-fv5rk"
Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.257793 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f78n\" (UniqueName: \"kubernetes.io/projected/d35cc981-ca5a-44a4-9561-03264b3548df-kube-api-access-8f78n\") pod \"redhat-marketplace-fv5rk\" (UID: \"d35cc981-ca5a-44a4-9561-03264b3548df\") " pod="openshift-marketplace/redhat-marketplace-fv5rk"
Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.342483 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fv5rk"
Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.771741 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fv5rk"]
Jan 09 00:29:53 crc kubenswrapper[4945]: I0109 00:29:53.907375 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fv5rk" event={"ID":"d35cc981-ca5a-44a4-9561-03264b3548df","Type":"ContainerStarted","Data":"8335d62b86c31dff4abf972dac6f3c5de99976ad9be97429e2b5198da4ed784c"}
Jan 09 00:29:54 crc kubenswrapper[4945]: I0109 00:29:54.914332 4945 generic.go:334] "Generic (PLEG): container finished" podID="d35cc981-ca5a-44a4-9561-03264b3548df" containerID="612bba21ecd204013f62e81a14d57eeb165d1ac6123a4b5e100a21312d5e92e5" exitCode=0
Jan 09 00:29:54 crc kubenswrapper[4945]: I0109 00:29:54.914398 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fv5rk" event={"ID":"d35cc981-ca5a-44a4-9561-03264b3548df","Type":"ContainerDied","Data":"612bba21ecd204013f62e81a14d57eeb165d1ac6123a4b5e100a21312d5e92e5"}
Jan 09 00:29:54 crc kubenswrapper[4945]: I0109 00:29:54.916144 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 09 00:29:57 crc kubenswrapper[4945]: I0109 00:29:57.944674 4945 generic.go:334] "Generic (PLEG): container finished" podID="d35cc981-ca5a-44a4-9561-03264b3548df" containerID="303f3c67bd8c91cff40c21cfe9260ce35e679ee2d73f9ab14e6a1a94f11da1f0" exitCode=0
Jan 09 00:29:57 crc kubenswrapper[4945]: I0109 00:29:57.944929 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fv5rk" event={"ID":"d35cc981-ca5a-44a4-9561-03264b3548df","Type":"ContainerDied","Data":"303f3c67bd8c91cff40c21cfe9260ce35e679ee2d73f9ab14e6a1a94f11da1f0"}
Jan 09 00:29:58 crc kubenswrapper[4945]: I0109 00:29:58.954811 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fv5rk" event={"ID":"d35cc981-ca5a-44a4-9561-03264b3548df","Type":"ContainerStarted","Data":"a925a1368be8e695dda157f166519dca66ea8e5a4c4c7db817451251acba1907"}
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.005226 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490"
Jan 09 00:30:00 crc kubenswrapper[4945]: E0109 00:30:00.006248 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.170911 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fv5rk" podStartSLOduration=4.604115608 podStartE2EDuration="8.170888479s" podCreationTimestamp="2026-01-09 00:29:52 +0000 UTC" firstStartedPulling="2026-01-09 00:29:54.915856262 +0000 UTC m=+4465.227015208" lastFinishedPulling="2026-01-09 00:29:58.482629133 +0000 UTC m=+4468.793788079" observedRunningTime="2026-01-09 00:29:58.983596096 +0000 UTC m=+4469.294755042" watchObservedRunningTime="2026-01-09 00:30:00.170888479 +0000 UTC m=+4470.482047425"
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.173765 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w"]
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.174710 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w"
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.176439 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.180383 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.185224 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w"]
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.333563 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-secret-volume\") pod \"collect-profiles-29465310-68s9w\" (UID: \"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w"
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.334039 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl79m\" (UniqueName: \"kubernetes.io/projected/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-kube-api-access-rl79m\") pod \"collect-profiles-29465310-68s9w\" (UID: \"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w"
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.334181 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-config-volume\") pod \"collect-profiles-29465310-68s9w\" (UID: \"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w"
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.435284 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-config-volume\") pod \"collect-profiles-29465310-68s9w\" (UID: \"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w"
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.435390 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-secret-volume\") pod \"collect-profiles-29465310-68s9w\" (UID: \"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w"
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.435462 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl79m\" (UniqueName: \"kubernetes.io/projected/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-kube-api-access-rl79m\") pod \"collect-profiles-29465310-68s9w\" (UID: \"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w"
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.436772 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-config-volume\") pod \"collect-profiles-29465310-68s9w\" (UID: \"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w"
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.442049 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-secret-volume\") pod \"collect-profiles-29465310-68s9w\" (UID: \"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w"
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.452042 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl79m\" (UniqueName: \"kubernetes.io/projected/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-kube-api-access-rl79m\") pod \"collect-profiles-29465310-68s9w\" (UID: \"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w"
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.501542 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w"
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.902771 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w"]
Jan 09 00:30:00 crc kubenswrapper[4945]: W0109 00:30:00.908244 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d9dc44b_8369_43f6_8c5e_56e2a496ef1a.slice/crio-5b56f1a7db57faa79992ecd2f61eb3b7c80603751214943f408103ecea864933 WatchSource:0}: Error finding container 5b56f1a7db57faa79992ecd2f61eb3b7c80603751214943f408103ecea864933: Status 404 returned error can't find the container with id 5b56f1a7db57faa79992ecd2f61eb3b7c80603751214943f408103ecea864933
Jan 09 00:30:00 crc kubenswrapper[4945]: I0109 00:30:00.969629 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w" event={"ID":"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a","Type":"ContainerStarted","Data":"5b56f1a7db57faa79992ecd2f61eb3b7c80603751214943f408103ecea864933"}
Jan 09 00:30:01 crc kubenswrapper[4945]: I0109 00:30:01.977396 4945 generic.go:334] "Generic (PLEG): container finished" podID="2d9dc44b-8369-43f6-8c5e-56e2a496ef1a" containerID="d8117ca988674b0caec3962135db77f3a796ead121139b92264c2af3a0e9010f" exitCode=0
Jan 09 00:30:01 crc kubenswrapper[4945]: I0109 00:30:01.977927 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w" event={"ID":"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a","Type":"ContainerDied","Data":"d8117ca988674b0caec3962135db77f3a796ead121139b92264c2af3a0e9010f"}
Jan 09 00:30:03 crc kubenswrapper[4945]: I0109 00:30:03.238918 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w"
Jan 09 00:30:03 crc kubenswrapper[4945]: I0109 00:30:03.343075 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fv5rk"
Jan 09 00:30:03 crc kubenswrapper[4945]: I0109 00:30:03.343141 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fv5rk"
Jan 09 00:30:03 crc kubenswrapper[4945]: I0109 00:30:03.372469 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-secret-volume\") pod \"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a\" (UID: \"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a\") "
Jan 09 00:30:03 crc kubenswrapper[4945]: I0109 00:30:03.372800 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-config-volume\") pod \"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a\" (UID: \"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a\") "
Jan 09 00:30:03 crc kubenswrapper[4945]: I0109 00:30:03.372890 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl79m\" (UniqueName: \"kubernetes.io/projected/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-kube-api-access-rl79m\") pod \"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a\" (UID: \"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a\") "
Jan 09 00:30:03 crc kubenswrapper[4945]: I0109 00:30:03.373536 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-config-volume" (OuterVolumeSpecName: "config-volume") pod "2d9dc44b-8369-43f6-8c5e-56e2a496ef1a" (UID: "2d9dc44b-8369-43f6-8c5e-56e2a496ef1a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:30:03 crc kubenswrapper[4945]: I0109 00:30:03.377735 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2d9dc44b-8369-43f6-8c5e-56e2a496ef1a" (UID: "2d9dc44b-8369-43f6-8c5e-56e2a496ef1a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 00:30:03 crc kubenswrapper[4945]: I0109 00:30:03.378092 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-kube-api-access-rl79m" (OuterVolumeSpecName: "kube-api-access-rl79m") pod "2d9dc44b-8369-43f6-8c5e-56e2a496ef1a" (UID: "2d9dc44b-8369-43f6-8c5e-56e2a496ef1a"). InnerVolumeSpecName "kube-api-access-rl79m". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:30:03 crc kubenswrapper[4945]: I0109 00:30:03.385493 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fv5rk" Jan 09 00:30:03 crc kubenswrapper[4945]: I0109 00:30:03.474708 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl79m\" (UniqueName: \"kubernetes.io/projected/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-kube-api-access-rl79m\") on node \"crc\" DevicePath \"\"" Jan 09 00:30:03 crc kubenswrapper[4945]: I0109 00:30:03.474755 4945 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 00:30:03 crc kubenswrapper[4945]: I0109 00:30:03.474769 4945 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 00:30:03 crc kubenswrapper[4945]: I0109 00:30:03.993308 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w" Jan 09 00:30:03 crc kubenswrapper[4945]: I0109 00:30:03.993273 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w" event={"ID":"2d9dc44b-8369-43f6-8c5e-56e2a496ef1a","Type":"ContainerDied","Data":"5b56f1a7db57faa79992ecd2f61eb3b7c80603751214943f408103ecea864933"} Jan 09 00:30:03 crc kubenswrapper[4945]: I0109 00:30:03.993363 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b56f1a7db57faa79992ecd2f61eb3b7c80603751214943f408103ecea864933" Jan 09 00:30:04 crc kubenswrapper[4945]: I0109 00:30:04.032549 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fv5rk" Jan 09 00:30:04 crc kubenswrapper[4945]: I0109 00:30:04.083437 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fv5rk"] Jan 09 00:30:04 crc kubenswrapper[4945]: I0109 00:30:04.311985 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt"] Jan 09 00:30:04 crc kubenswrapper[4945]: I0109 00:30:04.319799 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465265-6qmnt"] Jan 09 00:30:06 crc kubenswrapper[4945]: I0109 00:30:06.005908 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fv5rk" podUID="d35cc981-ca5a-44a4-9561-03264b3548df" containerName="registry-server" containerID="cri-o://a925a1368be8e695dda157f166519dca66ea8e5a4c4c7db817451251acba1907" gracePeriod=2 Jan 09 00:30:06 crc kubenswrapper[4945]: I0109 00:30:06.011037 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53a458a2-0117-4cc1-ac1d-68ba7e11e19d" path="/var/lib/kubelet/pods/53a458a2-0117-4cc1-ac1d-68ba7e11e19d/volumes" Jan 09 00:30:06 crc kubenswrapper[4945]: I0109 00:30:06.920757 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fv5rk" Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.014185 4945 generic.go:334] "Generic (PLEG): container finished" podID="d35cc981-ca5a-44a4-9561-03264b3548df" containerID="a925a1368be8e695dda157f166519dca66ea8e5a4c4c7db817451251acba1907" exitCode=0 Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.014259 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fv5rk" event={"ID":"d35cc981-ca5a-44a4-9561-03264b3548df","Type":"ContainerDied","Data":"a925a1368be8e695dda157f166519dca66ea8e5a4c4c7db817451251acba1907"} Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.014631 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fv5rk" event={"ID":"d35cc981-ca5a-44a4-9561-03264b3548df","Type":"ContainerDied","Data":"8335d62b86c31dff4abf972dac6f3c5de99976ad9be97429e2b5198da4ed784c"} Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.014660 4945 scope.go:117] "RemoveContainer" containerID="a925a1368be8e695dda157f166519dca66ea8e5a4c4c7db817451251acba1907" Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.014310 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fv5rk" Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.035716 4945 scope.go:117] "RemoveContainer" containerID="303f3c67bd8c91cff40c21cfe9260ce35e679ee2d73f9ab14e6a1a94f11da1f0" Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.040697 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d35cc981-ca5a-44a4-9561-03264b3548df-utilities\") pod \"d35cc981-ca5a-44a4-9561-03264b3548df\" (UID: \"d35cc981-ca5a-44a4-9561-03264b3548df\") " Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.040790 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d35cc981-ca5a-44a4-9561-03264b3548df-catalog-content\") pod \"d35cc981-ca5a-44a4-9561-03264b3548df\" (UID: \"d35cc981-ca5a-44a4-9561-03264b3548df\") " Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.040874 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f78n\" (UniqueName: \"kubernetes.io/projected/d35cc981-ca5a-44a4-9561-03264b3548df-kube-api-access-8f78n\") pod \"d35cc981-ca5a-44a4-9561-03264b3548df\" (UID: \"d35cc981-ca5a-44a4-9561-03264b3548df\") " Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.041949 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d35cc981-ca5a-44a4-9561-03264b3548df-utilities" (OuterVolumeSpecName: "utilities") pod "d35cc981-ca5a-44a4-9561-03264b3548df" (UID: "d35cc981-ca5a-44a4-9561-03264b3548df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.048055 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d35cc981-ca5a-44a4-9561-03264b3548df-kube-api-access-8f78n" (OuterVolumeSpecName: "kube-api-access-8f78n") pod "d35cc981-ca5a-44a4-9561-03264b3548df" (UID: "d35cc981-ca5a-44a4-9561-03264b3548df"). InnerVolumeSpecName "kube-api-access-8f78n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.057387 4945 scope.go:117] "RemoveContainer" containerID="612bba21ecd204013f62e81a14d57eeb165d1ac6123a4b5e100a21312d5e92e5" Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.067087 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d35cc981-ca5a-44a4-9561-03264b3548df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d35cc981-ca5a-44a4-9561-03264b3548df" (UID: "d35cc981-ca5a-44a4-9561-03264b3548df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.142354 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f78n\" (UniqueName: \"kubernetes.io/projected/d35cc981-ca5a-44a4-9561-03264b3548df-kube-api-access-8f78n\") on node \"crc\" DevicePath \"\"" Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.142399 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d35cc981-ca5a-44a4-9561-03264b3548df-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.142414 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d35cc981-ca5a-44a4-9561-03264b3548df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.531025 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fv5rk"] Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.536934 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fv5rk"] Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.575325 4945 scope.go:117] "RemoveContainer" containerID="a925a1368be8e695dda157f166519dca66ea8e5a4c4c7db817451251acba1907" Jan 09 00:30:07 crc kubenswrapper[4945]: E0109 00:30:07.576046 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a925a1368be8e695dda157f166519dca66ea8e5a4c4c7db817451251acba1907\": container with ID starting with a925a1368be8e695dda157f166519dca66ea8e5a4c4c7db817451251acba1907 not found: ID does not exist" containerID="a925a1368be8e695dda157f166519dca66ea8e5a4c4c7db817451251acba1907" Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.576093 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a925a1368be8e695dda157f166519dca66ea8e5a4c4c7db817451251acba1907"} err="failed to get container status \"a925a1368be8e695dda157f166519dca66ea8e5a4c4c7db817451251acba1907\": rpc error: code = NotFound desc = could not find container \"a925a1368be8e695dda157f166519dca66ea8e5a4c4c7db817451251acba1907\": container with ID starting with a925a1368be8e695dda157f166519dca66ea8e5a4c4c7db817451251acba1907 not found: ID does not exist" Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.576123 4945 scope.go:117] "RemoveContainer" containerID="303f3c67bd8c91cff40c21cfe9260ce35e679ee2d73f9ab14e6a1a94f11da1f0" Jan 09 00:30:07 crc kubenswrapper[4945]: E0109 00:30:07.576482 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"303f3c67bd8c91cff40c21cfe9260ce35e679ee2d73f9ab14e6a1a94f11da1f0\": container with ID starting with 
303f3c67bd8c91cff40c21cfe9260ce35e679ee2d73f9ab14e6a1a94f11da1f0 not found: ID does not exist" containerID="303f3c67bd8c91cff40c21cfe9260ce35e679ee2d73f9ab14e6a1a94f11da1f0" Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.576543 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"303f3c67bd8c91cff40c21cfe9260ce35e679ee2d73f9ab14e6a1a94f11da1f0"} err="failed to get container status \"303f3c67bd8c91cff40c21cfe9260ce35e679ee2d73f9ab14e6a1a94f11da1f0\": rpc error: code = NotFound desc = could not find container \"303f3c67bd8c91cff40c21cfe9260ce35e679ee2d73f9ab14e6a1a94f11da1f0\": container with ID starting with 303f3c67bd8c91cff40c21cfe9260ce35e679ee2d73f9ab14e6a1a94f11da1f0 not found: ID does not exist" Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.576561 4945 scope.go:117] "RemoveContainer" containerID="612bba21ecd204013f62e81a14d57eeb165d1ac6123a4b5e100a21312d5e92e5" Jan 09 00:30:07 crc kubenswrapper[4945]: E0109 00:30:07.576915 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"612bba21ecd204013f62e81a14d57eeb165d1ac6123a4b5e100a21312d5e92e5\": container with ID starting with 612bba21ecd204013f62e81a14d57eeb165d1ac6123a4b5e100a21312d5e92e5 not found: ID does not exist" containerID="612bba21ecd204013f62e81a14d57eeb165d1ac6123a4b5e100a21312d5e92e5" Jan 09 00:30:07 crc kubenswrapper[4945]: I0109 00:30:07.576938 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"612bba21ecd204013f62e81a14d57eeb165d1ac6123a4b5e100a21312d5e92e5"} err="failed to get container status \"612bba21ecd204013f62e81a14d57eeb165d1ac6123a4b5e100a21312d5e92e5\": rpc error: code = NotFound desc = could not find container \"612bba21ecd204013f62e81a14d57eeb165d1ac6123a4b5e100a21312d5e92e5\": container with ID starting with 612bba21ecd204013f62e81a14d57eeb165d1ac6123a4b5e100a21312d5e92e5 not found: ID does not exist" Jan 09 00:30:08 crc kubenswrapper[4945]: I0109 00:30:08.009951 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d35cc981-ca5a-44a4-9561-03264b3548df" path="/var/lib/kubelet/pods/d35cc981-ca5a-44a4-9561-03264b3548df/volumes" Jan 09 00:30:14 crc kubenswrapper[4945]: I0109 00:30:14.000266 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490" Jan 09 00:30:14 crc kubenswrapper[4945]: E0109 00:30:14.000815 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:30:28 crc kubenswrapper[4945]: I0109 00:30:28.001059 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490" Jan 09 00:30:28 crc kubenswrapper[4945]: E0109 00:30:28.001929 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" 
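The `RemoveContainer` → `NotFound` → `DeleteContainer returned error` triplets above look alarming but appear to be a benign idempotent-delete race: the PLEG `ContainerDied` events and the earlier removal precede them, so the follow-up status lookup simply finds nothing left to delete. A rough way to separate these from genuinely failed deletions is to flag only NotFound errors whose container ID never appeared in a `RemoveContainer` record; a heuristic sketch of my own, not kubelet logic:

```python
import re

ID = re.compile(r'containerID="(?:cri-o://)?([0-9a-f]{64})"')
REMOVED = re.compile(r'"RemoveContainer" containerID="([0-9a-f]{64})"')
NOTFOUND = re.compile(r'"ContainerStatus from runtime service failed".*NotFound')

def suspicious_notfounds(lines):
    """Return NotFound container IDs that were never the target of a RemoveContainer."""
    removed, pending = set(), []
    for line in lines:
        m = REMOVED.search(line)
        if m:
            removed.add(m.group(1))
        elif NOTFOUND.search(line):
            pending.extend(ID.findall(line))
    return [i for i in pending if i not in removed]
```

Run over the records above, this returns an empty list: every NotFound ID had already been removed, which supports reading them as noise rather than failures.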
podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:30:35 crc kubenswrapper[4945]: I0109 00:30:35.423978 4945 scope.go:117] "RemoveContainer" containerID="e404b5a4fae8d88fbac084072f791afe4130b3b03fd5b8f30fffbf1a4d31c699" Jan 09 00:30:42 crc kubenswrapper[4945]: I0109 00:30:42.000647 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490" Jan 09 00:30:42 crc kubenswrapper[4945]: E0109 00:30:42.001414 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:30:53 crc kubenswrapper[4945]: I0109 00:30:53.001013 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490" Jan 09 00:30:53 crc kubenswrapper[4945]: E0109 00:30:53.001837 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:31:04 crc kubenswrapper[4945]: I0109 00:31:04.000625 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490" Jan 09 00:31:04 crc kubenswrapper[4945]: E0109 00:31:04.001394 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:31:18 crc kubenswrapper[4945]: I0109 00:31:18.000395 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490" Jan 09 00:31:18 crc kubenswrapper[4945]: E0109 00:31:18.001262 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:31:30 crc kubenswrapper[4945]: I0109 00:31:30.005053 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490" Jan 09 00:31:30 crc kubenswrapper[4945]: E0109 00:31:30.005923 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" 
podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:31:37 crc kubenswrapper[4945]: I0109 00:31:37.850606 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wksdf"] Jan 09 00:31:37 crc kubenswrapper[4945]: E0109 00:31:37.851525 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d9dc44b-8369-43f6-8c5e-56e2a496ef1a" containerName="collect-profiles" Jan 09 00:31:37 crc kubenswrapper[4945]: I0109 00:31:37.851539 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d9dc44b-8369-43f6-8c5e-56e2a496ef1a" containerName="collect-profiles" Jan 09 00:31:37 crc kubenswrapper[4945]: E0109 00:31:37.851554 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35cc981-ca5a-44a4-9561-03264b3548df" containerName="registry-server" Jan 09 00:31:37 crc kubenswrapper[4945]: I0109 00:31:37.851560 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35cc981-ca5a-44a4-9561-03264b3548df" containerName="registry-server" Jan 09 00:31:37 crc kubenswrapper[4945]: E0109 00:31:37.851580 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35cc981-ca5a-44a4-9561-03264b3548df" containerName="extract-content" Jan 09 00:31:37 crc kubenswrapper[4945]: I0109 00:31:37.851588 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35cc981-ca5a-44a4-9561-03264b3548df" containerName="extract-content" Jan 09 00:31:37 crc kubenswrapper[4945]: E0109 00:31:37.851604 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35cc981-ca5a-44a4-9561-03264b3548df" containerName="extract-utilities" Jan 09 00:31:37 crc kubenswrapper[4945]: I0109 00:31:37.851610 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35cc981-ca5a-44a4-9561-03264b3548df" containerName="extract-utilities" Jan 09 00:31:37 crc kubenswrapper[4945]: I0109 00:31:37.851736 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="d35cc981-ca5a-44a4-9561-03264b3548df" containerName="registry-server" Jan 09 00:31:37 crc kubenswrapper[4945]: I0109 00:31:37.851755 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d9dc44b-8369-43f6-8c5e-56e2a496ef1a" containerName="collect-profiles" Jan 09 00:31:37 crc kubenswrapper[4945]: I0109 00:31:37.852756 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wksdf" Jan 09 00:31:37 crc kubenswrapper[4945]: I0109 00:31:37.867253 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wksdf"] Jan 09 00:31:37 crc kubenswrapper[4945]: I0109 00:31:37.945654 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16576ed6-d356-4395-912f-3f8474c3f514-catalog-content\") pod \"community-operators-wksdf\" (UID: \"16576ed6-d356-4395-912f-3f8474c3f514\") " pod="openshift-marketplace/community-operators-wksdf" Jan 09 00:31:37 crc kubenswrapper[4945]: I0109 00:31:37.945725 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16576ed6-d356-4395-912f-3f8474c3f514-utilities\") pod \"community-operators-wksdf\" (UID: \"16576ed6-d356-4395-912f-3f8474c3f514\") " pod="openshift-marketplace/community-operators-wksdf" Jan 09 00:31:37 crc kubenswrapper[4945]: I0109 00:31:37.945754 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w69ct\" (UniqueName: \"kubernetes.io/projected/16576ed6-d356-4395-912f-3f8474c3f514-kube-api-access-w69ct\") pod \"community-operators-wksdf\" (UID: \"16576ed6-d356-4395-912f-3f8474c3f514\") " pod="openshift-marketplace/community-operators-wksdf" Jan 09 00:31:38 crc kubenswrapper[4945]: I0109 00:31:38.046846 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16576ed6-d356-4395-912f-3f8474c3f514-utilities\") pod \"community-operators-wksdf\" (UID: \"16576ed6-d356-4395-912f-3f8474c3f514\") " pod="openshift-marketplace/community-operators-wksdf" Jan 09 00:31:38 crc kubenswrapper[4945]: I0109 00:31:38.046909 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w69ct\" (UniqueName: \"kubernetes.io/projected/16576ed6-d356-4395-912f-3f8474c3f514-kube-api-access-w69ct\") pod \"community-operators-wksdf\" (UID: \"16576ed6-d356-4395-912f-3f8474c3f514\") " pod="openshift-marketplace/community-operators-wksdf" Jan 09 00:31:38 crc kubenswrapper[4945]: I0109 00:31:38.047129 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16576ed6-d356-4395-912f-3f8474c3f514-catalog-content\") pod \"community-operators-wksdf\" (UID: \"16576ed6-d356-4395-912f-3f8474c3f514\") " pod="openshift-marketplace/community-operators-wksdf" Jan 09 00:31:38 crc kubenswrapper[4945]: I0109 00:31:38.047549 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16576ed6-d356-4395-912f-3f8474c3f514-utilities\") pod \"community-operators-wksdf\" (UID: \"16576ed6-d356-4395-912f-3f8474c3f514\") " pod="openshift-marketplace/community-operators-wksdf" Jan 09 00:31:38 crc kubenswrapper[4945]: I0109 00:31:38.048501 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16576ed6-d356-4395-912f-3f8474c3f514-catalog-content\") pod \"community-operators-wksdf\" (UID: \"16576ed6-d356-4395-912f-3f8474c3f514\") " pod="openshift-marketplace/community-operators-wksdf" Jan 09 00:31:38 crc kubenswrapper[4945]: I0109 00:31:38.070551 4945 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w69ct\" (UniqueName: \"kubernetes.io/projected/16576ed6-d356-4395-912f-3f8474c3f514-kube-api-access-w69ct\") pod \"community-operators-wksdf\" (UID: \"16576ed6-d356-4395-912f-3f8474c3f514\") " pod="openshift-marketplace/community-operators-wksdf" Jan 09 00:31:38 crc kubenswrapper[4945]: I0109 00:31:38.171855 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wksdf" Jan 09 00:31:38 crc kubenswrapper[4945]: I0109 00:31:38.670911 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wksdf"] Jan 09 00:31:39 crc kubenswrapper[4945]: I0109 00:31:39.647020 4945 generic.go:334] "Generic (PLEG): container finished" podID="16576ed6-d356-4395-912f-3f8474c3f514" containerID="5296d7a5584735500dcb27822c5ebc836477cae6a6bed7067b3c4e4fb35152f5" exitCode=0 Jan 09 00:31:39 crc kubenswrapper[4945]: I0109 00:31:39.647124 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksdf" event={"ID":"16576ed6-d356-4395-912f-3f8474c3f514","Type":"ContainerDied","Data":"5296d7a5584735500dcb27822c5ebc836477cae6a6bed7067b3c4e4fb35152f5"} Jan 09 00:31:39 crc kubenswrapper[4945]: I0109 00:31:39.647595 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksdf" event={"ID":"16576ed6-d356-4395-912f-3f8474c3f514","Type":"ContainerStarted","Data":"0d3fb20e297464cb9080d9c8dec3dc3608263fa92ccd1029c3ef46a88207a343"} Jan 09 00:31:41 crc kubenswrapper[4945]: I0109 00:31:41.661819 4945 generic.go:334] "Generic (PLEG): container finished" podID="16576ed6-d356-4395-912f-3f8474c3f514" containerID="2fe763b2a1663f13d26811d00390cc9f5841eb587009e1ffc501e16725afa29b" exitCode=0 Jan 09 00:31:41 crc kubenswrapper[4945]: I0109 00:31:41.661924 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksdf" event={"ID":"16576ed6-d356-4395-912f-3f8474c3f514","Type":"ContainerDied","Data":"2fe763b2a1663f13d26811d00390cc9f5841eb587009e1ffc501e16725afa29b"} Jan 09 00:31:42 crc kubenswrapper[4945]: I0109 00:31:42.670657 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksdf" event={"ID":"16576ed6-d356-4395-912f-3f8474c3f514","Type":"ContainerStarted","Data":"d2c0a4b29c1493d47d4d61645819c536bc76d8e098fbe61f837381f27e9d34b7"} Jan 09 00:31:42 crc kubenswrapper[4945]: I0109 00:31:42.691426 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wksdf" podStartSLOduration=3.216023737 podStartE2EDuration="5.691404319s" podCreationTimestamp="2026-01-09 00:31:37 +0000 UTC" firstStartedPulling="2026-01-09 00:31:39.649079385 +0000 UTC m=+4569.960238331" lastFinishedPulling="2026-01-09 00:31:42.124459967 +0000 UTC m=+4572.435618913" observedRunningTime="2026-01-09 00:31:42.685527605 +0000 UTC m=+4572.996686561" watchObservedRunningTime="2026-01-09 00:31:42.691404319 +0000 UTC m=+4573.002563265" Jan 09 00:31:43 crc kubenswrapper[4945]: I0109 00:31:42.999858 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490" Jan 09 00:31:43 crc kubenswrapper[4945]: E0109 00:31:43.000101 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Jan 09 00:31:43 crc kubenswrapper[4945]: I0109 00:31:42.999858 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490"
Jan 09 00:31:43 crc kubenswrapper[4945]: E0109 00:31:43.000101 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:31:48 crc kubenswrapper[4945]: I0109 00:31:48.172542 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wksdf"
Jan 09 00:31:48 crc kubenswrapper[4945]: I0109 00:31:48.173230 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wksdf"
Jan 09 00:31:48 crc kubenswrapper[4945]: I0109 00:31:48.222403 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wksdf"
Jan 09 00:31:48 crc kubenswrapper[4945]: I0109 00:31:48.760742 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wksdf"
Jan 09 00:31:48 crc kubenswrapper[4945]: I0109 00:31:48.814880 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wksdf"]
Jan 09 00:31:50 crc kubenswrapper[4945]: I0109 00:31:50.727557 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wksdf" podUID="16576ed6-d356-4395-912f-3f8474c3f514" containerName="registry-server" containerID="cri-o://d2c0a4b29c1493d47d4d61645819c536bc76d8e098fbe61f837381f27e9d34b7" gracePeriod=2
Jan 09 00:31:53 crc kubenswrapper[4945]: I0109 00:31:53.750105 4945 generic.go:334] "Generic (PLEG): container finished" podID="16576ed6-d356-4395-912f-3f8474c3f514" containerID="d2c0a4b29c1493d47d4d61645819c536bc76d8e098fbe61f837381f27e9d34b7" exitCode=0
Jan 09 00:31:53 crc kubenswrapper[4945]: I0109 00:31:53.750266 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksdf" event={"ID":"16576ed6-d356-4395-912f-3f8474c3f514","Type":"ContainerDied","Data":"d2c0a4b29c1493d47d4d61645819c536bc76d8e098fbe61f837381f27e9d34b7"}
Jan 09 00:31:53 crc kubenswrapper[4945]: I0109 00:31:53.859301 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wksdf"
Jan 09 00:31:53 crc kubenswrapper[4945]: I0109 00:31:53.986619 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16576ed6-d356-4395-912f-3f8474c3f514-catalog-content\") pod \"16576ed6-d356-4395-912f-3f8474c3f514\" (UID: \"16576ed6-d356-4395-912f-3f8474c3f514\") "
Jan 09 00:31:53 crc kubenswrapper[4945]: I0109 00:31:53.986732 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16576ed6-d356-4395-912f-3f8474c3f514-utilities\") pod \"16576ed6-d356-4395-912f-3f8474c3f514\" (UID: \"16576ed6-d356-4395-912f-3f8474c3f514\") "
Jan 09 00:31:53 crc kubenswrapper[4945]: I0109 00:31:53.986805 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w69ct\" (UniqueName: \"kubernetes.io/projected/16576ed6-d356-4395-912f-3f8474c3f514-kube-api-access-w69ct\") pod \"16576ed6-d356-4395-912f-3f8474c3f514\" (UID: \"16576ed6-d356-4395-912f-3f8474c3f514\") "
Jan 09 00:31:53 crc kubenswrapper[4945]: I0109 00:31:53.988652 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16576ed6-d356-4395-912f-3f8474c3f514-utilities" (OuterVolumeSpecName: "utilities") pod "16576ed6-d356-4395-912f-3f8474c3f514" (UID: "16576ed6-d356-4395-912f-3f8474c3f514"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 00:31:53 crc kubenswrapper[4945]: I0109 00:31:53.992736 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16576ed6-d356-4395-912f-3f8474c3f514-kube-api-access-w69ct" (OuterVolumeSpecName: "kube-api-access-w69ct") pod "16576ed6-d356-4395-912f-3f8474c3f514" (UID: "16576ed6-d356-4395-912f-3f8474c3f514"). InnerVolumeSpecName "kube-api-access-w69ct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:31:54 crc kubenswrapper[4945]: I0109 00:31:54.039575 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16576ed6-d356-4395-912f-3f8474c3f514-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16576ed6-d356-4395-912f-3f8474c3f514" (UID: "16576ed6-d356-4395-912f-3f8474c3f514"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 00:31:54 crc kubenswrapper[4945]: I0109 00:31:54.088135 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16576ed6-d356-4395-912f-3f8474c3f514-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 00:31:54 crc kubenswrapper[4945]: I0109 00:31:54.088168 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16576ed6-d356-4395-912f-3f8474c3f514-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 00:31:54 crc kubenswrapper[4945]: I0109 00:31:54.088178 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w69ct\" (UniqueName: \"kubernetes.io/projected/16576ed6-d356-4395-912f-3f8474c3f514-kube-api-access-w69ct\") on node \"crc\" DevicePath \"\""
Jan 09 00:31:54 crc kubenswrapper[4945]: I0109 00:31:54.760464 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wksdf" event={"ID":"16576ed6-d356-4395-912f-3f8474c3f514","Type":"ContainerDied","Data":"0d3fb20e297464cb9080d9c8dec3dc3608263fa92ccd1029c3ef46a88207a343"}
Jan 09 00:31:54 crc kubenswrapper[4945]: I0109 00:31:54.760923 4945 scope.go:117] "RemoveContainer" containerID="d2c0a4b29c1493d47d4d61645819c536bc76d8e098fbe61f837381f27e9d34b7"
Jan 09 00:31:54 crc kubenswrapper[4945]: I0109 00:31:54.760518 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wksdf"
Jan 09 00:31:54 crc kubenswrapper[4945]: I0109 00:31:54.793454 4945 scope.go:117] "RemoveContainer" containerID="2fe763b2a1663f13d26811d00390cc9f5841eb587009e1ffc501e16725afa29b"
Jan 09 00:31:54 crc kubenswrapper[4945]: I0109 00:31:54.799909 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wksdf"]
Jan 09 00:31:54 crc kubenswrapper[4945]: I0109 00:31:54.807036 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wksdf"]
Jan 09 00:31:54 crc kubenswrapper[4945]: I0109 00:31:54.814834 4945 scope.go:117] "RemoveContainer" containerID="5296d7a5584735500dcb27822c5ebc836477cae6a6bed7067b3c4e4fb35152f5"
Jan 09 00:31:55 crc kubenswrapper[4945]: I0109 00:31:55.000209 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490"
Jan 09 00:31:55 crc kubenswrapper[4945]: E0109 00:31:55.000403 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:31:56 crc kubenswrapper[4945]: I0109 00:31:56.025353 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16576ed6-d356-4395-912f-3f8474c3f514" path="/var/lib/kubelet/pods/16576ed6-d356-4395-912f-3f8474c3f514/volumes"
Jan 09 00:32:09 crc kubenswrapper[4945]: I0109 00:32:09.000565 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490"
Jan 09 00:32:09 crc kubenswrapper[4945]: E0109 00:32:09.002625 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:32:24 crc kubenswrapper[4945]: I0109 00:32:24.000235 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490"
Jan 09 00:32:24 crc kubenswrapper[4945]: E0109 00:32:24.001075 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:32:38 crc kubenswrapper[4945]: I0109 00:32:38.000470 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490"
Jan 09 00:32:38 crc kubenswrapper[4945]: E0109 00:32:38.001308 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:32:53 crc kubenswrapper[4945]: I0109 00:32:53.001783 4945 scope.go:117] "RemoveContainer" containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490"
Jan 09 00:32:54 crc kubenswrapper[4945]: I0109 00:32:54.147851 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"131175c614aae3d2144a82a8e5cc90358991986d393a3dd85e043f211f2e62f9"}
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.583776 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f4b4x"]
Jan 09 00:34:28 crc kubenswrapper[4945]: E0109 00:34:28.584780 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16576ed6-d356-4395-912f-3f8474c3f514" containerName="registry-server"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.584798 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="16576ed6-d356-4395-912f-3f8474c3f514" containerName="registry-server"
Jan 09 00:34:28 crc kubenswrapper[4945]: E0109 00:34:28.584813 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16576ed6-d356-4395-912f-3f8474c3f514" containerName="extract-utilities"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.584822 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="16576ed6-d356-4395-912f-3f8474c3f514" containerName="extract-utilities"
Jan 09 00:34:28 crc kubenswrapper[4945]: E0109 00:34:28.584831 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16576ed6-d356-4395-912f-3f8474c3f514" containerName="extract-content"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.584839 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="16576ed6-d356-4395-912f-3f8474c3f514" containerName="extract-content"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.585043 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="16576ed6-d356-4395-912f-3f8474c3f514" containerName="registry-server"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.586240 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4b4x"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.596970 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f4b4x"]
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.672311 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5f05223-089e-421f-a84e-910052b7ddde-utilities\") pod \"redhat-operators-f4b4x\" (UID: \"e5f05223-089e-421f-a84e-910052b7ddde\") " pod="openshift-marketplace/redhat-operators-f4b4x"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.672391 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59rwl\" (UniqueName: \"kubernetes.io/projected/e5f05223-089e-421f-a84e-910052b7ddde-kube-api-access-59rwl\") pod \"redhat-operators-f4b4x\" (UID: \"e5f05223-089e-421f-a84e-910052b7ddde\") " pod="openshift-marketplace/redhat-operators-f4b4x"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.672573 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5f05223-089e-421f-a84e-910052b7ddde-catalog-content\") pod \"redhat-operators-f4b4x\" (UID: \"e5f05223-089e-421f-a84e-910052b7ddde\") " pod="openshift-marketplace/redhat-operators-f4b4x"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.774681 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5f05223-089e-421f-a84e-910052b7ddde-utilities\") pod \"redhat-operators-f4b4x\" (UID: \"e5f05223-089e-421f-a84e-910052b7ddde\") " pod="openshift-marketplace/redhat-operators-f4b4x"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.774774 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59rwl\" (UniqueName: \"kubernetes.io/projected/e5f05223-089e-421f-a84e-910052b7ddde-kube-api-access-59rwl\") pod \"redhat-operators-f4b4x\" (UID: \"e5f05223-089e-421f-a84e-910052b7ddde\") " pod="openshift-marketplace/redhat-operators-f4b4x"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.774815 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5f05223-089e-421f-a84e-910052b7ddde-catalog-content\") pod \"redhat-operators-f4b4x\" (UID: \"e5f05223-089e-421f-a84e-910052b7ddde\") " pod="openshift-marketplace/redhat-operators-f4b4x"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.775828 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5f05223-089e-421f-a84e-910052b7ddde-utilities\") pod \"redhat-operators-f4b4x\" (UID: \"e5f05223-089e-421f-a84e-910052b7ddde\") " pod="openshift-marketplace/redhat-operators-f4b4x"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.775840 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2sbpr"]
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.776102 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5f05223-089e-421f-a84e-910052b7ddde-catalog-content\") pod \"redhat-operators-f4b4x\" (UID: \"e5f05223-089e-421f-a84e-910052b7ddde\") " pod="openshift-marketplace/redhat-operators-f4b4x"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.777543 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2sbpr"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.791396 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2sbpr"]
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.804072 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59rwl\" (UniqueName: \"kubernetes.io/projected/e5f05223-089e-421f-a84e-910052b7ddde-kube-api-access-59rwl\") pod \"redhat-operators-f4b4x\" (UID: \"e5f05223-089e-421f-a84e-910052b7ddde\") " pod="openshift-marketplace/redhat-operators-f4b4x"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.876415 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90deaace-31a9-4492-8064-8d46b90544f1-catalog-content\") pod \"certified-operators-2sbpr\" (UID: \"90deaace-31a9-4492-8064-8d46b90544f1\") " pod="openshift-marketplace/certified-operators-2sbpr"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.876850 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90deaace-31a9-4492-8064-8d46b90544f1-utilities\") pod \"certified-operators-2sbpr\" (UID: \"90deaace-31a9-4492-8064-8d46b90544f1\") " pod="openshift-marketplace/certified-operators-2sbpr"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.876921 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn7nc\" (UniqueName: \"kubernetes.io/projected/90deaace-31a9-4492-8064-8d46b90544f1-kube-api-access-wn7nc\") pod \"certified-operators-2sbpr\" (UID: \"90deaace-31a9-4492-8064-8d46b90544f1\") " pod="openshift-marketplace/certified-operators-2sbpr"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.916268 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4b4x"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.978750 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90deaace-31a9-4492-8064-8d46b90544f1-catalog-content\") pod \"certified-operators-2sbpr\" (UID: \"90deaace-31a9-4492-8064-8d46b90544f1\") " pod="openshift-marketplace/certified-operators-2sbpr"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.978833 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90deaace-31a9-4492-8064-8d46b90544f1-utilities\") pod \"certified-operators-2sbpr\" (UID: \"90deaace-31a9-4492-8064-8d46b90544f1\") " pod="openshift-marketplace/certified-operators-2sbpr"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.978880 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn7nc\" (UniqueName: \"kubernetes.io/projected/90deaace-31a9-4492-8064-8d46b90544f1-kube-api-access-wn7nc\") pod \"certified-operators-2sbpr\" (UID: \"90deaace-31a9-4492-8064-8d46b90544f1\") " pod="openshift-marketplace/certified-operators-2sbpr"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.979483 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90deaace-31a9-4492-8064-8d46b90544f1-catalog-content\") pod \"certified-operators-2sbpr\" (UID: \"90deaace-31a9-4492-8064-8d46b90544f1\") " pod="openshift-marketplace/certified-operators-2sbpr"
Jan 09 00:34:28 crc kubenswrapper[4945]: I0109 00:34:28.979672 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90deaace-31a9-4492-8064-8d46b90544f1-utilities\") pod \"certified-operators-2sbpr\" (UID: \"90deaace-31a9-4492-8064-8d46b90544f1\") " pod="openshift-marketplace/certified-operators-2sbpr"
Jan 09 00:34:29 crc kubenswrapper[4945]: I0109 00:34:29.003782 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn7nc\" (UniqueName: \"kubernetes.io/projected/90deaace-31a9-4492-8064-8d46b90544f1-kube-api-access-wn7nc\") pod \"certified-operators-2sbpr\" (UID: \"90deaace-31a9-4492-8064-8d46b90544f1\") " pod="openshift-marketplace/certified-operators-2sbpr"
Jan 09 00:34:29 crc kubenswrapper[4945]: I0109 00:34:29.095244 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2sbpr"
Jan 09 00:34:29 crc kubenswrapper[4945]: I0109 00:34:29.430491 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f4b4x"]
Jan 09 00:34:29 crc kubenswrapper[4945]: I0109 00:34:29.620944 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2sbpr"]
Jan 09 00:34:29 crc kubenswrapper[4945]: W0109 00:34:29.622768 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90deaace_31a9_4492_8064_8d46b90544f1.slice/crio-0040e705f67dd3d4ccc522ea1ca7a879b07ce98ea480c2949162bacc8b478a98 WatchSource:0}: Error finding container 0040e705f67dd3d4ccc522ea1ca7a879b07ce98ea480c2949162bacc8b478a98: Status 404 returned error can't find the container with id 0040e705f67dd3d4ccc522ea1ca7a879b07ce98ea480c2949162bacc8b478a98
Jan 09 00:34:29 crc kubenswrapper[4945]: I0109 00:34:29.870050 4945 generic.go:334] "Generic (PLEG): container finished" podID="e5f05223-089e-421f-a84e-910052b7ddde" containerID="67e9b11bd8facfa5b319e183160bdd219244e8a356a4a4ed94b83da7a9667c25" exitCode=0
Jan 09 00:34:29 crc kubenswrapper[4945]: I0109 00:34:29.870156 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4b4x" event={"ID":"e5f05223-089e-421f-a84e-910052b7ddde","Type":"ContainerDied","Data":"67e9b11bd8facfa5b319e183160bdd219244e8a356a4a4ed94b83da7a9667c25"}
Jan 09 00:34:29 crc kubenswrapper[4945]: I0109 00:34:29.870186 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4b4x" event={"ID":"e5f05223-089e-421f-a84e-910052b7ddde","Type":"ContainerStarted","Data":"a438d3e5577645bc118f06564cc6cac6343fa50756caa6a9335e19bae86180a5"}
Jan 09 00:34:29 crc kubenswrapper[4945]: I0109 00:34:29.872859 4945 generic.go:334] "Generic (PLEG): container finished" podID="90deaace-31a9-4492-8064-8d46b90544f1" containerID="8b2fbc06589180df7d200040eac47e18dbb02914d36366f54876ac8425fe0fe2" exitCode=0
Jan 09 00:34:29 crc kubenswrapper[4945]: I0109 00:34:29.872905 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sbpr" event={"ID":"90deaace-31a9-4492-8064-8d46b90544f1","Type":"ContainerDied","Data":"8b2fbc06589180df7d200040eac47e18dbb02914d36366f54876ac8425fe0fe2"}
Jan 09 00:34:29 crc kubenswrapper[4945]: I0109 00:34:29.872940 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sbpr" event={"ID":"90deaace-31a9-4492-8064-8d46b90544f1","Type":"ContainerStarted","Data":"0040e705f67dd3d4ccc522ea1ca7a879b07ce98ea480c2949162bacc8b478a98"}
Jan 09 00:34:30 crc kubenswrapper[4945]: I0109 00:34:30.886251 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sbpr" event={"ID":"90deaace-31a9-4492-8064-8d46b90544f1","Type":"ContainerStarted","Data":"07a9e739b365c50c73b56fd7f4d32f39613f332011585f2f925674b5176da201"}
Jan 09 00:34:31 crc kubenswrapper[4945]: I0109 00:34:31.896461 4945 generic.go:334] "Generic (PLEG): container finished" podID="e5f05223-089e-421f-a84e-910052b7ddde" containerID="109cd236cd2299616beab7e9eb94a93e89a56bd4894c7e2965b86cb42143f494" exitCode=0
Jan 09 00:34:31 crc kubenswrapper[4945]: I0109 00:34:31.896577 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4b4x" event={"ID":"e5f05223-089e-421f-a84e-910052b7ddde","Type":"ContainerDied","Data":"109cd236cd2299616beab7e9eb94a93e89a56bd4894c7e2965b86cb42143f494"}
Jan 09 00:34:31 crc kubenswrapper[4945]: I0109 00:34:31.900280 4945 generic.go:334] "Generic (PLEG): container finished" podID="90deaace-31a9-4492-8064-8d46b90544f1" containerID="07a9e739b365c50c73b56fd7f4d32f39613f332011585f2f925674b5176da201" exitCode=0
Jan 09 00:34:31 crc kubenswrapper[4945]: I0109 00:34:31.900390 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sbpr" event={"ID":"90deaace-31a9-4492-8064-8d46b90544f1","Type":"ContainerDied","Data":"07a9e739b365c50c73b56fd7f4d32f39613f332011585f2f925674b5176da201"}
Jan 09 00:34:32 crc kubenswrapper[4945]: I0109 00:34:32.912627 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sbpr" event={"ID":"90deaace-31a9-4492-8064-8d46b90544f1","Type":"ContainerStarted","Data":"a73f6aa81802338b84a65bfdddacba76296c4d24540a269084f7e12f62546bdf"}
Jan 09 00:34:32 crc kubenswrapper[4945]: I0109 00:34:32.917019 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4b4x" event={"ID":"e5f05223-089e-421f-a84e-910052b7ddde","Type":"ContainerStarted","Data":"95c1c4e5dd2df4e8f973af824f4fd78c2515452e81b22ef8d882d4c72b436490"}
Jan 09 00:34:32 crc kubenswrapper[4945]: I0109 00:34:32.937406 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2sbpr" podStartSLOduration=2.459018677 podStartE2EDuration="4.93738516s" podCreationTimestamp="2026-01-09 00:34:28 +0000 UTC" firstStartedPulling="2026-01-09 00:34:29.875017346 +0000 UTC m=+4740.186176292" lastFinishedPulling="2026-01-09 00:34:32.353383829 +0000 UTC m=+4742.664542775" observedRunningTime="2026-01-09 00:34:32.935211466 +0000 UTC m=+4743.246370552" watchObservedRunningTime="2026-01-09 00:34:32.93738516 +0000 UTC m=+4743.248544106"
Jan 09 00:34:32 crc kubenswrapper[4945]: I0109 00:34:32.960836 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f4b4x" podStartSLOduration=2.512064918 podStartE2EDuration="4.960811045s" podCreationTimestamp="2026-01-09 00:34:28 +0000 UTC" firstStartedPulling="2026-01-09 00:34:29.872955165 +0000 UTC m=+4740.184114111" lastFinishedPulling="2026-01-09 00:34:32.321701282 +0000 UTC m=+4742.632860238" observedRunningTime="2026-01-09 00:34:32.956305364 +0000 UTC m=+4743.267464310" watchObservedRunningTime="2026-01-09 00:34:32.960811045 +0000 UTC m=+4743.271969991"
Jan 09 00:34:38 crc kubenswrapper[4945]: I0109 00:34:38.917379 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f4b4x"
Jan 09 00:34:38 crc kubenswrapper[4945]: I0109 00:34:38.918094 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f4b4x"
Jan 09 00:34:38 crc kubenswrapper[4945]: I0109 00:34:38.966531 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f4b4x"
Jan 09 00:34:39 crc kubenswrapper[4945]: I0109 00:34:39.019344 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f4b4x"
Jan 09 00:34:39 crc kubenswrapper[4945]: I0109 00:34:39.096651 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2sbpr"
Jan 09 00:34:39 crc kubenswrapper[4945]: I0109 00:34:39.096723 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2sbpr"
Jan 09 00:34:39 crc kubenswrapper[4945]: I0109 00:34:39.142674 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2sbpr"
Jan 09 00:34:39 crc kubenswrapper[4945]: I0109 00:34:39.208179 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4b4x"]
Jan 09 00:34:40 crc kubenswrapper[4945]: I0109 00:34:40.025246 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2sbpr"
Jan 09 00:34:40 crc kubenswrapper[4945]: I0109 00:34:40.976605 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f4b4x" podUID="e5f05223-089e-421f-a84e-910052b7ddde" containerName="registry-server" containerID="cri-o://95c1c4e5dd2df4e8f973af824f4fd78c2515452e81b22ef8d882d4c72b436490" gracePeriod=2
Jan 09 00:34:41 crc kubenswrapper[4945]: I0109 00:34:41.408872 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2sbpr"]
Jan 09 00:34:41 crc kubenswrapper[4945]: I0109 00:34:41.982600 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2sbpr" podUID="90deaace-31a9-4492-8064-8d46b90544f1" containerName="registry-server" containerID="cri-o://a73f6aa81802338b84a65bfdddacba76296c4d24540a269084f7e12f62546bdf" gracePeriod=2
Jan 09 00:34:42 crc kubenswrapper[4945]: I0109 00:34:42.774163 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4b4x"
Jan 09 00:34:42 crc kubenswrapper[4945]: I0109 00:34:42.799919 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59rwl\" (UniqueName: \"kubernetes.io/projected/e5f05223-089e-421f-a84e-910052b7ddde-kube-api-access-59rwl\") pod \"e5f05223-089e-421f-a84e-910052b7ddde\" (UID: \"e5f05223-089e-421f-a84e-910052b7ddde\") "
Jan 09 00:34:42 crc kubenswrapper[4945]: I0109 00:34:42.800056 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5f05223-089e-421f-a84e-910052b7ddde-catalog-content\") pod \"e5f05223-089e-421f-a84e-910052b7ddde\" (UID: \"e5f05223-089e-421f-a84e-910052b7ddde\") "
Jan 09 00:34:42 crc kubenswrapper[4945]: I0109 00:34:42.800187 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5f05223-089e-421f-a84e-910052b7ddde-utilities\") pod \"e5f05223-089e-421f-a84e-910052b7ddde\" (UID: \"e5f05223-089e-421f-a84e-910052b7ddde\") "
Jan 09 00:34:42 crc kubenswrapper[4945]: I0109 00:34:42.801253 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5f05223-089e-421f-a84e-910052b7ddde-utilities" (OuterVolumeSpecName: "utilities") pod "e5f05223-089e-421f-a84e-910052b7ddde" (UID: "e5f05223-089e-421f-a84e-910052b7ddde"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 00:34:42 crc kubenswrapper[4945]: I0109 00:34:42.811575 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f05223-089e-421f-a84e-910052b7ddde-kube-api-access-59rwl" (OuterVolumeSpecName: "kube-api-access-59rwl") pod "e5f05223-089e-421f-a84e-910052b7ddde" (UID: "e5f05223-089e-421f-a84e-910052b7ddde"). InnerVolumeSpecName "kube-api-access-59rwl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:34:42 crc kubenswrapper[4945]: I0109 00:34:42.902230 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59rwl\" (UniqueName: \"kubernetes.io/projected/e5f05223-089e-421f-a84e-910052b7ddde-kube-api-access-59rwl\") on node \"crc\" DevicePath \"\""
Jan 09 00:34:42 crc kubenswrapper[4945]: I0109 00:34:42.902272 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5f05223-089e-421f-a84e-910052b7ddde-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 00:34:42 crc kubenswrapper[4945]: I0109 00:34:42.960588 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5f05223-089e-421f-a84e-910052b7ddde-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5f05223-089e-421f-a84e-910052b7ddde" (UID: "e5f05223-089e-421f-a84e-910052b7ddde"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 00:34:42 crc kubenswrapper[4945]: I0109 00:34:42.992058 4945 generic.go:334] "Generic (PLEG): container finished" podID="e5f05223-089e-421f-a84e-910052b7ddde" containerID="95c1c4e5dd2df4e8f973af824f4fd78c2515452e81b22ef8d882d4c72b436490" exitCode=0
Jan 09 00:34:42 crc kubenswrapper[4945]: I0109 00:34:42.992143 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4b4x"
Jan 09 00:34:42 crc kubenswrapper[4945]: I0109 00:34:42.992179 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4b4x" event={"ID":"e5f05223-089e-421f-a84e-910052b7ddde","Type":"ContainerDied","Data":"95c1c4e5dd2df4e8f973af824f4fd78c2515452e81b22ef8d882d4c72b436490"}
Jan 09 00:34:42 crc kubenswrapper[4945]: I0109 00:34:42.992227 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4b4x" event={"ID":"e5f05223-089e-421f-a84e-910052b7ddde","Type":"ContainerDied","Data":"a438d3e5577645bc118f06564cc6cac6343fa50756caa6a9335e19bae86180a5"}
Jan 09 00:34:42 crc kubenswrapper[4945]: I0109 00:34:42.992248 4945 scope.go:117] "RemoveContainer" containerID="95c1c4e5dd2df4e8f973af824f4fd78c2515452e81b22ef8d882d4c72b436490"
Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.000628 4945 generic.go:334] "Generic (PLEG): container finished" podID="90deaace-31a9-4492-8064-8d46b90544f1" containerID="a73f6aa81802338b84a65bfdddacba76296c4d24540a269084f7e12f62546bdf" exitCode=0
Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.000679 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sbpr" event={"ID":"90deaace-31a9-4492-8064-8d46b90544f1","Type":"ContainerDied","Data":"a73f6aa81802338b84a65bfdddacba76296c4d24540a269084f7e12f62546bdf"}
Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.003884 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5f05223-089e-421f-a84e-910052b7ddde-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.018459 4945 scope.go:117] "RemoveContainer" containerID="109cd236cd2299616beab7e9eb94a93e89a56bd4894c7e2965b86cb42143f494"
Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.048164 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f4b4x"]
Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.053966 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f4b4x"]
Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.067099 4945 scope.go:117] "RemoveContainer" containerID="67e9b11bd8facfa5b319e183160bdd219244e8a356a4a4ed94b83da7a9667c25"
Need to start a new one" pod="openshift-marketplace/certified-operators-2sbpr" Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.085986 4945 scope.go:117] "RemoveContainer" containerID="95c1c4e5dd2df4e8f973af824f4fd78c2515452e81b22ef8d882d4c72b436490" Jan 09 00:34:43 crc kubenswrapper[4945]: E0109 00:34:43.087136 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95c1c4e5dd2df4e8f973af824f4fd78c2515452e81b22ef8d882d4c72b436490\": container with ID starting with 95c1c4e5dd2df4e8f973af824f4fd78c2515452e81b22ef8d882d4c72b436490 not found: ID does not exist" containerID="95c1c4e5dd2df4e8f973af824f4fd78c2515452e81b22ef8d882d4c72b436490" Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.087181 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95c1c4e5dd2df4e8f973af824f4fd78c2515452e81b22ef8d882d4c72b436490"} err="failed to get container status \"95c1c4e5dd2df4e8f973af824f4fd78c2515452e81b22ef8d882d4c72b436490\": rpc error: code = NotFound desc = could not find container \"95c1c4e5dd2df4e8f973af824f4fd78c2515452e81b22ef8d882d4c72b436490\": container with ID starting with 95c1c4e5dd2df4e8f973af824f4fd78c2515452e81b22ef8d882d4c72b436490 not found: ID does not exist" Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.087211 4945 scope.go:117] "RemoveContainer" containerID="109cd236cd2299616beab7e9eb94a93e89a56bd4894c7e2965b86cb42143f494" Jan 09 00:34:43 crc kubenswrapper[4945]: E0109 00:34:43.087808 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"109cd236cd2299616beab7e9eb94a93e89a56bd4894c7e2965b86cb42143f494\": container with ID starting with 109cd236cd2299616beab7e9eb94a93e89a56bd4894c7e2965b86cb42143f494 not found: ID does not exist" containerID="109cd236cd2299616beab7e9eb94a93e89a56bd4894c7e2965b86cb42143f494" Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.087873 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"109cd236cd2299616beab7e9eb94a93e89a56bd4894c7e2965b86cb42143f494"} err="failed to get container status \"109cd236cd2299616beab7e9eb94a93e89a56bd4894c7e2965b86cb42143f494\": rpc error: code = NotFound desc = could not find container \"109cd236cd2299616beab7e9eb94a93e89a56bd4894c7e2965b86cb42143f494\": container with ID starting with 109cd236cd2299616beab7e9eb94a93e89a56bd4894c7e2965b86cb42143f494 not found: ID does not exist" Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.087908 4945 scope.go:117] "RemoveContainer" containerID="67e9b11bd8facfa5b319e183160bdd219244e8a356a4a4ed94b83da7a9667c25" Jan 09 00:34:43 crc kubenswrapper[4945]: E0109 00:34:43.088263 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67e9b11bd8facfa5b319e183160bdd219244e8a356a4a4ed94b83da7a9667c25\": container with ID starting with 67e9b11bd8facfa5b319e183160bdd219244e8a356a4a4ed94b83da7a9667c25 not found: ID does not exist" containerID="67e9b11bd8facfa5b319e183160bdd219244e8a356a4a4ed94b83da7a9667c25" Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.088295 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67e9b11bd8facfa5b319e183160bdd219244e8a356a4a4ed94b83da7a9667c25"} err="failed to get container status \"67e9b11bd8facfa5b319e183160bdd219244e8a356a4a4ed94b83da7a9667c25\": rpc error: code = 
NotFound desc = could not find container \"67e9b11bd8facfa5b319e183160bdd219244e8a356a4a4ed94b83da7a9667c25\": container with ID starting with 67e9b11bd8facfa5b319e183160bdd219244e8a356a4a4ed94b83da7a9667c25 not found: ID does not exist" Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.105084 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90deaace-31a9-4492-8064-8d46b90544f1-catalog-content\") pod \"90deaace-31a9-4492-8064-8d46b90544f1\" (UID: \"90deaace-31a9-4492-8064-8d46b90544f1\") " Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.105181 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn7nc\" (UniqueName: \"kubernetes.io/projected/90deaace-31a9-4492-8064-8d46b90544f1-kube-api-access-wn7nc\") pod \"90deaace-31a9-4492-8064-8d46b90544f1\" (UID: \"90deaace-31a9-4492-8064-8d46b90544f1\") " Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.105294 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90deaace-31a9-4492-8064-8d46b90544f1-utilities\") pod \"90deaace-31a9-4492-8064-8d46b90544f1\" (UID: \"90deaace-31a9-4492-8064-8d46b90544f1\") " Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.106531 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90deaace-31a9-4492-8064-8d46b90544f1-utilities" (OuterVolumeSpecName: "utilities") pod "90deaace-31a9-4492-8064-8d46b90544f1" (UID: "90deaace-31a9-4492-8064-8d46b90544f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.119520 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90deaace-31a9-4492-8064-8d46b90544f1-kube-api-access-wn7nc" (OuterVolumeSpecName: "kube-api-access-wn7nc") pod "90deaace-31a9-4492-8064-8d46b90544f1" (UID: "90deaace-31a9-4492-8064-8d46b90544f1"). InnerVolumeSpecName "kube-api-access-wn7nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.162491 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90deaace-31a9-4492-8064-8d46b90544f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90deaace-31a9-4492-8064-8d46b90544f1" (UID: "90deaace-31a9-4492-8064-8d46b90544f1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.207192 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90deaace-31a9-4492-8064-8d46b90544f1-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.207224 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90deaace-31a9-4492-8064-8d46b90544f1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 00:34:43 crc kubenswrapper[4945]: I0109 00:34:43.207235 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn7nc\" (UniqueName: \"kubernetes.io/projected/90deaace-31a9-4492-8064-8d46b90544f1-kube-api-access-wn7nc\") on node \"crc\" DevicePath \"\"" Jan 09 00:34:44 crc kubenswrapper[4945]: I0109 00:34:44.009081 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5f05223-089e-421f-a84e-910052b7ddde" path="/var/lib/kubelet/pods/e5f05223-089e-421f-a84e-910052b7ddde/volumes" Jan 09 00:34:44 crc kubenswrapper[4945]: I0109 00:34:44.010288 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sbpr" event={"ID":"90deaace-31a9-4492-8064-8d46b90544f1","Type":"ContainerDied","Data":"0040e705f67dd3d4ccc522ea1ca7a879b07ce98ea480c2949162bacc8b478a98"} Jan 09 00:34:44 crc kubenswrapper[4945]: I0109 00:34:44.010313 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2sbpr" Jan 09 00:34:44 crc kubenswrapper[4945]: I0109 00:34:44.010366 4945 scope.go:117] "RemoveContainer" containerID="a73f6aa81802338b84a65bfdddacba76296c4d24540a269084f7e12f62546bdf" Jan 09 00:34:44 crc kubenswrapper[4945]: I0109 00:34:44.028879 4945 scope.go:117] "RemoveContainer" containerID="07a9e739b365c50c73b56fd7f4d32f39613f332011585f2f925674b5176da201" Jan 09 00:34:44 crc kubenswrapper[4945]: I0109 00:34:44.050774 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2sbpr"] Jan 09 00:34:44 crc kubenswrapper[4945]: I0109 00:34:44.055432 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2sbpr"] Jan 09 00:34:44 crc kubenswrapper[4945]: I0109 00:34:44.067943 4945 scope.go:117] "RemoveContainer" containerID="8b2fbc06589180df7d200040eac47e18dbb02914d36366f54876ac8425fe0fe2" Jan 09 00:34:46 crc kubenswrapper[4945]: I0109 00:34:46.011048 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90deaace-31a9-4492-8064-8d46b90544f1" path="/var/lib/kubelet/pods/90deaace-31a9-4492-8064-8d46b90544f1/volumes" Jan 09 00:35:13 crc kubenswrapper[4945]: I0109 00:35:13.578303 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:35:13 crc kubenswrapper[4945]: I0109 00:35:13.578900 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:35:43 crc 
kubenswrapper[4945]: I0109 00:35:43.578025 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:35:43 crc kubenswrapper[4945]: I0109 00:35:43.578660 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:36:13 crc kubenswrapper[4945]: I0109 00:36:13.578372 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:36:13 crc kubenswrapper[4945]: I0109 00:36:13.578937 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:36:13 crc kubenswrapper[4945]: I0109 00:36:13.578985 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 09 00:36:13 crc kubenswrapper[4945]: I0109 00:36:13.579722 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"131175c614aae3d2144a82a8e5cc90358991986d393a3dd85e043f211f2e62f9"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 00:36:13 crc kubenswrapper[4945]: I0109 00:36:13.579787 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://131175c614aae3d2144a82a8e5cc90358991986d393a3dd85e043f211f2e62f9" gracePeriod=600 Jan 09 00:36:14 crc kubenswrapper[4945]: I0109 00:36:14.656694 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="131175c614aae3d2144a82a8e5cc90358991986d393a3dd85e043f211f2e62f9" exitCode=0 Jan 09 00:36:14 crc kubenswrapper[4945]: I0109 00:36:14.656767 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"131175c614aae3d2144a82a8e5cc90358991986d393a3dd85e043f211f2e62f9"} Jan 09 00:36:14 crc kubenswrapper[4945]: I0109 00:36:14.657350 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5"} Jan 09 00:36:14 crc kubenswrapper[4945]: I0109 00:36:14.657372 4945 scope.go:117] "RemoveContainer" 
containerID="9c356723060226f5262c9b3ff882e5e7ede91de7402e43d1b92eaf70c20d1490" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.308106 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-76w4z"] Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.329531 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-76w4z"] Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.482116 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-r69zw"] Jan 09 00:36:46 crc kubenswrapper[4945]: E0109 00:36:46.483942 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f05223-089e-421f-a84e-910052b7ddde" containerName="extract-utilities" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.483966 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f05223-089e-421f-a84e-910052b7ddde" containerName="extract-utilities" Jan 09 00:36:46 crc kubenswrapper[4945]: E0109 00:36:46.483985 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90deaace-31a9-4492-8064-8d46b90544f1" containerName="registry-server" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.484005 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="90deaace-31a9-4492-8064-8d46b90544f1" containerName="registry-server" Jan 09 00:36:46 crc kubenswrapper[4945]: E0109 00:36:46.484015 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f05223-089e-421f-a84e-910052b7ddde" containerName="extract-content" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.484022 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f05223-089e-421f-a84e-910052b7ddde" containerName="extract-content" Jan 09 00:36:46 crc kubenswrapper[4945]: E0109 00:36:46.484036 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90deaace-31a9-4492-8064-8d46b90544f1" containerName="extract-content" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.484042 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="90deaace-31a9-4492-8064-8d46b90544f1" containerName="extract-content" Jan 09 00:36:46 crc kubenswrapper[4945]: E0109 00:36:46.484060 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f05223-089e-421f-a84e-910052b7ddde" containerName="registry-server" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.484066 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f05223-089e-421f-a84e-910052b7ddde" containerName="registry-server" Jan 09 00:36:46 crc kubenswrapper[4945]: E0109 00:36:46.484073 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90deaace-31a9-4492-8064-8d46b90544f1" containerName="extract-utilities" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.484078 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="90deaace-31a9-4492-8064-8d46b90544f1" containerName="extract-utilities" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.484209 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="90deaace-31a9-4492-8064-8d46b90544f1" containerName="registry-server" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.484223 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f05223-089e-421f-a84e-910052b7ddde" containerName="registry-server" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.484766 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-r69zw" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.487119 4945 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-vxd97" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.487578 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.487859 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.488112 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.489596 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-r69zw"] Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.618229 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/985802b0-7875-4556-b625-284f7253f956-crc-storage\") pod \"crc-storage-crc-r69zw\" (UID: \"985802b0-7875-4556-b625-284f7253f956\") " pod="crc-storage/crc-storage-crc-r69zw" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.618330 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/985802b0-7875-4556-b625-284f7253f956-node-mnt\") pod \"crc-storage-crc-r69zw\" (UID: \"985802b0-7875-4556-b625-284f7253f956\") " pod="crc-storage/crc-storage-crc-r69zw" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.618371 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2gq5\" (UniqueName: \"kubernetes.io/projected/985802b0-7875-4556-b625-284f7253f956-kube-api-access-c2gq5\") pod \"crc-storage-crc-r69zw\" (UID: \"985802b0-7875-4556-b625-284f7253f956\") " pod="crc-storage/crc-storage-crc-r69zw" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.719491 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/985802b0-7875-4556-b625-284f7253f956-node-mnt\") pod \"crc-storage-crc-r69zw\" (UID: \"985802b0-7875-4556-b625-284f7253f956\") " pod="crc-storage/crc-storage-crc-r69zw" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.719547 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2gq5\" (UniqueName: \"kubernetes.io/projected/985802b0-7875-4556-b625-284f7253f956-kube-api-access-c2gq5\") pod \"crc-storage-crc-r69zw\" (UID: \"985802b0-7875-4556-b625-284f7253f956\") " pod="crc-storage/crc-storage-crc-r69zw" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.719606 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/985802b0-7875-4556-b625-284f7253f956-crc-storage\") pod \"crc-storage-crc-r69zw\" (UID: \"985802b0-7875-4556-b625-284f7253f956\") " pod="crc-storage/crc-storage-crc-r69zw" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.719816 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/985802b0-7875-4556-b625-284f7253f956-node-mnt\") pod \"crc-storage-crc-r69zw\" (UID: \"985802b0-7875-4556-b625-284f7253f956\") " 
pod="crc-storage/crc-storage-crc-r69zw" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.720295 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/985802b0-7875-4556-b625-284f7253f956-crc-storage\") pod \"crc-storage-crc-r69zw\" (UID: \"985802b0-7875-4556-b625-284f7253f956\") " pod="crc-storage/crc-storage-crc-r69zw" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.746371 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2gq5\" (UniqueName: \"kubernetes.io/projected/985802b0-7875-4556-b625-284f7253f956-kube-api-access-c2gq5\") pod \"crc-storage-crc-r69zw\" (UID: \"985802b0-7875-4556-b625-284f7253f956\") " pod="crc-storage/crc-storage-crc-r69zw" Jan 09 00:36:46 crc kubenswrapper[4945]: I0109 00:36:46.808103 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-r69zw" Jan 09 00:36:47 crc kubenswrapper[4945]: I0109 00:36:47.221189 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-r69zw"] Jan 09 00:36:47 crc kubenswrapper[4945]: I0109 00:36:47.226252 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 00:36:47 crc kubenswrapper[4945]: I0109 00:36:47.906148 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-r69zw" event={"ID":"985802b0-7875-4556-b625-284f7253f956","Type":"ContainerStarted","Data":"661f293de5f26bd5fa0f6efa3ebc964ad1f4e49dfe446767e3c86c2c41660eaa"} Jan 09 00:36:48 crc kubenswrapper[4945]: I0109 00:36:48.009152 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e424d84a-0bbd-48ba-aec1-e2fecd4578f1" path="/var/lib/kubelet/pods/e424d84a-0bbd-48ba-aec1-e2fecd4578f1/volumes" Jan 09 00:36:48 crc kubenswrapper[4945]: I0109 00:36:48.915772 4945 generic.go:334] "Generic (PLEG): container finished" podID="985802b0-7875-4556-b625-284f7253f956" containerID="19433dd7ebeded123b956bcc6f1000ef59ba9a41302a00d2d0d2c5031b060d9a" exitCode=0 Jan 09 00:36:48 crc kubenswrapper[4945]: I0109 00:36:48.916037 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-r69zw" event={"ID":"985802b0-7875-4556-b625-284f7253f956","Type":"ContainerDied","Data":"19433dd7ebeded123b956bcc6f1000ef59ba9a41302a00d2d0d2c5031b060d9a"} Jan 09 00:36:50 crc kubenswrapper[4945]: I0109 00:36:50.210041 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-r69zw" Jan 09 00:36:50 crc kubenswrapper[4945]: I0109 00:36:50.368611 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2gq5\" (UniqueName: \"kubernetes.io/projected/985802b0-7875-4556-b625-284f7253f956-kube-api-access-c2gq5\") pod \"985802b0-7875-4556-b625-284f7253f956\" (UID: \"985802b0-7875-4556-b625-284f7253f956\") " Jan 09 00:36:50 crc kubenswrapper[4945]: I0109 00:36:50.368656 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/985802b0-7875-4556-b625-284f7253f956-crc-storage\") pod \"985802b0-7875-4556-b625-284f7253f956\" (UID: \"985802b0-7875-4556-b625-284f7253f956\") " Jan 09 00:36:50 crc kubenswrapper[4945]: I0109 00:36:50.368743 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/985802b0-7875-4556-b625-284f7253f956-node-mnt\") pod \"985802b0-7875-4556-b625-284f7253f956\" (UID: \"985802b0-7875-4556-b625-284f7253f956\") " Jan 09 00:36:50 crc kubenswrapper[4945]: I0109 00:36:50.368983 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985802b0-7875-4556-b625-284f7253f956-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "985802b0-7875-4556-b625-284f7253f956" (UID: "985802b0-7875-4556-b625-284f7253f956"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 00:36:50 crc kubenswrapper[4945]: I0109 00:36:50.369622 4945 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/985802b0-7875-4556-b625-284f7253f956-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 09 00:36:50 crc kubenswrapper[4945]: I0109 00:36:50.382285 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/985802b0-7875-4556-b625-284f7253f956-kube-api-access-c2gq5" (OuterVolumeSpecName: "kube-api-access-c2gq5") pod "985802b0-7875-4556-b625-284f7253f956" (UID: "985802b0-7875-4556-b625-284f7253f956"). InnerVolumeSpecName "kube-api-access-c2gq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:36:50 crc kubenswrapper[4945]: I0109 00:36:50.396057 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/985802b0-7875-4556-b625-284f7253f956-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "985802b0-7875-4556-b625-284f7253f956" (UID: "985802b0-7875-4556-b625-284f7253f956"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:36:50 crc kubenswrapper[4945]: I0109 00:36:50.471390 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2gq5\" (UniqueName: \"kubernetes.io/projected/985802b0-7875-4556-b625-284f7253f956-kube-api-access-c2gq5\") on node \"crc\" DevicePath \"\"" Jan 09 00:36:50 crc kubenswrapper[4945]: I0109 00:36:50.471423 4945 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/985802b0-7875-4556-b625-284f7253f956-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 09 00:36:50 crc kubenswrapper[4945]: I0109 00:36:50.938732 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-r69zw" event={"ID":"985802b0-7875-4556-b625-284f7253f956","Type":"ContainerDied","Data":"661f293de5f26bd5fa0f6efa3ebc964ad1f4e49dfe446767e3c86c2c41660eaa"} Jan 09 00:36:50 crc kubenswrapper[4945]: I0109 00:36:50.939388 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="661f293de5f26bd5fa0f6efa3ebc964ad1f4e49dfe446767e3c86c2c41660eaa" Jan 09 00:36:50 crc kubenswrapper[4945]: I0109 00:36:50.938790 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-r69zw" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.512784 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-r69zw"] Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.517503 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-r69zw"] Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.657494 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-ct65m"] Jan 09 00:36:52 crc kubenswrapper[4945]: E0109 00:36:52.657904 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985802b0-7875-4556-b625-284f7253f956" containerName="storage" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.657926 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="985802b0-7875-4556-b625-284f7253f956" containerName="storage" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.658104 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="985802b0-7875-4556-b625-284f7253f956" containerName="storage" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.658670 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-ct65m" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.660943 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.660952 4945 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-vxd97" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.660978 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.661248 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.675223 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-ct65m"] Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.815910 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9m5v\" (UniqueName: \"kubernetes.io/projected/bec48459-fea2-41a5-9754-36174bc30d41-kube-api-access-j9m5v\") pod \"crc-storage-crc-ct65m\" (UID: \"bec48459-fea2-41a5-9754-36174bc30d41\") " pod="crc-storage/crc-storage-crc-ct65m" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.815975 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/bec48459-fea2-41a5-9754-36174bc30d41-node-mnt\") pod \"crc-storage-crc-ct65m\" (UID: \"bec48459-fea2-41a5-9754-36174bc30d41\") " pod="crc-storage/crc-storage-crc-ct65m" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.816035 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/bec48459-fea2-41a5-9754-36174bc30d41-crc-storage\") pod \"crc-storage-crc-ct65m\" (UID: \"bec48459-fea2-41a5-9754-36174bc30d41\") " pod="crc-storage/crc-storage-crc-ct65m" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.917634 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9m5v\" (UniqueName: \"kubernetes.io/projected/bec48459-fea2-41a5-9754-36174bc30d41-kube-api-access-j9m5v\") pod \"crc-storage-crc-ct65m\" (UID: \"bec48459-fea2-41a5-9754-36174bc30d41\") " pod="crc-storage/crc-storage-crc-ct65m" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.917680 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/bec48459-fea2-41a5-9754-36174bc30d41-node-mnt\") pod \"crc-storage-crc-ct65m\" (UID: \"bec48459-fea2-41a5-9754-36174bc30d41\") " pod="crc-storage/crc-storage-crc-ct65m" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.917710 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/bec48459-fea2-41a5-9754-36174bc30d41-crc-storage\") pod \"crc-storage-crc-ct65m\" (UID: \"bec48459-fea2-41a5-9754-36174bc30d41\") " pod="crc-storage/crc-storage-crc-ct65m" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.918216 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/bec48459-fea2-41a5-9754-36174bc30d41-node-mnt\") pod \"crc-storage-crc-ct65m\" (UID: \"bec48459-fea2-41a5-9754-36174bc30d41\") " 
pod="crc-storage/crc-storage-crc-ct65m" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.918688 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/bec48459-fea2-41a5-9754-36174bc30d41-crc-storage\") pod \"crc-storage-crc-ct65m\" (UID: \"bec48459-fea2-41a5-9754-36174bc30d41\") " pod="crc-storage/crc-storage-crc-ct65m" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.941914 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9m5v\" (UniqueName: \"kubernetes.io/projected/bec48459-fea2-41a5-9754-36174bc30d41-kube-api-access-j9m5v\") pod \"crc-storage-crc-ct65m\" (UID: \"bec48459-fea2-41a5-9754-36174bc30d41\") " pod="crc-storage/crc-storage-crc-ct65m" Jan 09 00:36:52 crc kubenswrapper[4945]: I0109 00:36:52.979056 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ct65m" Jan 09 00:36:53 crc kubenswrapper[4945]: I0109 00:36:53.202696 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-ct65m"] Jan 09 00:36:53 crc kubenswrapper[4945]: W0109 00:36:53.209093 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbec48459_fea2_41a5_9754_36174bc30d41.slice/crio-b5cf03be5560466b6a579d5605586be8a46218f02dd531ed3845811fe7e16e11 WatchSource:0}: Error finding container b5cf03be5560466b6a579d5605586be8a46218f02dd531ed3845811fe7e16e11: Status 404 returned error can't find the container with id b5cf03be5560466b6a579d5605586be8a46218f02dd531ed3845811fe7e16e11 Jan 09 00:36:53 crc kubenswrapper[4945]: I0109 00:36:53.959235 4945 generic.go:334] "Generic (PLEG): container finished" podID="bec48459-fea2-41a5-9754-36174bc30d41" containerID="357cf00822092fa289cd1579fe3d5c148365bd72cea001495f6f4b3a1317ec59" exitCode=0 Jan 09 00:36:53 crc kubenswrapper[4945]: I0109 00:36:53.959420 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-ct65m" event={"ID":"bec48459-fea2-41a5-9754-36174bc30d41","Type":"ContainerDied","Data":"357cf00822092fa289cd1579fe3d5c148365bd72cea001495f6f4b3a1317ec59"} Jan 09 00:36:53 crc kubenswrapper[4945]: I0109 00:36:53.959545 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-ct65m" event={"ID":"bec48459-fea2-41a5-9754-36174bc30d41","Type":"ContainerStarted","Data":"b5cf03be5560466b6a579d5605586be8a46218f02dd531ed3845811fe7e16e11"} Jan 09 00:36:54 crc kubenswrapper[4945]: I0109 00:36:54.010320 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="985802b0-7875-4556-b625-284f7253f956" path="/var/lib/kubelet/pods/985802b0-7875-4556-b625-284f7253f956/volumes" Jan 09 00:36:55 crc kubenswrapper[4945]: I0109 00:36:55.208441 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-ct65m" Jan 09 00:36:55 crc kubenswrapper[4945]: I0109 00:36:55.351947 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/bec48459-fea2-41a5-9754-36174bc30d41-node-mnt\") pod \"bec48459-fea2-41a5-9754-36174bc30d41\" (UID: \"bec48459-fea2-41a5-9754-36174bc30d41\") " Jan 09 00:36:55 crc kubenswrapper[4945]: I0109 00:36:55.352011 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/bec48459-fea2-41a5-9754-36174bc30d41-crc-storage\") pod \"bec48459-fea2-41a5-9754-36174bc30d41\" (UID: \"bec48459-fea2-41a5-9754-36174bc30d41\") " Jan 09 00:36:55 crc kubenswrapper[4945]: I0109 00:36:55.352059 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9m5v\" (UniqueName: \"kubernetes.io/projected/bec48459-fea2-41a5-9754-36174bc30d41-kube-api-access-j9m5v\") pod \"bec48459-fea2-41a5-9754-36174bc30d41\" (UID: \"bec48459-fea2-41a5-9754-36174bc30d41\") " Jan 09 00:36:55 crc kubenswrapper[4945]: I0109 00:36:55.352094 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bec48459-fea2-41a5-9754-36174bc30d41-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "bec48459-fea2-41a5-9754-36174bc30d41" (UID: "bec48459-fea2-41a5-9754-36174bc30d41"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 00:36:55 crc kubenswrapper[4945]: I0109 00:36:55.352330 4945 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/bec48459-fea2-41a5-9754-36174bc30d41-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 09 00:36:55 crc kubenswrapper[4945]: I0109 00:36:55.358716 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec48459-fea2-41a5-9754-36174bc30d41-kube-api-access-j9m5v" (OuterVolumeSpecName: "kube-api-access-j9m5v") pod "bec48459-fea2-41a5-9754-36174bc30d41" (UID: "bec48459-fea2-41a5-9754-36174bc30d41"). InnerVolumeSpecName "kube-api-access-j9m5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:36:55 crc kubenswrapper[4945]: I0109 00:36:55.378510 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec48459-fea2-41a5-9754-36174bc30d41-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "bec48459-fea2-41a5-9754-36174bc30d41" (UID: "bec48459-fea2-41a5-9754-36174bc30d41"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:36:55 crc kubenswrapper[4945]: I0109 00:36:55.453459 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9m5v\" (UniqueName: \"kubernetes.io/projected/bec48459-fea2-41a5-9754-36174bc30d41-kube-api-access-j9m5v\") on node \"crc\" DevicePath \"\"" Jan 09 00:36:55 crc kubenswrapper[4945]: I0109 00:36:55.453515 4945 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/bec48459-fea2-41a5-9754-36174bc30d41-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 09 00:36:55 crc kubenswrapper[4945]: I0109 00:36:55.973558 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-ct65m" event={"ID":"bec48459-fea2-41a5-9754-36174bc30d41","Type":"ContainerDied","Data":"b5cf03be5560466b6a579d5605586be8a46218f02dd531ed3845811fe7e16e11"} Jan 09 00:36:55 crc kubenswrapper[4945]: I0109 00:36:55.973985 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5cf03be5560466b6a579d5605586be8a46218f02dd531ed3845811fe7e16e11" Jan 09 00:36:55 crc kubenswrapper[4945]: I0109 00:36:55.974144 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ct65m" Jan 09 00:37:35 crc kubenswrapper[4945]: I0109 00:37:35.592360 4945 scope.go:117] "RemoveContainer" containerID="f0326f7a37d125abd9922bfa94bf9e314ae209d84d469f312aa9fc2eb56ace0d" Jan 09 00:38:13 crc kubenswrapper[4945]: I0109 00:38:13.578319 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:38:13 crc kubenswrapper[4945]: I0109 00:38:13.579064 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:38:43 crc kubenswrapper[4945]: I0109 00:38:43.577901 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:38:43 crc kubenswrapper[4945]: I0109 00:38:43.578580 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:39:13 crc kubenswrapper[4945]: I0109 00:39:13.578729 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:39:13 crc kubenswrapper[4945]: I0109 00:39:13.579384 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" 
podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:39:13 crc kubenswrapper[4945]: I0109 00:39:13.579531 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 09 00:39:13 crc kubenswrapper[4945]: I0109 00:39:13.580220 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 00:39:13 crc kubenswrapper[4945]: I0109 00:39:13.580294 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" gracePeriod=600 Jan 09 00:39:13 crc kubenswrapper[4945]: E0109 00:39:13.706951 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:39:14 crc kubenswrapper[4945]: I0109 00:39:14.019669 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" exitCode=0 Jan 09 00:39:14 crc kubenswrapper[4945]: I0109 00:39:14.019728 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5"} Jan 09 00:39:14 crc kubenswrapper[4945]: I0109 00:39:14.019774 4945 scope.go:117] "RemoveContainer" containerID="131175c614aae3d2144a82a8e5cc90358991986d393a3dd85e043f211f2e62f9" Jan 09 00:39:14 crc kubenswrapper[4945]: I0109 00:39:14.020326 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:39:14 crc kubenswrapper[4945]: E0109 00:39:14.020597 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:39:25 crc kubenswrapper[4945]: I0109 00:39:25.001349 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:39:25 crc kubenswrapper[4945]: E0109 00:39:25.002297 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:39:38 crc kubenswrapper[4945]: I0109 00:39:38.000201 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:39:38 crc kubenswrapper[4945]: E0109 00:39:38.001047 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:39:51 crc kubenswrapper[4945]: I0109 00:39:51.000611 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:39:51 crc kubenswrapper[4945]: E0109 00:39:51.001455 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:40:05 crc kubenswrapper[4945]: I0109 00:40:05.000709 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:40:05 crc kubenswrapper[4945]: E0109 00:40:05.001433 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:40:16 crc kubenswrapper[4945]: I0109 00:40:16.000450 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:40:16 crc kubenswrapper[4945]: E0109 00:40:16.001426 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:40:25 crc kubenswrapper[4945]: I0109 00:40:25.903161 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-6fgqt"] Jan 09 00:40:25 crc kubenswrapper[4945]: E0109 00:40:25.904069 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec48459-fea2-41a5-9754-36174bc30d41" containerName="storage" Jan 09 00:40:25 crc kubenswrapper[4945]: I0109 00:40:25.904083 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec48459-fea2-41a5-9754-36174bc30d41" containerName="storage" Jan 09 00:40:25 crc 
kubenswrapper[4945]: I0109 00:40:25.904219 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec48459-fea2-41a5-9754-36174bc30d41" containerName="storage" Jan 09 00:40:25 crc kubenswrapper[4945]: I0109 00:40:25.904978 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" Jan 09 00:40:25 crc kubenswrapper[4945]: I0109 00:40:25.908644 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 09 00:40:25 crc kubenswrapper[4945]: I0109 00:40:25.908916 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 09 00:40:25 crc kubenswrapper[4945]: I0109 00:40:25.909191 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-prtd7" Jan 09 00:40:25 crc kubenswrapper[4945]: I0109 00:40:25.909409 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 09 00:40:25 crc kubenswrapper[4945]: I0109 00:40:25.910035 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 09 00:40:25 crc kubenswrapper[4945]: I0109 00:40:25.936054 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbd55f1d-058a-4825-92fe-7af6e0b8df70-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-6fgqt\" (UID: \"bbd55f1d-058a-4825-92fe-7af6e0b8df70\") " pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" Jan 09 00:40:25 crc kubenswrapper[4945]: I0109 00:40:25.936121 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbd55f1d-058a-4825-92fe-7af6e0b8df70-config\") pod \"dnsmasq-dns-5d7b5456f5-6fgqt\" (UID: \"bbd55f1d-058a-4825-92fe-7af6e0b8df70\") " pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" Jan 09 00:40:25 crc kubenswrapper[4945]: I0109 00:40:25.936148 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22cp8\" (UniqueName: \"kubernetes.io/projected/bbd55f1d-058a-4825-92fe-7af6e0b8df70-kube-api-access-22cp8\") pod \"dnsmasq-dns-5d7b5456f5-6fgqt\" (UID: \"bbd55f1d-058a-4825-92fe-7af6e0b8df70\") " pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" Jan 09 00:40:25 crc kubenswrapper[4945]: I0109 00:40:25.941924 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-6fgqt"] Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.037805 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbd55f1d-058a-4825-92fe-7af6e0b8df70-config\") pod \"dnsmasq-dns-5d7b5456f5-6fgqt\" (UID: \"bbd55f1d-058a-4825-92fe-7af6e0b8df70\") " pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.037875 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22cp8\" (UniqueName: \"kubernetes.io/projected/bbd55f1d-058a-4825-92fe-7af6e0b8df70-kube-api-access-22cp8\") pod \"dnsmasq-dns-5d7b5456f5-6fgqt\" (UID: \"bbd55f1d-058a-4825-92fe-7af6e0b8df70\") " pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.037955 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/bbd55f1d-058a-4825-92fe-7af6e0b8df70-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-6fgqt\" (UID: \"bbd55f1d-058a-4825-92fe-7af6e0b8df70\") " pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.038964 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbd55f1d-058a-4825-92fe-7af6e0b8df70-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-6fgqt\" (UID: \"bbd55f1d-058a-4825-92fe-7af6e0b8df70\") " pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.038985 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbd55f1d-058a-4825-92fe-7af6e0b8df70-config\") pod \"dnsmasq-dns-5d7b5456f5-6fgqt\" (UID: \"bbd55f1d-058a-4825-92fe-7af6e0b8df70\") " pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.062223 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22cp8\" (UniqueName: \"kubernetes.io/projected/bbd55f1d-058a-4825-92fe-7af6e0b8df70-kube-api-access-22cp8\") pod \"dnsmasq-dns-5d7b5456f5-6fgqt\" (UID: \"bbd55f1d-058a-4825-92fe-7af6e0b8df70\") " pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.162018 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-8rcpg"] Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.163494 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.178297 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-8rcpg"] Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.227955 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.242816 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2450d5ee-cd3f-4201-a1a5-8c2c74083937-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-8rcpg\" (UID: \"2450d5ee-cd3f-4201-a1a5-8c2c74083937\") " pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.242937 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2450d5ee-cd3f-4201-a1a5-8c2c74083937-config\") pod \"dnsmasq-dns-98ddfc8f-8rcpg\" (UID: \"2450d5ee-cd3f-4201-a1a5-8c2c74083937\") " pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.243019 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlbxm\" (UniqueName: \"kubernetes.io/projected/2450d5ee-cd3f-4201-a1a5-8c2c74083937-kube-api-access-nlbxm\") pod \"dnsmasq-dns-98ddfc8f-8rcpg\" (UID: \"2450d5ee-cd3f-4201-a1a5-8c2c74083937\") " pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.344018 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2450d5ee-cd3f-4201-a1a5-8c2c74083937-config\") pod \"dnsmasq-dns-98ddfc8f-8rcpg\" (UID: \"2450d5ee-cd3f-4201-a1a5-8c2c74083937\") " pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.344116 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlbxm\" (UniqueName: \"kubernetes.io/projected/2450d5ee-cd3f-4201-a1a5-8c2c74083937-kube-api-access-nlbxm\") pod \"dnsmasq-dns-98ddfc8f-8rcpg\" (UID: \"2450d5ee-cd3f-4201-a1a5-8c2c74083937\") " pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.344168 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2450d5ee-cd3f-4201-a1a5-8c2c74083937-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-8rcpg\" (UID: \"2450d5ee-cd3f-4201-a1a5-8c2c74083937\") " pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.345237 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2450d5ee-cd3f-4201-a1a5-8c2c74083937-config\") pod \"dnsmasq-dns-98ddfc8f-8rcpg\" (UID: \"2450d5ee-cd3f-4201-a1a5-8c2c74083937\") " pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.345304 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2450d5ee-cd3f-4201-a1a5-8c2c74083937-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-8rcpg\" (UID: \"2450d5ee-cd3f-4201-a1a5-8c2c74083937\") " pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.380099 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlbxm\" (UniqueName: \"kubernetes.io/projected/2450d5ee-cd3f-4201-a1a5-8c2c74083937-kube-api-access-nlbxm\") pod \"dnsmasq-dns-98ddfc8f-8rcpg\" (UID: \"2450d5ee-cd3f-4201-a1a5-8c2c74083937\") " pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" Jan 09 00:40:26 crc 
kubenswrapper[4945]: I0109 00:40:26.485347 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg"
Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.749116 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-6fgqt"]
Jan 09 00:40:26 crc kubenswrapper[4945]: I0109 00:40:26.776474 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-8rcpg"]
Jan 09 00:40:26 crc kubenswrapper[4945]: W0109 00:40:26.780561 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2450d5ee_cd3f_4201_a1a5_8c2c74083937.slice/crio-e553c0b5325a3f959fde289abb13a8355324f4d7da5a099be0426c074ef5cc48 WatchSource:0}: Error finding container e553c0b5325a3f959fde289abb13a8355324f4d7da5a099be0426c074ef5cc48: Status 404 returned error can't find the container with id e553c0b5325a3f959fde289abb13a8355324f4d7da5a099be0426c074ef5cc48
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.006847 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.008321 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.011746 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.012163 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.013657 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-x2c52"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.013774 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.013868 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.023927 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.153718 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.153775 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/91ecf3c5-c5ba-499a-a20b-bd1feee567da-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.153816 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-da15286c-6b28-4837-a475-a25466b59656\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da15286c-6b28-4837-a475-a25466b59656\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.154064 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/91ecf3c5-c5ba-499a-a20b-bd1feee567da-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.154334 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.154493 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/91ecf3c5-c5ba-499a-a20b-bd1feee567da-pod-info\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.154597 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxzmm\" (UniqueName: \"kubernetes.io/projected/91ecf3c5-c5ba-499a-a20b-bd1feee567da-kube-api-access-vxzmm\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.154637 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.154687 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/91ecf3c5-c5ba-499a-a20b-bd1feee567da-server-conf\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.270111 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/91ecf3c5-c5ba-499a-a20b-bd1feee567da-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.270184 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.270214 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/91ecf3c5-c5ba-499a-a20b-bd1feee567da-pod-info\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.270243 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxzmm\" (UniqueName: \"kubernetes.io/projected/91ecf3c5-c5ba-499a-a20b-bd1feee567da-kube-api-access-vxzmm\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.270259 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.270276 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/91ecf3c5-c5ba-499a-a20b-bd1feee567da-server-conf\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.270298 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.270325 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/91ecf3c5-c5ba-499a-a20b-bd1feee567da-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.270353 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-da15286c-6b28-4837-a475-a25466b59656\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da15286c-6b28-4837-a475-a25466b59656\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.270938 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.271290 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.271765 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/91ecf3c5-c5ba-499a-a20b-bd1feee567da-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.272065 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/91ecf3c5-c5ba-499a-a20b-bd1feee567da-server-conf\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0"
\"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.273294 4945 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.273328 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-da15286c-6b28-4837-a475-a25466b59656\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da15286c-6b28-4837-a475-a25466b59656\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/964deba7dcfd6c062ba92bde00b7cb39634e1091b5c36c017c585df16cd8a8df/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.275767 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.275762 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/91ecf3c5-c5ba-499a-a20b-bd1feee567da-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.283725 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/91ecf3c5-c5ba-499a-a20b-bd1feee567da-pod-info\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.290873 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxzmm\" (UniqueName: \"kubernetes.io/projected/91ecf3c5-c5ba-499a-a20b-bd1feee567da-kube-api-access-vxzmm\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.308449 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-da15286c-6b28-4837-a475-a25466b59656\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da15286c-6b28-4837-a475-a25466b59656\") pod \"rabbitmq-server-0\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") " pod="openstack/rabbitmq-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.349819 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.351085 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.354165 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.354304 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.354506 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xbpbg" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.354884 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.355848 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.369590 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.428216 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.511152 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.511472 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.511697 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.511846 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.511912 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.512087 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.512302 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-65a3a386-336e-4b54-abec-f4916a516c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65a3a386-336e-4b54-abec-f4916a516c42\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.512348 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.512480 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62qtl\" (UniqueName: \"kubernetes.io/projected/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-kube-api-access-62qtl\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.537592 4945 generic.go:334] "Generic (PLEG): container finished" podID="bbd55f1d-058a-4825-92fe-7af6e0b8df70" containerID="0df82a5cdd098c0ced3a2d62e4c6bc66c2c1a922554fc4119ebf728358a23566" exitCode=0 Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.537673 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" event={"ID":"bbd55f1d-058a-4825-92fe-7af6e0b8df70","Type":"ContainerDied","Data":"0df82a5cdd098c0ced3a2d62e4c6bc66c2c1a922554fc4119ebf728358a23566"} Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.537708 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" event={"ID":"bbd55f1d-058a-4825-92fe-7af6e0b8df70","Type":"ContainerStarted","Data":"08b95037d6a2eb7db43efb107839a85c3cc6ab0389a4644407c466ab981d4b0d"} Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.547135 4945 generic.go:334] "Generic (PLEG): container finished" podID="2450d5ee-cd3f-4201-a1a5-8c2c74083937" containerID="af9985d111de42164a39fd4e5dd4e1702458d395cdf3d74e7b400312ba64ea21" exitCode=0 Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.547179 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" event={"ID":"2450d5ee-cd3f-4201-a1a5-8c2c74083937","Type":"ContainerDied","Data":"af9985d111de42164a39fd4e5dd4e1702458d395cdf3d74e7b400312ba64ea21"} Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.547214 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" event={"ID":"2450d5ee-cd3f-4201-a1a5-8c2c74083937","Type":"ContainerStarted","Data":"e553c0b5325a3f959fde289abb13a8355324f4d7da5a099be0426c074ef5cc48"} Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.613807 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.613868 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-65a3a386-336e-4b54-abec-f4916a516c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65a3a386-336e-4b54-abec-f4916a516c42\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.613896 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.613941 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62qtl\" (UniqueName: \"kubernetes.io/projected/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-kube-api-access-62qtl\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.613965 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.614006 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.614036 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.614065 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.614086 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.614431 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: 
I0109 00:40:27.614515 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.615313 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.615762 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.616607 4945 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.616652 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-65a3a386-336e-4b54-abec-f4916a516c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65a3a386-336e-4b54-abec-f4916a516c42\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1d4b16e78021504bfff2e586d3387de3c3cb8321ff7b40542bbaf4c3df837d83/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.619901 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.620863 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.627616 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.640463 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62qtl\" (UniqueName: \"kubernetes.io/projected/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-kube-api-access-62qtl\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.650640 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-65a3a386-336e-4b54-abec-f4916a516c42\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65a3a386-336e-4b54-abec-f4916a516c42\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.722614 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:40:27 crc kubenswrapper[4945]: I0109 00:40:27.899036 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.129946 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.131190 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.133379 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.134956 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.135084 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5v584" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.135084 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.143346 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.151459 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.187493 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 09 00:40:28 crc kubenswrapper[4945]: W0109 00:40:28.198185 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2886ea2_b692_46ae_896d_3a5ff19ae5f8.slice/crio-221afdbbcac6cf93eba04adbc44fa8cd3df95b34d273e7144c4e6a51b771c0f1 WatchSource:0}: Error finding container 221afdbbcac6cf93eba04adbc44fa8cd3df95b34d273e7144c4e6a51b771c0f1: Status 404 returned error can't find the container with id 221afdbbcac6cf93eba04adbc44fa8cd3df95b34d273e7144c4e6a51b771c0f1 Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.226819 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs4xv\" (UniqueName: \"kubernetes.io/projected/65eea646-4dc8-44a1-b394-3d4ce08867a6-kube-api-access-xs4xv\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.227140 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/65eea646-4dc8-44a1-b394-3d4ce08867a6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.227261 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-aa364fec-c7c4-4a0d-b285-b3c796381134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa364fec-c7c4-4a0d-b285-b3c796381134\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.227409 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/65eea646-4dc8-44a1-b394-3d4ce08867a6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.227529 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65eea646-4dc8-44a1-b394-3d4ce08867a6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.227663 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65eea646-4dc8-44a1-b394-3d4ce08867a6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.227771 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/65eea646-4dc8-44a1-b394-3d4ce08867a6-kolla-config\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.227883 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/65eea646-4dc8-44a1-b394-3d4ce08867a6-config-data-default\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.329434 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/65eea646-4dc8-44a1-b394-3d4ce08867a6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.329838 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-aa364fec-c7c4-4a0d-b285-b3c796381134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa364fec-c7c4-4a0d-b285-b3c796381134\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.330014 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/65eea646-4dc8-44a1-b394-3d4ce08867a6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.330179 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/65eea646-4dc8-44a1-b394-3d4ce08867a6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.330850 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/65eea646-4dc8-44a1-b394-3d4ce08867a6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.332243 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65eea646-4dc8-44a1-b394-3d4ce08867a6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.332601 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65eea646-4dc8-44a1-b394-3d4ce08867a6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.332713 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/65eea646-4dc8-44a1-b394-3d4ce08867a6-kolla-config\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.332841 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/65eea646-4dc8-44a1-b394-3d4ce08867a6-config-data-default\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.333646 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/65eea646-4dc8-44a1-b394-3d4ce08867a6-kolla-config\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.333920 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/65eea646-4dc8-44a1-b394-3d4ce08867a6-config-data-default\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.334246 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs4xv\" (UniqueName: \"kubernetes.io/projected/65eea646-4dc8-44a1-b394-3d4ce08867a6-kube-api-access-xs4xv\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.335692 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65eea646-4dc8-44a1-b394-3d4ce08867a6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 
00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.338979 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/65eea646-4dc8-44a1-b394-3d4ce08867a6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.341845 4945 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.341883 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-aa364fec-c7c4-4a0d-b285-b3c796381134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa364fec-c7c4-4a0d-b285-b3c796381134\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6a7e8636e5f867a7104cbe1a916ab77aa84979ce88ca62fd3de750ad1f99c042/globalmount\"" pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.356154 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs4xv\" (UniqueName: \"kubernetes.io/projected/65eea646-4dc8-44a1-b394-3d4ce08867a6-kube-api-access-xs4xv\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.369224 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-aa364fec-c7c4-4a0d-b285-b3c796381134\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa364fec-c7c4-4a0d-b285-b3c796381134\") pod \"openstack-galera-0\" (UID: \"65eea646-4dc8-44a1-b394-3d4ce08867a6\") " pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.446908 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.555051 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2886ea2-b692-46ae-896d-3a5ff19ae5f8","Type":"ContainerStarted","Data":"221afdbbcac6cf93eba04adbc44fa8cd3df95b34d273e7144c4e6a51b771c0f1"} Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.558192 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" event={"ID":"bbd55f1d-058a-4825-92fe-7af6e0b8df70","Type":"ContainerStarted","Data":"10e6c2f67fa803361191e3296873221abda51721ee779c9f4017cab8ec2723ec"} Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.558588 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.559483 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"91ecf3c5-c5ba-499a-a20b-bd1feee567da","Type":"ContainerStarted","Data":"28f65f590835e437a5ac32638d0dfe876b0eee8f489b7aa26baee023a1a15ef8"} Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.561817 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" event={"ID":"2450d5ee-cd3f-4201-a1a5-8c2c74083937","Type":"ContainerStarted","Data":"f4c0d7c5c10267e91aac60b34360601ed0e22f954dd8c58c0473ebce5b995994"} Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.562058 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.580886 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" podStartSLOduration=3.580861534 podStartE2EDuration="3.580861534s" podCreationTimestamp="2026-01-09 00:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:40:28.57865079 +0000 UTC m=+5098.889809736" watchObservedRunningTime="2026-01-09 00:40:28.580861534 +0000 UTC m=+5098.892020480" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.606087 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" podStartSLOduration=2.606062193 podStartE2EDuration="2.606062193s" podCreationTimestamp="2026-01-09 00:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:40:28.600554868 +0000 UTC m=+5098.911713814" watchObservedRunningTime="2026-01-09 00:40:28.606062193 +0000 UTC m=+5098.917221129" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.718923 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.720124 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.723911 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-d4s2g" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.724148 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.734257 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.841785 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79652c70-3275-4435-ba88-786cb8beaf4e-config-data\") pod \"memcached-0\" (UID: \"79652c70-3275-4435-ba88-786cb8beaf4e\") " pod="openstack/memcached-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.841902 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/79652c70-3275-4435-ba88-786cb8beaf4e-kolla-config\") pod \"memcached-0\" (UID: \"79652c70-3275-4435-ba88-786cb8beaf4e\") " pod="openstack/memcached-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.841970 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7hdk\" (UniqueName: \"kubernetes.io/projected/79652c70-3275-4435-ba88-786cb8beaf4e-kube-api-access-z7hdk\") pod \"memcached-0\" (UID: \"79652c70-3275-4435-ba88-786cb8beaf4e\") " pod="openstack/memcached-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.943353 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7hdk\" (UniqueName: \"kubernetes.io/projected/79652c70-3275-4435-ba88-786cb8beaf4e-kube-api-access-z7hdk\") pod \"memcached-0\" (UID: \"79652c70-3275-4435-ba88-786cb8beaf4e\") " pod="openstack/memcached-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.943471 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79652c70-3275-4435-ba88-786cb8beaf4e-config-data\") pod \"memcached-0\" (UID: \"79652c70-3275-4435-ba88-786cb8beaf4e\") " pod="openstack/memcached-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.943510 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/79652c70-3275-4435-ba88-786cb8beaf4e-kolla-config\") pod \"memcached-0\" (UID: \"79652c70-3275-4435-ba88-786cb8beaf4e\") " pod="openstack/memcached-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.944342 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/79652c70-3275-4435-ba88-786cb8beaf4e-kolla-config\") pod \"memcached-0\" (UID: \"79652c70-3275-4435-ba88-786cb8beaf4e\") " pod="openstack/memcached-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.944669 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79652c70-3275-4435-ba88-786cb8beaf4e-config-data\") pod \"memcached-0\" (UID: \"79652c70-3275-4435-ba88-786cb8beaf4e\") " pod="openstack/memcached-0" Jan 09 00:40:28 crc kubenswrapper[4945]: I0109 00:40:28.960420 4945 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-z7hdk\" (UniqueName: \"kubernetes.io/projected/79652c70-3275-4435-ba88-786cb8beaf4e-kube-api-access-z7hdk\") pod \"memcached-0\" (UID: \"79652c70-3275-4435-ba88-786cb8beaf4e\") " pod="openstack/memcached-0" Jan 09 00:40:29 crc kubenswrapper[4945]: I0109 00:40:29.046691 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 09 00:40:29 crc kubenswrapper[4945]: I0109 00:40:29.094049 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 09 00:40:29 crc kubenswrapper[4945]: W0109 00:40:29.099263 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65eea646_4dc8_44a1_b394_3d4ce08867a6.slice/crio-5dba50d38dbbf3aec1de832d2cd2fc8eddbec183494a0e8d8f86da0eada6c39e WatchSource:0}: Error finding container 5dba50d38dbbf3aec1de832d2cd2fc8eddbec183494a0e8d8f86da0eada6c39e: Status 404 returned error can't find the container with id 5dba50d38dbbf3aec1de832d2cd2fc8eddbec183494a0e8d8f86da0eada6c39e Jan 09 00:40:29 crc kubenswrapper[4945]: I0109 00:40:29.469116 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 09 00:40:29 crc kubenswrapper[4945]: W0109 00:40:29.476621 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79652c70_3275_4435_ba88_786cb8beaf4e.slice/crio-9750baf3335231c727b50eba8c5c182d72bc5ec600dd85efe6aa05de96b8f361 WatchSource:0}: Error finding container 9750baf3335231c727b50eba8c5c182d72bc5ec600dd85efe6aa05de96b8f361: Status 404 returned error can't find the container with id 9750baf3335231c727b50eba8c5c182d72bc5ec600dd85efe6aa05de96b8f361 Jan 09 00:40:29 crc kubenswrapper[4945]: I0109 00:40:29.568818 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"65eea646-4dc8-44a1-b394-3d4ce08867a6","Type":"ContainerStarted","Data":"5dba50d38dbbf3aec1de832d2cd2fc8eddbec183494a0e8d8f86da0eada6c39e"} Jan 09 00:40:29 crc kubenswrapper[4945]: I0109 00:40:29.570568 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"79652c70-3275-4435-ba88-786cb8beaf4e","Type":"ContainerStarted","Data":"9750baf3335231c727b50eba8c5c182d72bc5ec600dd85efe6aa05de96b8f361"} Jan 09 00:40:29 crc kubenswrapper[4945]: I0109 00:40:29.884707 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 09 00:40:29 crc kubenswrapper[4945]: I0109 00:40:29.885982 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:29 crc kubenswrapper[4945]: I0109 00:40:29.888820 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 09 00:40:29 crc kubenswrapper[4945]: I0109 00:40:29.889327 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 09 00:40:29 crc kubenswrapper[4945]: I0109 00:40:29.889349 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 09 00:40:29 crc kubenswrapper[4945]: I0109 00:40:29.889899 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-kq2t7" Jan 09 00:40:29 crc kubenswrapper[4945]: I0109 00:40:29.899628 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.007764 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:40:30 crc kubenswrapper[4945]: E0109 00:40:30.008154 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.058935 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a3f7c56-bed3-4e26-93b3-17627d2066bb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.059040 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5a3f7c56-bed3-4e26-93b3-17627d2066bb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.059101 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a3f7c56-bed3-4e26-93b3-17627d2066bb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.059131 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5a3f7c56-bed3-4e26-93b3-17627d2066bb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.059165 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnb7z\" (UniqueName: \"kubernetes.io/projected/5a3f7c56-bed3-4e26-93b3-17627d2066bb-kube-api-access-qnb7z\") pod \"openstack-cell1-galera-0\" (UID: 
\"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.059190 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a3f7c56-bed3-4e26-93b3-17627d2066bb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.059211 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5a3f7c56-bed3-4e26-93b3-17627d2066bb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.059242 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cdeb9295-ea8c-45a4-8d21-80f6fcf76483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cdeb9295-ea8c-45a4-8d21-80f6fcf76483\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.161311 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a3f7c56-bed3-4e26-93b3-17627d2066bb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.161697 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5a3f7c56-bed3-4e26-93b3-17627d2066bb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.161841 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a3f7c56-bed3-4e26-93b3-17627d2066bb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.161892 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5a3f7c56-bed3-4e26-93b3-17627d2066bb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.161978 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnb7z\" (UniqueName: \"kubernetes.io/projected/5a3f7c56-bed3-4e26-93b3-17627d2066bb-kube-api-access-qnb7z\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.162068 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a3f7c56-bed3-4e26-93b3-17627d2066bb-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.162100 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5a3f7c56-bed3-4e26-93b3-17627d2066bb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.162175 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cdeb9295-ea8c-45a4-8d21-80f6fcf76483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cdeb9295-ea8c-45a4-8d21-80f6fcf76483\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.163121 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5a3f7c56-bed3-4e26-93b3-17627d2066bb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.169204 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.169374 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.169406 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a3f7c56-bed3-4e26-93b3-17627d2066bb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.169549 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.172552 4945 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.172587 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cdeb9295-ea8c-45a4-8d21-80f6fcf76483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cdeb9295-ea8c-45a4-8d21-80f6fcf76483\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/beed474a0dfb3e2f8b563a33f7791caa6e6c520d5ea6b7e287342c1113d259d2/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.173661 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5a3f7c56-bed3-4e26-93b3-17627d2066bb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.174103 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5a3f7c56-bed3-4e26-93b3-17627d2066bb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.174631 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a3f7c56-bed3-4e26-93b3-17627d2066bb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.176284 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a3f7c56-bed3-4e26-93b3-17627d2066bb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.180871 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnb7z\" (UniqueName: \"kubernetes.io/projected/5a3f7c56-bed3-4e26-93b3-17627d2066bb-kube-api-access-qnb7z\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.200619 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cdeb9295-ea8c-45a4-8d21-80f6fcf76483\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cdeb9295-ea8c-45a4-8d21-80f6fcf76483\") pod \"openstack-cell1-galera-0\" (UID: \"5a3f7c56-bed3-4e26-93b3-17627d2066bb\") " pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.252213 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-kq2t7" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.261376 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.581012 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"65eea646-4dc8-44a1-b394-3d4ce08867a6","Type":"ContainerStarted","Data":"9ce566712a87dec3c4abda0ac956d01c44995d97b8a2e8c16c3ad789510ac010"} Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.582564 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2886ea2-b692-46ae-896d-3a5ff19ae5f8","Type":"ContainerStarted","Data":"6f866fbf59163f047d6af2c60285cfd2dd4e5c9e4ffb889e7c333d6a9b0e4c51"} Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.586490 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"79652c70-3275-4435-ba88-786cb8beaf4e","Type":"ContainerStarted","Data":"ccec7aa83ce3993d1c1d2f17890dffdea3e286655de82e5cfda32a7fb6fd2a16"} Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.586649 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.588271 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"91ecf3c5-c5ba-499a-a20b-bd1feee567da","Type":"ContainerStarted","Data":"6e36e1eb7dd72c9b36dc3686533488b757d4e35b90b2bdcec8cb9fed88bf4a20"} Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.684038 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.684014827 podStartE2EDuration="2.684014827s" podCreationTimestamp="2026-01-09 00:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:40:30.677238041 +0000 UTC m=+5100.988396987" watchObservedRunningTime="2026-01-09 00:40:30.684014827 +0000 UTC m=+5100.995173773" Jan 09 00:40:30 crc kubenswrapper[4945]: I0109 00:40:30.706590 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 09 00:40:31 crc kubenswrapper[4945]: I0109 00:40:31.596728 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5a3f7c56-bed3-4e26-93b3-17627d2066bb","Type":"ContainerStarted","Data":"e152d835a767910aa7401d61302b0692f3ae4c1c199535dcf3f13ce559a1c556"} Jan 09 00:40:31 crc kubenswrapper[4945]: I0109 00:40:31.597333 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5a3f7c56-bed3-4e26-93b3-17627d2066bb","Type":"ContainerStarted","Data":"9b22f4ae5071ed406ba0fd357450ace967cacfaf7a3b5530c72d2ab5751cef92"} Jan 09 00:40:33 crc kubenswrapper[4945]: I0109 00:40:33.616748 4945 generic.go:334] "Generic (PLEG): container finished" podID="65eea646-4dc8-44a1-b394-3d4ce08867a6" containerID="9ce566712a87dec3c4abda0ac956d01c44995d97b8a2e8c16c3ad789510ac010" exitCode=0 Jan 09 00:40:33 crc kubenswrapper[4945]: I0109 00:40:33.616841 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"65eea646-4dc8-44a1-b394-3d4ce08867a6","Type":"ContainerDied","Data":"9ce566712a87dec3c4abda0ac956d01c44995d97b8a2e8c16c3ad789510ac010"} Jan 09 00:40:34 crc kubenswrapper[4945]: I0109 00:40:34.049058 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 09 00:40:34 crc kubenswrapper[4945]: I0109 
00:40:34.628792 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"65eea646-4dc8-44a1-b394-3d4ce08867a6","Type":"ContainerStarted","Data":"2dd93de7a8137cfb75c05a70f3fc7436b7953202a9dd77a22edb2ccd22f74cb8"} Jan 09 00:40:34 crc kubenswrapper[4945]: I0109 00:40:34.661547 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.661517935 podStartE2EDuration="7.661517935s" podCreationTimestamp="2026-01-09 00:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:40:34.656067841 +0000 UTC m=+5104.967226787" watchObservedRunningTime="2026-01-09 00:40:34.661517935 +0000 UTC m=+5104.972676881" Jan 09 00:40:35 crc kubenswrapper[4945]: I0109 00:40:35.642084 4945 generic.go:334] "Generic (PLEG): container finished" podID="5a3f7c56-bed3-4e26-93b3-17627d2066bb" containerID="e152d835a767910aa7401d61302b0692f3ae4c1c199535dcf3f13ce559a1c556" exitCode=0 Jan 09 00:40:35 crc kubenswrapper[4945]: I0109 00:40:35.642285 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5a3f7c56-bed3-4e26-93b3-17627d2066bb","Type":"ContainerDied","Data":"e152d835a767910aa7401d61302b0692f3ae4c1c199535dcf3f13ce559a1c556"} Jan 09 00:40:36 crc kubenswrapper[4945]: I0109 00:40:36.230331 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" Jan 09 00:40:36 crc kubenswrapper[4945]: I0109 00:40:36.488055 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" Jan 09 00:40:36 crc kubenswrapper[4945]: I0109 00:40:36.539203 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-6fgqt"] Jan 09 00:40:36 crc kubenswrapper[4945]: I0109 00:40:36.653285 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" podUID="bbd55f1d-058a-4825-92fe-7af6e0b8df70" containerName="dnsmasq-dns" containerID="cri-o://10e6c2f67fa803361191e3296873221abda51721ee779c9f4017cab8ec2723ec" gracePeriod=10 Jan 09 00:40:36 crc kubenswrapper[4945]: I0109 00:40:36.653674 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5a3f7c56-bed3-4e26-93b3-17627d2066bb","Type":"ContainerStarted","Data":"48ef914446ce1d55d3540313d5d79d5b619498d4e5ef12751e3e929bf8640edb"} Jan 09 00:40:36 crc kubenswrapper[4945]: I0109 00:40:36.688957 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.688931378 podStartE2EDuration="8.688931378s" podCreationTimestamp="2026-01-09 00:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:40:36.681749012 +0000 UTC m=+5106.992907988" watchObservedRunningTime="2026-01-09 00:40:36.688931378 +0000 UTC m=+5107.000090334" Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.133743 4945 util.go:48] "No ready sandbox for pod can be found. 
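Editor's note: the "Observed pod startup duration" records make their own arithmetic visible — podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp (for openstack-galera-0: 00:40:34.661517935 − 00:40:27 = 7.661517935 s). A quick check of that subtraction, with the timestamps copied from the entries above (Python's datetime only carries microseconds, so the nanosecond tail is trimmed):

from datetime import datetime, timezone
import re

def ts(s: str) -> datetime:
    """Parse a timestamp like '2026-01-09 00:40:34.661517935 +0000 UTC'."""
    s = s.replace(" UTC", "")
    s = re.sub(r"(\.\d{6})\d+", r"\1", s)  # trim 9-digit fraction to 6 digits
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f %z")

created = datetime(2026, 1, 9, 0, 40, 27, tzinfo=timezone.utc)  # podCreationTimestamp
running = ts("2026-01-09 00:40:34.661517935 +0000 UTC")         # watchObservedRunningTime
print((running - created).total_seconds())  # 7.661517 at microsecond precision, matching podStartSLOduration=7.661517935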
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.167217 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbd55f1d-058a-4825-92fe-7af6e0b8df70-config\") pod \"bbd55f1d-058a-4825-92fe-7af6e0b8df70\" (UID: \"bbd55f1d-058a-4825-92fe-7af6e0b8df70\") " Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.167706 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbd55f1d-058a-4825-92fe-7af6e0b8df70-dns-svc\") pod \"bbd55f1d-058a-4825-92fe-7af6e0b8df70\" (UID: \"bbd55f1d-058a-4825-92fe-7af6e0b8df70\") " Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.167858 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22cp8\" (UniqueName: \"kubernetes.io/projected/bbd55f1d-058a-4825-92fe-7af6e0b8df70-kube-api-access-22cp8\") pod \"bbd55f1d-058a-4825-92fe-7af6e0b8df70\" (UID: \"bbd55f1d-058a-4825-92fe-7af6e0b8df70\") " Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.189664 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbd55f1d-058a-4825-92fe-7af6e0b8df70-kube-api-access-22cp8" (OuterVolumeSpecName: "kube-api-access-22cp8") pod "bbd55f1d-058a-4825-92fe-7af6e0b8df70" (UID: "bbd55f1d-058a-4825-92fe-7af6e0b8df70"). InnerVolumeSpecName "kube-api-access-22cp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.210633 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbd55f1d-058a-4825-92fe-7af6e0b8df70-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bbd55f1d-058a-4825-92fe-7af6e0b8df70" (UID: "bbd55f1d-058a-4825-92fe-7af6e0b8df70"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.223054 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbd55f1d-058a-4825-92fe-7af6e0b8df70-config" (OuterVolumeSpecName: "config") pod "bbd55f1d-058a-4825-92fe-7af6e0b8df70" (UID: "bbd55f1d-058a-4825-92fe-7af6e0b8df70"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.270748 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22cp8\" (UniqueName: \"kubernetes.io/projected/bbd55f1d-058a-4825-92fe-7af6e0b8df70-kube-api-access-22cp8\") on node \"crc\" DevicePath \"\"" Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.270794 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbd55f1d-058a-4825-92fe-7af6e0b8df70-config\") on node \"crc\" DevicePath \"\"" Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.270803 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbd55f1d-058a-4825-92fe-7af6e0b8df70-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.660938 4945 generic.go:334] "Generic (PLEG): container finished" podID="bbd55f1d-058a-4825-92fe-7af6e0b8df70" containerID="10e6c2f67fa803361191e3296873221abda51721ee779c9f4017cab8ec2723ec" exitCode=0 Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.660985 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" event={"ID":"bbd55f1d-058a-4825-92fe-7af6e0b8df70","Type":"ContainerDied","Data":"10e6c2f67fa803361191e3296873221abda51721ee779c9f4017cab8ec2723ec"} Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.661036 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.661066 4945 scope.go:117] "RemoveContainer" containerID="10e6c2f67fa803361191e3296873221abda51721ee779c9f4017cab8ec2723ec" Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.661047 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-6fgqt" event={"ID":"bbd55f1d-058a-4825-92fe-7af6e0b8df70","Type":"ContainerDied","Data":"08b95037d6a2eb7db43efb107839a85c3cc6ab0389a4644407c466ab981d4b0d"} Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.678473 4945 scope.go:117] "RemoveContainer" containerID="0df82a5cdd098c0ced3a2d62e4c6bc66c2c1a922554fc4119ebf728358a23566" Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.696134 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-6fgqt"] Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.700568 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-6fgqt"] Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.704907 4945 scope.go:117] "RemoveContainer" containerID="10e6c2f67fa803361191e3296873221abda51721ee779c9f4017cab8ec2723ec" Jan 09 00:40:37 crc kubenswrapper[4945]: E0109 00:40:37.705334 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10e6c2f67fa803361191e3296873221abda51721ee779c9f4017cab8ec2723ec\": container with ID starting with 10e6c2f67fa803361191e3296873221abda51721ee779c9f4017cab8ec2723ec not found: ID does not exist" containerID="10e6c2f67fa803361191e3296873221abda51721ee779c9f4017cab8ec2723ec" Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.705372 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e6c2f67fa803361191e3296873221abda51721ee779c9f4017cab8ec2723ec"} err="failed to get container status 
\"10e6c2f67fa803361191e3296873221abda51721ee779c9f4017cab8ec2723ec\": rpc error: code = NotFound desc = could not find container \"10e6c2f67fa803361191e3296873221abda51721ee779c9f4017cab8ec2723ec\": container with ID starting with 10e6c2f67fa803361191e3296873221abda51721ee779c9f4017cab8ec2723ec not found: ID does not exist" Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.705391 4945 scope.go:117] "RemoveContainer" containerID="0df82a5cdd098c0ced3a2d62e4c6bc66c2c1a922554fc4119ebf728358a23566" Jan 09 00:40:37 crc kubenswrapper[4945]: E0109 00:40:37.705673 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0df82a5cdd098c0ced3a2d62e4c6bc66c2c1a922554fc4119ebf728358a23566\": container with ID starting with 0df82a5cdd098c0ced3a2d62e4c6bc66c2c1a922554fc4119ebf728358a23566 not found: ID does not exist" containerID="0df82a5cdd098c0ced3a2d62e4c6bc66c2c1a922554fc4119ebf728358a23566" Jan 09 00:40:37 crc kubenswrapper[4945]: I0109 00:40:37.705727 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0df82a5cdd098c0ced3a2d62e4c6bc66c2c1a922554fc4119ebf728358a23566"} err="failed to get container status \"0df82a5cdd098c0ced3a2d62e4c6bc66c2c1a922554fc4119ebf728358a23566\": rpc error: code = NotFound desc = could not find container \"0df82a5cdd098c0ced3a2d62e4c6bc66c2c1a922554fc4119ebf728358a23566\": container with ID starting with 0df82a5cdd098c0ced3a2d62e4c6bc66c2c1a922554fc4119ebf728358a23566 not found: ID does not exist" Jan 09 00:40:38 crc kubenswrapper[4945]: I0109 00:40:38.011940 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbd55f1d-058a-4825-92fe-7af6e0b8df70" path="/var/lib/kubelet/pods/bbd55f1d-058a-4825-92fe-7af6e0b8df70/volumes" Jan 09 00:40:38 crc kubenswrapper[4945]: I0109 00:40:38.447654 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 09 00:40:38 crc kubenswrapper[4945]: I0109 00:40:38.448034 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 09 00:40:40 crc kubenswrapper[4945]: I0109 00:40:40.262520 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:40 crc kubenswrapper[4945]: I0109 00:40:40.262599 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:40 crc kubenswrapper[4945]: I0109 00:40:40.354184 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:40 crc kubenswrapper[4945]: I0109 00:40:40.743413 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 09 00:40:40 crc kubenswrapper[4945]: I0109 00:40:40.754160 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 09 00:40:40 crc kubenswrapper[4945]: I0109 00:40:40.838507 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 09 00:40:41 crc kubenswrapper[4945]: I0109 00:40:41.000010 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:40:41 crc kubenswrapper[4945]: E0109 00:40:41.000299 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:40:47 crc kubenswrapper[4945]: I0109 00:40:47.097530 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-55qcj"] Jan 09 00:40:47 crc kubenswrapper[4945]: E0109 00:40:47.098187 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd55f1d-058a-4825-92fe-7af6e0b8df70" containerName="init" Jan 09 00:40:47 crc kubenswrapper[4945]: I0109 00:40:47.098200 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd55f1d-058a-4825-92fe-7af6e0b8df70" containerName="init" Jan 09 00:40:47 crc kubenswrapper[4945]: E0109 00:40:47.098210 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd55f1d-058a-4825-92fe-7af6e0b8df70" containerName="dnsmasq-dns" Jan 09 00:40:47 crc kubenswrapper[4945]: I0109 00:40:47.098217 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd55f1d-058a-4825-92fe-7af6e0b8df70" containerName="dnsmasq-dns" Jan 09 00:40:47 crc kubenswrapper[4945]: I0109 00:40:47.098354 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbd55f1d-058a-4825-92fe-7af6e0b8df70" containerName="dnsmasq-dns" Jan 09 00:40:47 crc kubenswrapper[4945]: I0109 00:40:47.098892 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-55qcj" Jan 09 00:40:47 crc kubenswrapper[4945]: I0109 00:40:47.101318 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 09 00:40:47 crc kubenswrapper[4945]: I0109 00:40:47.114770 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-55qcj"] Jan 09 00:40:47 crc kubenswrapper[4945]: I0109 00:40:47.245856 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6-operator-scripts\") pod \"root-account-create-update-55qcj\" (UID: \"1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6\") " pod="openstack/root-account-create-update-55qcj" Jan 09 00:40:47 crc kubenswrapper[4945]: I0109 00:40:47.245958 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pjld\" (UniqueName: \"kubernetes.io/projected/1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6-kube-api-access-4pjld\") pod \"root-account-create-update-55qcj\" (UID: \"1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6\") " pod="openstack/root-account-create-update-55qcj" Jan 09 00:40:47 crc kubenswrapper[4945]: I0109 00:40:47.347095 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pjld\" (UniqueName: \"kubernetes.io/projected/1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6-kube-api-access-4pjld\") pod \"root-account-create-update-55qcj\" (UID: \"1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6\") " pod="openstack/root-account-create-update-55qcj" Jan 09 00:40:47 crc kubenswrapper[4945]: I0109 00:40:47.347210 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6-operator-scripts\") pod 
\"root-account-create-update-55qcj\" (UID: \"1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6\") " pod="openstack/root-account-create-update-55qcj" Jan 09 00:40:47 crc kubenswrapper[4945]: I0109 00:40:47.348196 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6-operator-scripts\") pod \"root-account-create-update-55qcj\" (UID: \"1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6\") " pod="openstack/root-account-create-update-55qcj" Jan 09 00:40:47 crc kubenswrapper[4945]: I0109 00:40:47.381104 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pjld\" (UniqueName: \"kubernetes.io/projected/1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6-kube-api-access-4pjld\") pod \"root-account-create-update-55qcj\" (UID: \"1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6\") " pod="openstack/root-account-create-update-55qcj" Jan 09 00:40:47 crc kubenswrapper[4945]: I0109 00:40:47.420134 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-55qcj" Jan 09 00:40:47 crc kubenswrapper[4945]: I0109 00:40:47.918110 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-55qcj"] Jan 09 00:40:48 crc kubenswrapper[4945]: I0109 00:40:48.742410 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-55qcj" event={"ID":"1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6","Type":"ContainerStarted","Data":"a3a3a5aefbdc7ab6510773662b04e931eb7939ad75b38469761d9b46f0792df0"} Jan 09 00:40:48 crc kubenswrapper[4945]: I0109 00:40:48.742467 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-55qcj" event={"ID":"1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6","Type":"ContainerStarted","Data":"4766e7bc11dcf90bdac14ac240051282d25dd072c5b803f4c8006dc972f9fe59"} Jan 09 00:40:48 crc kubenswrapper[4945]: I0109 00:40:48.760489 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-55qcj" podStartSLOduration=1.760461067 podStartE2EDuration="1.760461067s" podCreationTimestamp="2026-01-09 00:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:40:48.759464733 +0000 UTC m=+5119.070623709" watchObservedRunningTime="2026-01-09 00:40:48.760461067 +0000 UTC m=+5119.071620033" Jan 09 00:40:49 crc kubenswrapper[4945]: I0109 00:40:49.752872 4945 generic.go:334] "Generic (PLEG): container finished" podID="1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6" containerID="a3a3a5aefbdc7ab6510773662b04e931eb7939ad75b38469761d9b46f0792df0" exitCode=0 Jan 09 00:40:49 crc kubenswrapper[4945]: I0109 00:40:49.753364 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-55qcj" event={"ID":"1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6","Type":"ContainerDied","Data":"a3a3a5aefbdc7ab6510773662b04e931eb7939ad75b38469761d9b46f0792df0"} Jan 09 00:40:51 crc kubenswrapper[4945]: I0109 00:40:51.132084 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-55qcj" Jan 09 00:40:51 crc kubenswrapper[4945]: I0109 00:40:51.213350 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pjld\" (UniqueName: \"kubernetes.io/projected/1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6-kube-api-access-4pjld\") pod \"1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6\" (UID: \"1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6\") " Jan 09 00:40:51 crc kubenswrapper[4945]: I0109 00:40:51.213394 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6-operator-scripts\") pod \"1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6\" (UID: \"1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6\") " Jan 09 00:40:51 crc kubenswrapper[4945]: I0109 00:40:51.214431 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6" (UID: "1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:40:51 crc kubenswrapper[4945]: I0109 00:40:51.219370 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6-kube-api-access-4pjld" (OuterVolumeSpecName: "kube-api-access-4pjld") pod "1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6" (UID: "1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6"). InnerVolumeSpecName "kube-api-access-4pjld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:40:51 crc kubenswrapper[4945]: I0109 00:40:51.315465 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:40:51 crc kubenswrapper[4945]: I0109 00:40:51.315524 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pjld\" (UniqueName: \"kubernetes.io/projected/1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6-kube-api-access-4pjld\") on node \"crc\" DevicePath \"\"" Jan 09 00:40:51 crc kubenswrapper[4945]: I0109 00:40:51.773688 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-55qcj" event={"ID":"1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6","Type":"ContainerDied","Data":"4766e7bc11dcf90bdac14ac240051282d25dd072c5b803f4c8006dc972f9fe59"} Jan 09 00:40:51 crc kubenswrapper[4945]: I0109 00:40:51.774065 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4766e7bc11dcf90bdac14ac240051282d25dd072c5b803f4c8006dc972f9fe59" Jan 09 00:40:51 crc kubenswrapper[4945]: I0109 00:40:51.774148 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-55qcj" Jan 09 00:40:52 crc kubenswrapper[4945]: I0109 00:40:52.000690 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:40:52 crc kubenswrapper[4945]: E0109 00:40:52.001041 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:40:53 crc kubenswrapper[4945]: I0109 00:40:53.880099 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-55qcj"] Jan 09 00:40:53 crc kubenswrapper[4945]: I0109 00:40:53.897097 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-55qcj"] Jan 09 00:40:54 crc kubenswrapper[4945]: I0109 00:40:54.016639 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6" path="/var/lib/kubelet/pods/1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6/volumes" Jan 09 00:40:58 crc kubenswrapper[4945]: I0109 00:40:58.880845 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-24dgv"] Jan 09 00:40:58 crc kubenswrapper[4945]: E0109 00:40:58.881524 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6" containerName="mariadb-account-create-update" Jan 09 00:40:58 crc kubenswrapper[4945]: I0109 00:40:58.881543 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6" containerName="mariadb-account-create-update" Jan 09 00:40:58 crc kubenswrapper[4945]: I0109 00:40:58.881730 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c9f3130-0e6b-47ed-b9e1-dd3fe6d3d0a6" containerName="mariadb-account-create-update" Jan 09 00:40:58 crc kubenswrapper[4945]: I0109 00:40:58.882574 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-24dgv" Jan 09 00:40:58 crc kubenswrapper[4945]: I0109 00:40:58.888526 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 09 00:40:58 crc kubenswrapper[4945]: I0109 00:40:58.890496 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-24dgv"] Jan 09 00:40:59 crc kubenswrapper[4945]: I0109 00:40:59.031273 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr6p8\" (UniqueName: \"kubernetes.io/projected/6ce68989-d35b-4013-a5f3-dd43fdfa650c-kube-api-access-zr6p8\") pod \"root-account-create-update-24dgv\" (UID: \"6ce68989-d35b-4013-a5f3-dd43fdfa650c\") " pod="openstack/root-account-create-update-24dgv" Jan 09 00:40:59 crc kubenswrapper[4945]: I0109 00:40:59.031393 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ce68989-d35b-4013-a5f3-dd43fdfa650c-operator-scripts\") pod \"root-account-create-update-24dgv\" (UID: \"6ce68989-d35b-4013-a5f3-dd43fdfa650c\") " pod="openstack/root-account-create-update-24dgv" Jan 09 00:40:59 crc kubenswrapper[4945]: I0109 00:40:59.133289 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr6p8\" (UniqueName: \"kubernetes.io/projected/6ce68989-d35b-4013-a5f3-dd43fdfa650c-kube-api-access-zr6p8\") pod \"root-account-create-update-24dgv\" (UID: \"6ce68989-d35b-4013-a5f3-dd43fdfa650c\") " pod="openstack/root-account-create-update-24dgv" Jan 09 00:40:59 crc kubenswrapper[4945]: I0109 00:40:59.133738 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ce68989-d35b-4013-a5f3-dd43fdfa650c-operator-scripts\") pod \"root-account-create-update-24dgv\" (UID: \"6ce68989-d35b-4013-a5f3-dd43fdfa650c\") " pod="openstack/root-account-create-update-24dgv" Jan 09 00:40:59 crc kubenswrapper[4945]: I0109 00:40:59.134920 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ce68989-d35b-4013-a5f3-dd43fdfa650c-operator-scripts\") pod \"root-account-create-update-24dgv\" (UID: \"6ce68989-d35b-4013-a5f3-dd43fdfa650c\") " pod="openstack/root-account-create-update-24dgv" Jan 09 00:40:59 crc kubenswrapper[4945]: I0109 00:40:59.156618 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr6p8\" (UniqueName: \"kubernetes.io/projected/6ce68989-d35b-4013-a5f3-dd43fdfa650c-kube-api-access-zr6p8\") pod \"root-account-create-update-24dgv\" (UID: \"6ce68989-d35b-4013-a5f3-dd43fdfa650c\") " pod="openstack/root-account-create-update-24dgv" Jan 09 00:40:59 crc kubenswrapper[4945]: I0109 00:40:59.198632 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-24dgv" Jan 09 00:40:59 crc kubenswrapper[4945]: I0109 00:40:59.598186 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-24dgv"] Jan 09 00:40:59 crc kubenswrapper[4945]: I0109 00:40:59.830117 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-24dgv" event={"ID":"6ce68989-d35b-4013-a5f3-dd43fdfa650c","Type":"ContainerStarted","Data":"9e95643163cd73b1c72064317c0fdfd2242e0f950a4539acd994c04dc819c07a"} Jan 09 00:41:00 crc kubenswrapper[4945]: I0109 00:41:00.843277 4945 generic.go:334] "Generic (PLEG): container finished" podID="6ce68989-d35b-4013-a5f3-dd43fdfa650c" containerID="6858dbe110c02a5f6d6cee4a5264f29480832a4e465548abe029dddc8ffd4722" exitCode=0 Jan 09 00:41:00 crc kubenswrapper[4945]: I0109 00:41:00.843418 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-24dgv" event={"ID":"6ce68989-d35b-4013-a5f3-dd43fdfa650c","Type":"ContainerDied","Data":"6858dbe110c02a5f6d6cee4a5264f29480832a4e465548abe029dddc8ffd4722"} Jan 09 00:41:01 crc kubenswrapper[4945]: I0109 00:41:01.859467 4945 generic.go:334] "Generic (PLEG): container finished" podID="91ecf3c5-c5ba-499a-a20b-bd1feee567da" containerID="6e36e1eb7dd72c9b36dc3686533488b757d4e35b90b2bdcec8cb9fed88bf4a20" exitCode=0 Jan 09 00:41:01 crc kubenswrapper[4945]: I0109 00:41:01.859597 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"91ecf3c5-c5ba-499a-a20b-bd1feee567da","Type":"ContainerDied","Data":"6e36e1eb7dd72c9b36dc3686533488b757d4e35b90b2bdcec8cb9fed88bf4a20"} Jan 09 00:41:01 crc kubenswrapper[4945]: I0109 00:41:01.868540 4945 generic.go:334] "Generic (PLEG): container finished" podID="e2886ea2-b692-46ae-896d-3a5ff19ae5f8" containerID="6f866fbf59163f047d6af2c60285cfd2dd4e5c9e4ffb889e7c333d6a9b0e4c51" exitCode=0 Jan 09 00:41:01 crc kubenswrapper[4945]: I0109 00:41:01.868702 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2886ea2-b692-46ae-896d-3a5ff19ae5f8","Type":"ContainerDied","Data":"6f866fbf59163f047d6af2c60285cfd2dd4e5c9e4ffb889e7c333d6a9b0e4c51"} Jan 09 00:41:02 crc kubenswrapper[4945]: I0109 00:41:02.187983 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-24dgv" Jan 09 00:41:02 crc kubenswrapper[4945]: I0109 00:41:02.287639 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ce68989-d35b-4013-a5f3-dd43fdfa650c-operator-scripts\") pod \"6ce68989-d35b-4013-a5f3-dd43fdfa650c\" (UID: \"6ce68989-d35b-4013-a5f3-dd43fdfa650c\") " Jan 09 00:41:02 crc kubenswrapper[4945]: I0109 00:41:02.288015 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr6p8\" (UniqueName: \"kubernetes.io/projected/6ce68989-d35b-4013-a5f3-dd43fdfa650c-kube-api-access-zr6p8\") pod \"6ce68989-d35b-4013-a5f3-dd43fdfa650c\" (UID: \"6ce68989-d35b-4013-a5f3-dd43fdfa650c\") " Jan 09 00:41:02 crc kubenswrapper[4945]: I0109 00:41:02.288669 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ce68989-d35b-4013-a5f3-dd43fdfa650c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ce68989-d35b-4013-a5f3-dd43fdfa650c" (UID: "6ce68989-d35b-4013-a5f3-dd43fdfa650c"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:41:02 crc kubenswrapper[4945]: I0109 00:41:02.293748 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ce68989-d35b-4013-a5f3-dd43fdfa650c-kube-api-access-zr6p8" (OuterVolumeSpecName: "kube-api-access-zr6p8") pod "6ce68989-d35b-4013-a5f3-dd43fdfa650c" (UID: "6ce68989-d35b-4013-a5f3-dd43fdfa650c"). InnerVolumeSpecName "kube-api-access-zr6p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:41:02 crc kubenswrapper[4945]: I0109 00:41:02.390276 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr6p8\" (UniqueName: \"kubernetes.io/projected/6ce68989-d35b-4013-a5f3-dd43fdfa650c-kube-api-access-zr6p8\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:02 crc kubenswrapper[4945]: I0109 00:41:02.390621 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ce68989-d35b-4013-a5f3-dd43fdfa650c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:02 crc kubenswrapper[4945]: I0109 00:41:02.884007 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-24dgv" event={"ID":"6ce68989-d35b-4013-a5f3-dd43fdfa650c","Type":"ContainerDied","Data":"9e95643163cd73b1c72064317c0fdfd2242e0f950a4539acd994c04dc819c07a"} Jan 09 00:41:02 crc kubenswrapper[4945]: I0109 00:41:02.884057 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e95643163cd73b1c72064317c0fdfd2242e0f950a4539acd994c04dc819c07a" Jan 09 00:41:02 crc kubenswrapper[4945]: I0109 00:41:02.884131 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-24dgv" Jan 09 00:41:02 crc kubenswrapper[4945]: I0109 00:41:02.888788 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2886ea2-b692-46ae-896d-3a5ff19ae5f8","Type":"ContainerStarted","Data":"a736305615684d73fa7d9c4150557dee2119749f837eabd8b47e9172a7ede707"} Jan 09 00:41:02 crc kubenswrapper[4945]: I0109 00:41:02.889076 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:41:02 crc kubenswrapper[4945]: I0109 00:41:02.890946 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"91ecf3c5-c5ba-499a-a20b-bd1feee567da","Type":"ContainerStarted","Data":"12dca279c6dd8421dd3209f9df64afc7b8d6fc99de31c4b708460be2344bfdcb"} Jan 09 00:41:02 crc kubenswrapper[4945]: I0109 00:41:02.891370 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 09 00:41:02 crc kubenswrapper[4945]: I0109 00:41:02.912470 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.912446272 podStartE2EDuration="36.912446272s" podCreationTimestamp="2026-01-09 00:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:41:02.909416597 +0000 UTC m=+5133.220575563" watchObservedRunningTime="2026-01-09 00:41:02.912446272 +0000 UTC m=+5133.223605228" Jan 09 00:41:02 crc kubenswrapper[4945]: I0109 00:41:02.947315 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" 
podStartSLOduration=37.947290208 podStartE2EDuration="37.947290208s" podCreationTimestamp="2026-01-09 00:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:41:02.942579772 +0000 UTC m=+5133.253738728" watchObservedRunningTime="2026-01-09 00:41:02.947290208 +0000 UTC m=+5133.258449154" Jan 09 00:41:04 crc kubenswrapper[4945]: I0109 00:41:03.999873 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:41:04 crc kubenswrapper[4945]: E0109 00:41:04.000423 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:41:17 crc kubenswrapper[4945]: I0109 00:41:17.432254 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 09 00:41:17 crc kubenswrapper[4945]: I0109 00:41:17.726299 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:41:19 crc kubenswrapper[4945]: I0109 00:41:18.999952 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:41:19 crc kubenswrapper[4945]: E0109 00:41:19.000318 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:41:22 crc kubenswrapper[4945]: I0109 00:41:22.732946 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6cbk7"] Jan 09 00:41:22 crc kubenswrapper[4945]: E0109 00:41:22.733554 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce68989-d35b-4013-a5f3-dd43fdfa650c" containerName="mariadb-account-create-update" Jan 09 00:41:22 crc kubenswrapper[4945]: I0109 00:41:22.733566 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce68989-d35b-4013-a5f3-dd43fdfa650c" containerName="mariadb-account-create-update" Jan 09 00:41:22 crc kubenswrapper[4945]: I0109 00:41:22.733720 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ce68989-d35b-4013-a5f3-dd43fdfa650c" containerName="mariadb-account-create-update" Jan 09 00:41:22 crc kubenswrapper[4945]: I0109 00:41:22.734849 4945 util.go:30] "No sandbox for pod can be found. 
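Editor's note: runs like the UnmountVolume started / TearDown succeeded / Volume detached sequences above interleave several pods, but every record embeds the owning pod UID in the volume path, so a single pod's teardown can be reconstructed by grouping on that UID. A sketch under that assumption (the helper name and filtering keywords are illustrative, not official tooling):

import re
from collections import defaultdict

UID = re.compile(r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}")

def teardown_timeline(lines: list[str]) -> dict[str, list[str]]:
    """Map pod UID -> ordered unmount-related records mentioning it."""
    events = defaultdict(list)
    for line in lines:
        if "UnmountVolume" in line or "Volume detached" in line:
            m = UID.search(line)
            if m:
                events[m.group()].append(line)
    return dict(events)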
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cbk7" Jan 09 00:41:22 crc kubenswrapper[4945]: I0109 00:41:22.748682 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cbk7"] Jan 09 00:41:22 crc kubenswrapper[4945]: I0109 00:41:22.905152 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-catalog-content\") pod \"redhat-marketplace-6cbk7\" (UID: \"ed7ffec1-7a53-452c-9ad4-81edbea4bc74\") " pod="openshift-marketplace/redhat-marketplace-6cbk7" Jan 09 00:41:22 crc kubenswrapper[4945]: I0109 00:41:22.905551 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-utilities\") pod \"redhat-marketplace-6cbk7\" (UID: \"ed7ffec1-7a53-452c-9ad4-81edbea4bc74\") " pod="openshift-marketplace/redhat-marketplace-6cbk7" Jan 09 00:41:22 crc kubenswrapper[4945]: I0109 00:41:22.905583 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7qjg\" (UniqueName: \"kubernetes.io/projected/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-kube-api-access-s7qjg\") pod \"redhat-marketplace-6cbk7\" (UID: \"ed7ffec1-7a53-452c-9ad4-81edbea4bc74\") " pod="openshift-marketplace/redhat-marketplace-6cbk7" Jan 09 00:41:23 crc kubenswrapper[4945]: I0109 00:41:23.007397 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-utilities\") pod \"redhat-marketplace-6cbk7\" (UID: \"ed7ffec1-7a53-452c-9ad4-81edbea4bc74\") " pod="openshift-marketplace/redhat-marketplace-6cbk7" Jan 09 00:41:23 crc kubenswrapper[4945]: I0109 00:41:23.007443 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7qjg\" (UniqueName: \"kubernetes.io/projected/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-kube-api-access-s7qjg\") pod \"redhat-marketplace-6cbk7\" (UID: \"ed7ffec1-7a53-452c-9ad4-81edbea4bc74\") " pod="openshift-marketplace/redhat-marketplace-6cbk7" Jan 09 00:41:23 crc kubenswrapper[4945]: I0109 00:41:23.007483 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-catalog-content\") pod \"redhat-marketplace-6cbk7\" (UID: \"ed7ffec1-7a53-452c-9ad4-81edbea4bc74\") " pod="openshift-marketplace/redhat-marketplace-6cbk7" Jan 09 00:41:23 crc kubenswrapper[4945]: I0109 00:41:23.007924 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-catalog-content\") pod \"redhat-marketplace-6cbk7\" (UID: \"ed7ffec1-7a53-452c-9ad4-81edbea4bc74\") " pod="openshift-marketplace/redhat-marketplace-6cbk7" Jan 09 00:41:23 crc kubenswrapper[4945]: I0109 00:41:23.008440 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-utilities\") pod \"redhat-marketplace-6cbk7\" (UID: \"ed7ffec1-7a53-452c-9ad4-81edbea4bc74\") " pod="openshift-marketplace/redhat-marketplace-6cbk7" Jan 09 00:41:23 crc kubenswrapper[4945]: I0109 00:41:23.039191 4945 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-s7qjg\" (UniqueName: \"kubernetes.io/projected/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-kube-api-access-s7qjg\") pod \"redhat-marketplace-6cbk7\" (UID: \"ed7ffec1-7a53-452c-9ad4-81edbea4bc74\") " pod="openshift-marketplace/redhat-marketplace-6cbk7" Jan 09 00:41:23 crc kubenswrapper[4945]: I0109 00:41:23.070955 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cbk7" Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:23.440658 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-q7624"] Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:23.442308 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-q7624" Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:23.461534 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-q7624"] Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:23.571599 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cbk7"] Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:23.617519 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l89ks\" (UniqueName: \"kubernetes.io/projected/5b3db303-0c17-45f6-846d-d34d14652a5a-kube-api-access-l89ks\") pod \"dnsmasq-dns-5b7946d7b9-q7624\" (UID: \"5b3db303-0c17-45f6-846d-d34d14652a5a\") " pod="openstack/dnsmasq-dns-5b7946d7b9-q7624" Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:23.617592 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b3db303-0c17-45f6-846d-d34d14652a5a-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-q7624\" (UID: \"5b3db303-0c17-45f6-846d-d34d14652a5a\") " pod="openstack/dnsmasq-dns-5b7946d7b9-q7624" Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:23.617615 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b3db303-0c17-45f6-846d-d34d14652a5a-config\") pod \"dnsmasq-dns-5b7946d7b9-q7624\" (UID: \"5b3db303-0c17-45f6-846d-d34d14652a5a\") " pod="openstack/dnsmasq-dns-5b7946d7b9-q7624" Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:23.719439 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l89ks\" (UniqueName: \"kubernetes.io/projected/5b3db303-0c17-45f6-846d-d34d14652a5a-kube-api-access-l89ks\") pod \"dnsmasq-dns-5b7946d7b9-q7624\" (UID: \"5b3db303-0c17-45f6-846d-d34d14652a5a\") " pod="openstack/dnsmasq-dns-5b7946d7b9-q7624" Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:23.719829 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b3db303-0c17-45f6-846d-d34d14652a5a-config\") pod \"dnsmasq-dns-5b7946d7b9-q7624\" (UID: \"5b3db303-0c17-45f6-846d-d34d14652a5a\") " pod="openstack/dnsmasq-dns-5b7946d7b9-q7624" Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:23.719862 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b3db303-0c17-45f6-846d-d34d14652a5a-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-q7624\" (UID: \"5b3db303-0c17-45f6-846d-d34d14652a5a\") " pod="openstack/dnsmasq-dns-5b7946d7b9-q7624" Jan 09 00:41:24 crc 
Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:23.720859 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b3db303-0c17-45f6-846d-d34d14652a5a-config\") pod \"dnsmasq-dns-5b7946d7b9-q7624\" (UID: \"5b3db303-0c17-45f6-846d-d34d14652a5a\") " pod="openstack/dnsmasq-dns-5b7946d7b9-q7624"
Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:23.740493 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l89ks\" (UniqueName: \"kubernetes.io/projected/5b3db303-0c17-45f6-846d-d34d14652a5a-kube-api-access-l89ks\") pod \"dnsmasq-dns-5b7946d7b9-q7624\" (UID: \"5b3db303-0c17-45f6-846d-d34d14652a5a\") " pod="openstack/dnsmasq-dns-5b7946d7b9-q7624"
Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:23.769874 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-q7624"
Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:24.038074 4945 generic.go:334] "Generic (PLEG): container finished" podID="ed7ffec1-7a53-452c-9ad4-81edbea4bc74" containerID="b5913599e95bf7edad130576d441646e511b070bb12caf314dd44319e3da6ff8" exitCode=0
Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:24.038130 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cbk7" event={"ID":"ed7ffec1-7a53-452c-9ad4-81edbea4bc74","Type":"ContainerDied","Data":"b5913599e95bf7edad130576d441646e511b070bb12caf314dd44319e3da6ff8"}
Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:24.038174 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cbk7" event={"ID":"ed7ffec1-7a53-452c-9ad4-81edbea4bc74","Type":"ContainerStarted","Data":"d19ba451a864eced95db6e3402beb476e3aa8351044e4d18c260c1b59024a01d"}
Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:24.203856 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 09 00:41:24 crc kubenswrapper[4945]: I0109 00:41:24.489212 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-q7624"]
Jan 09 00:41:25 crc kubenswrapper[4945]: I0109 00:41:25.017222 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 09 00:41:25 crc kubenswrapper[4945]: I0109 00:41:25.049302 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cbk7" event={"ID":"ed7ffec1-7a53-452c-9ad4-81edbea4bc74","Type":"ContainerStarted","Data":"42228ae346a015e73849959501f580755be6b71a51c9edf58a52fe7900bee27c"}
Jan 09 00:41:25 crc kubenswrapper[4945]: I0109 00:41:25.054336 4945 generic.go:334] "Generic (PLEG): container finished" podID="5b3db303-0c17-45f6-846d-d34d14652a5a" containerID="c31795334f4cee1db3cb451b5fc10c0f856ebbf97b797b716f274d6c37742e60" exitCode=0
Jan 09 00:41:25 crc kubenswrapper[4945]: I0109 00:41:25.054398 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-q7624" event={"ID":"5b3db303-0c17-45f6-846d-d34d14652a5a","Type":"ContainerDied","Data":"c31795334f4cee1db3cb451b5fc10c0f856ebbf97b797b716f274d6c37742e60"}
Jan 09 00:41:25 crc kubenswrapper[4945]: I0109 00:41:25.054434 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-q7624" event={"ID":"5b3db303-0c17-45f6-846d-d34d14652a5a","Type":"ContainerStarted","Data":"d41bdf9465912ec17d888c8ae16d481c5a5e6697763695b816901412b7859dfa"}
Jan 09 00:41:26 crc kubenswrapper[4945]: I0109 00:41:26.056922 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="91ecf3c5-c5ba-499a-a20b-bd1feee567da" containerName="rabbitmq" containerID="cri-o://12dca279c6dd8421dd3209f9df64afc7b8d6fc99de31c4b708460be2344bfdcb" gracePeriod=604799
Jan 09 00:41:26 crc kubenswrapper[4945]: I0109 00:41:26.062875 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-q7624" event={"ID":"5b3db303-0c17-45f6-846d-d34d14652a5a","Type":"ContainerStarted","Data":"6a51ceda202d236c8ef69e2503babc7f6316564a6e6bd24c62750acab9604ec8"}
Jan 09 00:41:26 crc kubenswrapper[4945]: I0109 00:41:26.063087 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b7946d7b9-q7624"
Jan 09 00:41:26 crc kubenswrapper[4945]: I0109 00:41:26.064601 4945 generic.go:334] "Generic (PLEG): container finished" podID="ed7ffec1-7a53-452c-9ad4-81edbea4bc74" containerID="42228ae346a015e73849959501f580755be6b71a51c9edf58a52fe7900bee27c" exitCode=0
Jan 09 00:41:26 crc kubenswrapper[4945]: I0109 00:41:26.064636 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cbk7" event={"ID":"ed7ffec1-7a53-452c-9ad4-81edbea4bc74","Type":"ContainerDied","Data":"42228ae346a015e73849959501f580755be6b71a51c9edf58a52fe7900bee27c"}
Jan 09 00:41:26 crc kubenswrapper[4945]: I0109 00:41:26.088479 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b7946d7b9-q7624" podStartSLOduration=3.088453577 podStartE2EDuration="3.088453577s" podCreationTimestamp="2026-01-09 00:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:41:26.083763192 +0000 UTC m=+5156.394922148" watchObservedRunningTime="2026-01-09 00:41:26.088453577 +0000 UTC m=+5156.399612533"
Jan 09 00:41:26 crc kubenswrapper[4945]: I0109 00:41:26.793024 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e2886ea2-b692-46ae-896d-3a5ff19ae5f8" containerName="rabbitmq" containerID="cri-o://a736305615684d73fa7d9c4150557dee2119749f837eabd8b47e9172a7ede707" gracePeriod=604799
Jan 09 00:41:27 crc kubenswrapper[4945]: I0109 00:41:27.074860 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cbk7" event={"ID":"ed7ffec1-7a53-452c-9ad4-81edbea4bc74","Type":"ContainerStarted","Data":"580777926de72b91c2b8798e8749661256465ba5bceb9ded3193765c3665596f"}
Jan 09 00:41:27 crc kubenswrapper[4945]: I0109 00:41:27.096636 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6cbk7" podStartSLOduration=2.551189741 podStartE2EDuration="5.096606357s" podCreationTimestamp="2026-01-09 00:41:22 +0000 UTC" firstStartedPulling="2026-01-09 00:41:24.041129524 +0000 UTC m=+5154.352288470" lastFinishedPulling="2026-01-09 00:41:26.58654614 +0000 UTC m=+5156.897705086" observedRunningTime="2026-01-09 00:41:27.090807065 +0000 UTC m=+5157.401966011" watchObservedRunningTime="2026-01-09 00:41:27.096606357 +0000 UTC m=+5157.407765313"
Jan 09 00:41:27 crc kubenswrapper[4945]: I0109 00:41:27.429955 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="91ecf3c5-c5ba-499a-a20b-bd1feee567da" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.241:5672: connect: connection refused"
Jan 09 00:41:27 crc kubenswrapper[4945]: I0109 00:41:27.724270 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e2886ea2-b692-46ae-896d-3a5ff19ae5f8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.242:5672: connect: connection refused"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.026040 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.072113 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6cbk7"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.073208 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6cbk7"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.124728 4945 generic.go:334] "Generic (PLEG): container finished" podID="91ecf3c5-c5ba-499a-a20b-bd1feee567da" containerID="12dca279c6dd8421dd3209f9df64afc7b8d6fc99de31c4b708460be2344bfdcb" exitCode=0
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.124793 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.124803 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"91ecf3c5-c5ba-499a-a20b-bd1feee567da","Type":"ContainerDied","Data":"12dca279c6dd8421dd3209f9df64afc7b8d6fc99de31c4b708460be2344bfdcb"}
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.124831 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"91ecf3c5-c5ba-499a-a20b-bd1feee567da","Type":"ContainerDied","Data":"28f65f590835e437a5ac32638d0dfe876b0eee8f489b7aa26baee023a1a15ef8"}
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.124847 4945 scope.go:117] "RemoveContainer" containerID="12dca279c6dd8421dd3209f9df64afc7b8d6fc99de31c4b708460be2344bfdcb"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.132581 4945 generic.go:334] "Generic (PLEG): container finished" podID="e2886ea2-b692-46ae-896d-3a5ff19ae5f8" containerID="a736305615684d73fa7d9c4150557dee2119749f837eabd8b47e9172a7ede707" exitCode=0
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.133121 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2886ea2-b692-46ae-896d-3a5ff19ae5f8","Type":"ContainerDied","Data":"a736305615684d73fa7d9c4150557dee2119749f837eabd8b47e9172a7ede707"}
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.136104 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6cbk7"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.170516 4945 scope.go:117] "RemoveContainer" containerID="6e36e1eb7dd72c9b36dc3686533488b757d4e35b90b2bdcec8cb9fed88bf4a20"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.185142 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/91ecf3c5-c5ba-499a-a20b-bd1feee567da-pod-info\") pod \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") "
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.185190 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/91ecf3c5-c5ba-499a-a20b-bd1feee567da-plugins-conf\") pod \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") "
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.185219 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-plugins\") pod \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") "
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.185277 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/91ecf3c5-c5ba-499a-a20b-bd1feee567da-erlang-cookie-secret\") pod \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") "
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.185385 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da15286c-6b28-4837-a475-a25466b59656\") pod \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") "
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.185455 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxzmm\" (UniqueName: \"kubernetes.io/projected/91ecf3c5-c5ba-499a-a20b-bd1feee567da-kube-api-access-vxzmm\") pod \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") "
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.185479 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-erlang-cookie\") pod \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") "
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.186510 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "91ecf3c5-c5ba-499a-a20b-bd1feee567da" (UID: "91ecf3c5-c5ba-499a-a20b-bd1feee567da"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.186533 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-confd\") pod \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") "
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.186702 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/91ecf3c5-c5ba-499a-a20b-bd1feee567da-server-conf\") pod \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\" (UID: \"91ecf3c5-c5ba-499a-a20b-bd1feee567da\") "
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.187428 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "91ecf3c5-c5ba-499a-a20b-bd1feee567da" (UID: "91ecf3c5-c5ba-499a-a20b-bd1feee567da"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.187768 4945 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.187788 4945 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.188487 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ecf3c5-c5ba-499a-a20b-bd1feee567da-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "91ecf3c5-c5ba-499a-a20b-bd1feee567da" (UID: "91ecf3c5-c5ba-499a-a20b-bd1feee567da"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.191454 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91ecf3c5-c5ba-499a-a20b-bd1feee567da-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "91ecf3c5-c5ba-499a-a20b-bd1feee567da" (UID: "91ecf3c5-c5ba-499a-a20b-bd1feee567da"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.192832 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ecf3c5-c5ba-499a-a20b-bd1feee567da-kube-api-access-vxzmm" (OuterVolumeSpecName: "kube-api-access-vxzmm") pod "91ecf3c5-c5ba-499a-a20b-bd1feee567da" (UID: "91ecf3c5-c5ba-499a-a20b-bd1feee567da"). InnerVolumeSpecName "kube-api-access-vxzmm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.194067 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/91ecf3c5-c5ba-499a-a20b-bd1feee567da-pod-info" (OuterVolumeSpecName: "pod-info") pod "91ecf3c5-c5ba-499a-a20b-bd1feee567da" (UID: "91ecf3c5-c5ba-499a-a20b-bd1feee567da"). InnerVolumeSpecName "pod-info".
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.201450 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da15286c-6b28-4837-a475-a25466b59656" (OuterVolumeSpecName: "persistence") pod "91ecf3c5-c5ba-499a-a20b-bd1feee567da" (UID: "91ecf3c5-c5ba-499a-a20b-bd1feee567da"). InnerVolumeSpecName "pvc-da15286c-6b28-4837-a475-a25466b59656". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.222941 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ecf3c5-c5ba-499a-a20b-bd1feee567da-server-conf" (OuterVolumeSpecName: "server-conf") pod "91ecf3c5-c5ba-499a-a20b-bd1feee567da" (UID: "91ecf3c5-c5ba-499a-a20b-bd1feee567da"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.241811 4945 scope.go:117] "RemoveContainer" containerID="12dca279c6dd8421dd3209f9df64afc7b8d6fc99de31c4b708460be2344bfdcb" Jan 09 00:41:33 crc kubenswrapper[4945]: E0109 00:41:33.242546 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12dca279c6dd8421dd3209f9df64afc7b8d6fc99de31c4b708460be2344bfdcb\": container with ID starting with 12dca279c6dd8421dd3209f9df64afc7b8d6fc99de31c4b708460be2344bfdcb not found: ID does not exist" containerID="12dca279c6dd8421dd3209f9df64afc7b8d6fc99de31c4b708460be2344bfdcb" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.242589 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12dca279c6dd8421dd3209f9df64afc7b8d6fc99de31c4b708460be2344bfdcb"} err="failed to get container status \"12dca279c6dd8421dd3209f9df64afc7b8d6fc99de31c4b708460be2344bfdcb\": rpc error: code = NotFound desc = could not find container \"12dca279c6dd8421dd3209f9df64afc7b8d6fc99de31c4b708460be2344bfdcb\": container with ID starting with 12dca279c6dd8421dd3209f9df64afc7b8d6fc99de31c4b708460be2344bfdcb not found: ID does not exist" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.242618 4945 scope.go:117] "RemoveContainer" containerID="6e36e1eb7dd72c9b36dc3686533488b757d4e35b90b2bdcec8cb9fed88bf4a20" Jan 09 00:41:33 crc kubenswrapper[4945]: E0109 00:41:33.242958 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e36e1eb7dd72c9b36dc3686533488b757d4e35b90b2bdcec8cb9fed88bf4a20\": container with ID starting with 6e36e1eb7dd72c9b36dc3686533488b757d4e35b90b2bdcec8cb9fed88bf4a20 not found: ID does not exist" containerID="6e36e1eb7dd72c9b36dc3686533488b757d4e35b90b2bdcec8cb9fed88bf4a20" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.242979 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e36e1eb7dd72c9b36dc3686533488b757d4e35b90b2bdcec8cb9fed88bf4a20"} err="failed to get container status \"6e36e1eb7dd72c9b36dc3686533488b757d4e35b90b2bdcec8cb9fed88bf4a20\": rpc error: code = NotFound desc = could not find container \"6e36e1eb7dd72c9b36dc3686533488b757d4e35b90b2bdcec8cb9fed88bf4a20\": container with ID starting with 6e36e1eb7dd72c9b36dc3686533488b757d4e35b90b2bdcec8cb9fed88bf4a20 not found: ID does not exist" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.272776 4945 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.283601 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "91ecf3c5-c5ba-499a-a20b-bd1feee567da" (UID: "91ecf3c5-c5ba-499a-a20b-bd1feee567da"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.289364 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-plugins\") pod \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.289647 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62qtl\" (UniqueName: \"kubernetes.io/projected/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-kube-api-access-62qtl\") pod \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.289674 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-server-conf\") pod \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.289766 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65a3a386-336e-4b54-abec-f4916a516c42\") pod \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.289795 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-confd\") pod \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.289817 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-erlang-cookie-secret\") pod \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.289853 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-erlang-cookie\") pod \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.289907 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-pod-info\") pod \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.289945 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-plugins-conf\") pod \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\" (UID: \"e2886ea2-b692-46ae-896d-3a5ff19ae5f8\") " Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.290166 4945 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/91ecf3c5-c5ba-499a-a20b-bd1feee567da-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.290184 4945 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/91ecf3c5-c5ba-499a-a20b-bd1feee567da-server-conf\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.290193 4945 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/91ecf3c5-c5ba-499a-a20b-bd1feee567da-pod-info\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.290200 4945 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/91ecf3c5-c5ba-499a-a20b-bd1feee567da-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.290209 4945 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/91ecf3c5-c5ba-499a-a20b-bd1feee567da-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.290230 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-da15286c-6b28-4837-a475-a25466b59656\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da15286c-6b28-4837-a475-a25466b59656\") on node \"crc\" " Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.290240 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxzmm\" (UniqueName: \"kubernetes.io/projected/91ecf3c5-c5ba-499a-a20b-bd1feee567da-kube-api-access-vxzmm\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.293619 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e2886ea2-b692-46ae-896d-3a5ff19ae5f8" (UID: "e2886ea2-b692-46ae-896d-3a5ff19ae5f8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.294877 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e2886ea2-b692-46ae-896d-3a5ff19ae5f8" (UID: "e2886ea2-b692-46ae-896d-3a5ff19ae5f8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.295670 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e2886ea2-b692-46ae-896d-3a5ff19ae5f8" (UID: "e2886ea2-b692-46ae-896d-3a5ff19ae5f8"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.301478 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e2886ea2-b692-46ae-896d-3a5ff19ae5f8" (UID: "e2886ea2-b692-46ae-896d-3a5ff19ae5f8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.319369 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-pod-info" (OuterVolumeSpecName: "pod-info") pod "e2886ea2-b692-46ae-896d-3a5ff19ae5f8" (UID: "e2886ea2-b692-46ae-896d-3a5ff19ae5f8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.319402 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-kube-api-access-62qtl" (OuterVolumeSpecName: "kube-api-access-62qtl") pod "e2886ea2-b692-46ae-896d-3a5ff19ae5f8" (UID: "e2886ea2-b692-46ae-896d-3a5ff19ae5f8"). InnerVolumeSpecName "kube-api-access-62qtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.334148 4945 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.334358 4945 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-da15286c-6b28-4837-a475-a25466b59656" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da15286c-6b28-4837-a475-a25466b59656") on node "crc" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.337825 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-server-conf" (OuterVolumeSpecName: "server-conf") pod "e2886ea2-b692-46ae-896d-3a5ff19ae5f8" (UID: "e2886ea2-b692-46ae-896d-3a5ff19ae5f8"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.344487 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65a3a386-336e-4b54-abec-f4916a516c42" (OuterVolumeSpecName: "persistence") pod "e2886ea2-b692-46ae-896d-3a5ff19ae5f8" (UID: "e2886ea2-b692-46ae-896d-3a5ff19ae5f8"). InnerVolumeSpecName "pvc-65a3a386-336e-4b54-abec-f4916a516c42". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.382315 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e2886ea2-b692-46ae-896d-3a5ff19ae5f8" (UID: "e2886ea2-b692-46ae-896d-3a5ff19ae5f8"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.392086 4945 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.392111 4945 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-pod-info\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.392121 4945 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.392132 4945 reconciler_common.go:293] "Volume detached for volume \"pvc-da15286c-6b28-4837-a475-a25466b59656\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da15286c-6b28-4837-a475-a25466b59656\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.392144 4945 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.392154 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62qtl\" (UniqueName: \"kubernetes.io/projected/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-kube-api-access-62qtl\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.392162 4945 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-server-conf\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.392197 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-65a3a386-336e-4b54-abec-f4916a516c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65a3a386-336e-4b54-abec-f4916a516c42\") on node \"crc\" " Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.392207 4945 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.392219 4945 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2886ea2-b692-46ae-896d-3a5ff19ae5f8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.412698 4945 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.412979 4945 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-65a3a386-336e-4b54-abec-f4916a516c42" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65a3a386-336e-4b54-abec-f4916a516c42") on node "crc"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.466071 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.467095 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.496227 4945 reconciler_common.go:293] "Volume detached for volume \"pvc-65a3a386-336e-4b54-abec-f4916a516c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65a3a386-336e-4b54-abec-f4916a516c42\") on node \"crc\" DevicePath \"\""
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.496272 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 09 00:41:33 crc kubenswrapper[4945]: E0109 00:41:33.496552 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2886ea2-b692-46ae-896d-3a5ff19ae5f8" containerName="setup-container"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.496569 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2886ea2-b692-46ae-896d-3a5ff19ae5f8" containerName="setup-container"
Jan 09 00:41:33 crc kubenswrapper[4945]: E0109 00:41:33.496576 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2886ea2-b692-46ae-896d-3a5ff19ae5f8" containerName="rabbitmq"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.496583 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2886ea2-b692-46ae-896d-3a5ff19ae5f8" containerName="rabbitmq"
Jan 09 00:41:33 crc kubenswrapper[4945]: E0109 00:41:33.496601 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ecf3c5-c5ba-499a-a20b-bd1feee567da" containerName="rabbitmq"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.496607 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ecf3c5-c5ba-499a-a20b-bd1feee567da" containerName="rabbitmq"
Jan 09 00:41:33 crc kubenswrapper[4945]: E0109 00:41:33.496628 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ecf3c5-c5ba-499a-a20b-bd1feee567da" containerName="setup-container"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.496634 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ecf3c5-c5ba-499a-a20b-bd1feee567da" containerName="setup-container"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.496784 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2886ea2-b692-46ae-896d-3a5ff19ae5f8" containerName="rabbitmq"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.496801 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ecf3c5-c5ba-499a-a20b-bd1feee567da" containerName="rabbitmq"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.497732 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.501120 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.501331 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.501686 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.501855 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-x2c52"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.502625 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.504302 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.700975 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-da15286c-6b28-4837-a475-a25466b59656\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da15286c-6b28-4837-a475-a25466b59656\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.701106 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc1824e6-f578-45fb-9536-91976e922955-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.701156 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc1824e6-f578-45fb-9536-91976e922955-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.701220 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc1824e6-f578-45fb-9536-91976e922955-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.701249 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc1824e6-f578-45fb-9536-91976e922955-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.701277 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc1824e6-f578-45fb-9536-91976e922955-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.701320 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4lnp\" (UniqueName: \"kubernetes.io/projected/cc1824e6-f578-45fb-9536-91976e922955-kube-api-access-s4lnp\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.701348 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc1824e6-f578-45fb-9536-91976e922955-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.701372 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc1824e6-f578-45fb-9536-91976e922955-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.772040 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b7946d7b9-q7624"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.802597 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc1824e6-f578-45fb-9536-91976e922955-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.802874 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc1824e6-f578-45fb-9536-91976e922955-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.802956 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc1824e6-f578-45fb-9536-91976e922955-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.803059 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4lnp\" (UniqueName: \"kubernetes.io/projected/cc1824e6-f578-45fb-9536-91976e922955-kube-api-access-s4lnp\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.803250 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc1824e6-f578-45fb-9536-91976e922955-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.804343 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc1824e6-f578-45fb-9536-91976e922955-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.804455 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-da15286c-6b28-4837-a475-a25466b59656\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da15286c-6b28-4837-a475-a25466b59656\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.804583 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc1824e6-f578-45fb-9536-91976e922955-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.804662 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc1824e6-f578-45fb-9536-91976e922955-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.803624 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc1824e6-f578-45fb-9536-91976e922955-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.803519 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc1824e6-f578-45fb-9536-91976e922955-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.803461 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc1824e6-f578-45fb-9536-91976e922955-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.806119 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc1824e6-f578-45fb-9536-91976e922955-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.808931 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc1824e6-f578-45fb-9536-91976e922955-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.809469 4945 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.809523 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-da15286c-6b28-4837-a475-a25466b59656\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da15286c-6b28-4837-a475-a25466b59656\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/964deba7dcfd6c062ba92bde00b7cb39634e1091b5c36c017c585df16cd8a8df/globalmount\"" pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.809626 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc1824e6-f578-45fb-9536-91976e922955-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.822314 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-8rcpg"]
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.822754 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" podUID="2450d5ee-cd3f-4201-a1a5-8c2c74083937" containerName="dnsmasq-dns" containerID="cri-o://f4c0d7c5c10267e91aac60b34360601ed0e22f954dd8c58c0473ebce5b995994" gracePeriod=10
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.824822 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc1824e6-f578-45fb-9536-91976e922955-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.840395 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4lnp\" (UniqueName: \"kubernetes.io/projected/cc1824e6-f578-45fb-9536-91976e922955-kube-api-access-s4lnp\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:33 crc kubenswrapper[4945]: I0109 00:41:33.865657 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-da15286c-6b28-4837-a475-a25466b59656\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da15286c-6b28-4837-a475-a25466b59656\") pod \"rabbitmq-server-0\" (UID: \"cc1824e6-f578-45fb-9536-91976e922955\") " pod="openstack/rabbitmq-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.004652 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5"
Jan 09 00:41:34 crc kubenswrapper[4945]: E0109 00:41:34.004875 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.013970 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91ecf3c5-c5ba-499a-a20b-bd1feee567da" path="/var/lib/kubelet/pods/91ecf3c5-c5ba-499a-a20b-bd1feee567da/volumes"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.127092 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.141413 4945 generic.go:334] "Generic (PLEG): container finished" podID="2450d5ee-cd3f-4201-a1a5-8c2c74083937" containerID="f4c0d7c5c10267e91aac60b34360601ed0e22f954dd8c58c0473ebce5b995994" exitCode=0
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.141493 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" event={"ID":"2450d5ee-cd3f-4201-a1a5-8c2c74083937","Type":"ContainerDied","Data":"f4c0d7c5c10267e91aac60b34360601ed0e22f954dd8c58c0473ebce5b995994"}
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.144491 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2886ea2-b692-46ae-896d-3a5ff19ae5f8","Type":"ContainerDied","Data":"221afdbbcac6cf93eba04adbc44fa8cd3df95b34d273e7144c4e6a51b771c0f1"}
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.144531 4945 scope.go:117] "RemoveContainer" containerID="a736305615684d73fa7d9c4150557dee2119749f837eabd8b47e9172a7ede707"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.144624 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.201956 4945 scope.go:117] "RemoveContainer" containerID="6f866fbf59163f047d6af2c60285cfd2dd4e5c9e4ffb889e7c333d6a9b0e4c51"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.213017 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.214785 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6cbk7"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.229587 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.241762 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.262417 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 09 00:41:34 crc kubenswrapper[4945]: E0109 00:41:34.262755 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2450d5ee-cd3f-4201-a1a5-8c2c74083937" containerName="dnsmasq-dns"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.262768 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2450d5ee-cd3f-4201-a1a5-8c2c74083937" containerName="dnsmasq-dns"
Jan 09 00:41:34 crc kubenswrapper[4945]: E0109 00:41:34.262778 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2450d5ee-cd3f-4201-a1a5-8c2c74083937" containerName="init"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.262784 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2450d5ee-cd3f-4201-a1a5-8c2c74083937" containerName="init"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.262923 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="2450d5ee-cd3f-4201-a1a5-8c2c74083937" containerName="dnsmasq-dns"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.263957 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.266487 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.266767 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.268686 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.268960 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.269154 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-xbpbg"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.317584 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.344610 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cbk7"]
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.416925 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2450d5ee-cd3f-4201-a1a5-8c2c74083937-dns-svc\") pod \"2450d5ee-cd3f-4201-a1a5-8c2c74083937\" (UID: \"2450d5ee-cd3f-4201-a1a5-8c2c74083937\") "
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.417125 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlbxm\" (UniqueName: \"kubernetes.io/projected/2450d5ee-cd3f-4201-a1a5-8c2c74083937-kube-api-access-nlbxm\") pod \"2450d5ee-cd3f-4201-a1a5-8c2c74083937\" (UID: \"2450d5ee-cd3f-4201-a1a5-8c2c74083937\") "
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.417192 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2450d5ee-cd3f-4201-a1a5-8c2c74083937-config\") pod \"2450d5ee-cd3f-4201-a1a5-8c2c74083937\" (UID: \"2450d5ee-cd3f-4201-a1a5-8c2c74083937\") "
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.417482 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a5d3098-d52c-489b-8a1d-64ac1aed714c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.417549 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a5d3098-d52c-489b-8a1d-64ac1aed714c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.417572 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a5d3098-d52c-489b-8a1d-64ac1aed714c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.417593 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a5d3098-d52c-489b-8a1d-64ac1aed714c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.417643 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-65a3a386-336e-4b54-abec-f4916a516c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65a3a386-336e-4b54-abec-f4916a516c42\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.417662 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snvnt\" (UniqueName: \"kubernetes.io/projected/2a5d3098-d52c-489b-8a1d-64ac1aed714c-kube-api-access-snvnt\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.417881 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a5d3098-d52c-489b-8a1d-64ac1aed714c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.417975 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a5d3098-d52c-489b-8a1d-64ac1aed714c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.418011 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a5d3098-d52c-489b-8a1d-64ac1aed714c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.422380 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2450d5ee-cd3f-4201-a1a5-8c2c74083937-kube-api-access-nlbxm" (OuterVolumeSpecName: "kube-api-access-nlbxm") pod "2450d5ee-cd3f-4201-a1a5-8c2c74083937" (UID: "2450d5ee-cd3f-4201-a1a5-8c2c74083937"). InnerVolumeSpecName "kube-api-access-nlbxm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.451878 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2450d5ee-cd3f-4201-a1a5-8c2c74083937-config" (OuterVolumeSpecName: "config") pod "2450d5ee-cd3f-4201-a1a5-8c2c74083937" (UID: "2450d5ee-cd3f-4201-a1a5-8c2c74083937"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.455033 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2450d5ee-cd3f-4201-a1a5-8c2c74083937-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2450d5ee-cd3f-4201-a1a5-8c2c74083937" (UID: "2450d5ee-cd3f-4201-a1a5-8c2c74083937"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.519063 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-65a3a386-336e-4b54-abec-f4916a516c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65a3a386-336e-4b54-abec-f4916a516c42\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.519141 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snvnt\" (UniqueName: \"kubernetes.io/projected/2a5d3098-d52c-489b-8a1d-64ac1aed714c-kube-api-access-snvnt\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.519219 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a5d3098-d52c-489b-8a1d-64ac1aed714c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.519254 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a5d3098-d52c-489b-8a1d-64ac1aed714c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.519298 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a5d3098-d52c-489b-8a1d-64ac1aed714c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.519447 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a5d3098-d52c-489b-8a1d-64ac1aed714c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.519469 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a5d3098-d52c-489b-8a1d-64ac1aed714c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.519486 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a5d3098-d52c-489b-8a1d-64ac1aed714c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.519529 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a5d3098-d52c-489b-8a1d-64ac1aed714c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.519614 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlbxm\" (UniqueName: \"kubernetes.io/projected/2450d5ee-cd3f-4201-a1a5-8c2c74083937-kube-api-access-nlbxm\") on node \"crc\" DevicePath \"\""
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.519662 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2450d5ee-cd3f-4201-a1a5-8c2c74083937-config\") on node \"crc\" DevicePath \"\""
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.519700 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2450d5ee-cd3f-4201-a1a5-8c2c74083937-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.521442 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2a5d3098-d52c-489b-8a1d-64ac1aed714c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.522368 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2a5d3098-d52c-489b-8a1d-64ac1aed714c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.523281 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2a5d3098-d52c-489b-8a1d-64ac1aed714c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.523532 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2a5d3098-d52c-489b-8a1d-64ac1aed714c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.523594 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2a5d3098-d52c-489b-8a1d-64ac1aed714c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.523702 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2a5d3098-d52c-489b-8a1d-64ac1aed714c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.524417 4945 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.524532 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-65a3a386-336e-4b54-abec-f4916a516c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65a3a386-336e-4b54-abec-f4916a516c42\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1d4b16e78021504bfff2e586d3387de3c3cb8321ff7b40542bbaf4c3df837d83/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.524553 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2a5d3098-d52c-489b-8a1d-64ac1aed714c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.536097 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snvnt\" (UniqueName: \"kubernetes.io/projected/2a5d3098-d52c-489b-8a1d-64ac1aed714c-kube-api-access-snvnt\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.546865 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-65a3a386-336e-4b54-abec-f4916a516c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65a3a386-336e-4b54-abec-f4916a516c42\") pod \"rabbitmq-cell1-server-0\" (UID: \"2a5d3098-d52c-489b-8a1d-64ac1aed714c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.604909 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 09 00:41:34 crc kubenswrapper[4945]: I0109 00:41:34.617149 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 09 00:41:35 crc kubenswrapper[4945]: I0109 00:41:35.049551 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 09 00:41:35 crc kubenswrapper[4945]: I0109 00:41:35.157442 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc1824e6-f578-45fb-9536-91976e922955","Type":"ContainerStarted","Data":"2796fa0cf27cb4268d7553759fb9f98c172a874834b9387274f26f63df7ef207"}
Jan 09 00:41:35 crc kubenswrapper[4945]: I0109 00:41:35.159831 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg" event={"ID":"2450d5ee-cd3f-4201-a1a5-8c2c74083937","Type":"ContainerDied","Data":"e553c0b5325a3f959fde289abb13a8355324f4d7da5a099be0426c074ef5cc48"}
Jan 09 00:41:35 crc kubenswrapper[4945]: I0109 00:41:35.159864 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-8rcpg"
Jan 09 00:41:35 crc kubenswrapper[4945]: I0109 00:41:35.159881 4945 scope.go:117] "RemoveContainer" containerID="f4c0d7c5c10267e91aac60b34360601ed0e22f954dd8c58c0473ebce5b995994"
Jan 09 00:41:35 crc kubenswrapper[4945]: I0109 00:41:35.174320 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a5d3098-d52c-489b-8a1d-64ac1aed714c","Type":"ContainerStarted","Data":"5da27be8acad81b8341e76e779bb7744351265f87cd4f9adb0a444a77d7b99bd"}
Jan 09 00:41:35 crc kubenswrapper[4945]: I0109 00:41:35.196835 4945 scope.go:117] "RemoveContainer" containerID="af9985d111de42164a39fd4e5dd4e1702458d395cdf3d74e7b400312ba64ea21"
Jan 09 00:41:35 crc kubenswrapper[4945]: I0109 00:41:35.216497 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-8rcpg"]
Jan 09 00:41:35 crc kubenswrapper[4945]: I0109 00:41:35.230071 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-8rcpg"]
Jan 09 00:41:36 crc kubenswrapper[4945]: I0109 00:41:36.010747 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2450d5ee-cd3f-4201-a1a5-8c2c74083937" path="/var/lib/kubelet/pods/2450d5ee-cd3f-4201-a1a5-8c2c74083937/volumes"
Jan 09 00:41:36 crc kubenswrapper[4945]: I0109 00:41:36.011610 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2886ea2-b692-46ae-896d-3a5ff19ae5f8" path="/var/lib/kubelet/pods/e2886ea2-b692-46ae-896d-3a5ff19ae5f8/volumes"
Jan 09 00:41:36 crc kubenswrapper[4945]: I0109 00:41:36.184592 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6cbk7" podUID="ed7ffec1-7a53-452c-9ad4-81edbea4bc74" containerName="registry-server" containerID="cri-o://580777926de72b91c2b8798e8749661256465ba5bceb9ded3193765c3665596f" gracePeriod=2
Jan 09 00:41:36 crc kubenswrapper[4945]: I0109 00:41:36.185643 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc1824e6-f578-45fb-9536-91976e922955","Type":"ContainerStarted","Data":"92e497d3f3ad80ab33584b5725683a7f9945b547f88a8da21ae9c5b8d3d702df"}
Jan 09 00:41:36 crc kubenswrapper[4945]: I0109 00:41:36.654898 4945 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cbk7" Jan 09 00:41:36 crc kubenswrapper[4945]: I0109 00:41:36.757622 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7qjg\" (UniqueName: \"kubernetes.io/projected/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-kube-api-access-s7qjg\") pod \"ed7ffec1-7a53-452c-9ad4-81edbea4bc74\" (UID: \"ed7ffec1-7a53-452c-9ad4-81edbea4bc74\") " Jan 09 00:41:36 crc kubenswrapper[4945]: I0109 00:41:36.757736 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-utilities\") pod \"ed7ffec1-7a53-452c-9ad4-81edbea4bc74\" (UID: \"ed7ffec1-7a53-452c-9ad4-81edbea4bc74\") " Jan 09 00:41:36 crc kubenswrapper[4945]: I0109 00:41:36.757826 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-catalog-content\") pod \"ed7ffec1-7a53-452c-9ad4-81edbea4bc74\" (UID: \"ed7ffec1-7a53-452c-9ad4-81edbea4bc74\") " Jan 09 00:41:36 crc kubenswrapper[4945]: I0109 00:41:36.758829 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-utilities" (OuterVolumeSpecName: "utilities") pod "ed7ffec1-7a53-452c-9ad4-81edbea4bc74" (UID: "ed7ffec1-7a53-452c-9ad4-81edbea4bc74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:41:36 crc kubenswrapper[4945]: I0109 00:41:36.762316 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-kube-api-access-s7qjg" (OuterVolumeSpecName: "kube-api-access-s7qjg") pod "ed7ffec1-7a53-452c-9ad4-81edbea4bc74" (UID: "ed7ffec1-7a53-452c-9ad4-81edbea4bc74"). InnerVolumeSpecName "kube-api-access-s7qjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:41:36 crc kubenswrapper[4945]: I0109 00:41:36.781243 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed7ffec1-7a53-452c-9ad4-81edbea4bc74" (UID: "ed7ffec1-7a53-452c-9ad4-81edbea4bc74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:41:36 crc kubenswrapper[4945]: I0109 00:41:36.859822 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:36 crc kubenswrapper[4945]: I0109 00:41:36.859862 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7qjg\" (UniqueName: \"kubernetes.io/projected/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-kube-api-access-s7qjg\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:36 crc kubenswrapper[4945]: I0109 00:41:36.859875 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed7ffec1-7a53-452c-9ad4-81edbea4bc74-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 00:41:37 crc kubenswrapper[4945]: I0109 00:41:37.196614 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a5d3098-d52c-489b-8a1d-64ac1aed714c","Type":"ContainerStarted","Data":"e7009fa1555844f50740e0a00e83fa42aaf9f91f2959334a59c5c3473c934d21"} Jan 09 00:41:37 crc kubenswrapper[4945]: I0109 00:41:37.200774 4945 generic.go:334] "Generic (PLEG): container finished" podID="ed7ffec1-7a53-452c-9ad4-81edbea4bc74" containerID="580777926de72b91c2b8798e8749661256465ba5bceb9ded3193765c3665596f" exitCode=0 Jan 09 00:41:37 crc kubenswrapper[4945]: I0109 00:41:37.200874 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cbk7" Jan 09 00:41:37 crc kubenswrapper[4945]: I0109 00:41:37.200969 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cbk7" event={"ID":"ed7ffec1-7a53-452c-9ad4-81edbea4bc74","Type":"ContainerDied","Data":"580777926de72b91c2b8798e8749661256465ba5bceb9ded3193765c3665596f"} Jan 09 00:41:37 crc kubenswrapper[4945]: I0109 00:41:37.201079 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cbk7" event={"ID":"ed7ffec1-7a53-452c-9ad4-81edbea4bc74","Type":"ContainerDied","Data":"d19ba451a864eced95db6e3402beb476e3aa8351044e4d18c260c1b59024a01d"} Jan 09 00:41:37 crc kubenswrapper[4945]: I0109 00:41:37.201122 4945 scope.go:117] "RemoveContainer" containerID="580777926de72b91c2b8798e8749661256465ba5bceb9ded3193765c3665596f" Jan 09 00:41:37 crc kubenswrapper[4945]: I0109 00:41:37.250569 4945 scope.go:117] "RemoveContainer" containerID="42228ae346a015e73849959501f580755be6b71a51c9edf58a52fe7900bee27c" Jan 09 00:41:37 crc kubenswrapper[4945]: I0109 00:41:37.257976 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cbk7"] Jan 09 00:41:37 crc kubenswrapper[4945]: I0109 00:41:37.264136 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cbk7"] Jan 09 00:41:37 crc kubenswrapper[4945]: I0109 00:41:37.282561 4945 scope.go:117] "RemoveContainer" containerID="b5913599e95bf7edad130576d441646e511b070bb12caf314dd44319e3da6ff8" Jan 09 00:41:37 crc kubenswrapper[4945]: I0109 00:41:37.314241 4945 scope.go:117] "RemoveContainer" containerID="580777926de72b91c2b8798e8749661256465ba5bceb9ded3193765c3665596f" Jan 09 00:41:37 crc kubenswrapper[4945]: E0109 00:41:37.314854 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"580777926de72b91c2b8798e8749661256465ba5bceb9ded3193765c3665596f\": container with ID starting with 580777926de72b91c2b8798e8749661256465ba5bceb9ded3193765c3665596f not found: ID does not exist" containerID="580777926de72b91c2b8798e8749661256465ba5bceb9ded3193765c3665596f" Jan 09 00:41:37 crc kubenswrapper[4945]: I0109 00:41:37.314895 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"580777926de72b91c2b8798e8749661256465ba5bceb9ded3193765c3665596f"} err="failed to get container status \"580777926de72b91c2b8798e8749661256465ba5bceb9ded3193765c3665596f\": rpc error: code = NotFound desc = could not find container \"580777926de72b91c2b8798e8749661256465ba5bceb9ded3193765c3665596f\": container with ID starting with 580777926de72b91c2b8798e8749661256465ba5bceb9ded3193765c3665596f not found: ID does not exist" Jan 09 00:41:37 crc kubenswrapper[4945]: I0109 00:41:37.314927 4945 scope.go:117] "RemoveContainer" containerID="42228ae346a015e73849959501f580755be6b71a51c9edf58a52fe7900bee27c" Jan 09 00:41:37 crc kubenswrapper[4945]: E0109 00:41:37.315556 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42228ae346a015e73849959501f580755be6b71a51c9edf58a52fe7900bee27c\": container with ID starting with 42228ae346a015e73849959501f580755be6b71a51c9edf58a52fe7900bee27c not found: ID does not exist" containerID="42228ae346a015e73849959501f580755be6b71a51c9edf58a52fe7900bee27c" Jan 09 00:41:37 crc kubenswrapper[4945]: I0109 00:41:37.315603 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42228ae346a015e73849959501f580755be6b71a51c9edf58a52fe7900bee27c"} err="failed to get container status \"42228ae346a015e73849959501f580755be6b71a51c9edf58a52fe7900bee27c\": rpc error: code = NotFound desc = could not find container \"42228ae346a015e73849959501f580755be6b71a51c9edf58a52fe7900bee27c\": container with ID starting with 42228ae346a015e73849959501f580755be6b71a51c9edf58a52fe7900bee27c not found: ID does not exist" Jan 09 00:41:37 crc kubenswrapper[4945]: I0109 00:41:37.315622 4945 scope.go:117] "RemoveContainer" containerID="b5913599e95bf7edad130576d441646e511b070bb12caf314dd44319e3da6ff8" Jan 09 00:41:37 crc kubenswrapper[4945]: E0109 00:41:37.316668 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5913599e95bf7edad130576d441646e511b070bb12caf314dd44319e3da6ff8\": container with ID starting with b5913599e95bf7edad130576d441646e511b070bb12caf314dd44319e3da6ff8 not found: ID does not exist" containerID="b5913599e95bf7edad130576d441646e511b070bb12caf314dd44319e3da6ff8" Jan 09 00:41:37 crc kubenswrapper[4945]: I0109 00:41:37.316729 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5913599e95bf7edad130576d441646e511b070bb12caf314dd44319e3da6ff8"} err="failed to get container status \"b5913599e95bf7edad130576d441646e511b070bb12caf314dd44319e3da6ff8\": rpc error: code = NotFound desc = could not find container \"b5913599e95bf7edad130576d441646e511b070bb12caf314dd44319e3da6ff8\": container with ID starting with b5913599e95bf7edad130576d441646e511b070bb12caf314dd44319e3da6ff8 not found: ID does not exist" Jan 09 00:41:38 crc kubenswrapper[4945]: I0109 00:41:38.012313 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed7ffec1-7a53-452c-9ad4-81edbea4bc74" 
path="/var/lib/kubelet/pods/ed7ffec1-7a53-452c-9ad4-81edbea4bc74/volumes" Jan 09 00:41:45 crc kubenswrapper[4945]: I0109 00:41:45.000813 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:41:45 crc kubenswrapper[4945]: E0109 00:41:45.001569 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:41:58 crc kubenswrapper[4945]: I0109 00:41:58.000227 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:41:58 crc kubenswrapper[4945]: E0109 00:41:58.000950 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:42:04 crc kubenswrapper[4945]: I0109 00:42:04.606257 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dkbcd"] Jan 09 00:42:04 crc kubenswrapper[4945]: E0109 00:42:04.607106 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7ffec1-7a53-452c-9ad4-81edbea4bc74" containerName="registry-server" Jan 09 00:42:04 crc kubenswrapper[4945]: I0109 00:42:04.607120 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7ffec1-7a53-452c-9ad4-81edbea4bc74" containerName="registry-server" Jan 09 00:42:04 crc kubenswrapper[4945]: E0109 00:42:04.607133 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7ffec1-7a53-452c-9ad4-81edbea4bc74" containerName="extract-content" Jan 09 00:42:04 crc kubenswrapper[4945]: I0109 00:42:04.607139 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7ffec1-7a53-452c-9ad4-81edbea4bc74" containerName="extract-content" Jan 09 00:42:04 crc kubenswrapper[4945]: E0109 00:42:04.607164 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7ffec1-7a53-452c-9ad4-81edbea4bc74" containerName="extract-utilities" Jan 09 00:42:04 crc kubenswrapper[4945]: I0109 00:42:04.607171 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7ffec1-7a53-452c-9ad4-81edbea4bc74" containerName="extract-utilities" Jan 09 00:42:04 crc kubenswrapper[4945]: I0109 00:42:04.607300 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7ffec1-7a53-452c-9ad4-81edbea4bc74" containerName="registry-server" Jan 09 00:42:04 crc kubenswrapper[4945]: I0109 00:42:04.608381 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dkbcd" Jan 09 00:42:04 crc kubenswrapper[4945]: I0109 00:42:04.620869 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dkbcd"] Jan 09 00:42:04 crc kubenswrapper[4945]: I0109 00:42:04.680571 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-utilities\") pod \"community-operators-dkbcd\" (UID: \"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801\") " pod="openshift-marketplace/community-operators-dkbcd" Jan 09 00:42:04 crc kubenswrapper[4945]: I0109 00:42:04.680693 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tggrs\" (UniqueName: \"kubernetes.io/projected/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-kube-api-access-tggrs\") pod \"community-operators-dkbcd\" (UID: \"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801\") " pod="openshift-marketplace/community-operators-dkbcd" Jan 09 00:42:04 crc kubenswrapper[4945]: I0109 00:42:04.680722 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-catalog-content\") pod \"community-operators-dkbcd\" (UID: \"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801\") " pod="openshift-marketplace/community-operators-dkbcd" Jan 09 00:42:04 crc kubenswrapper[4945]: I0109 00:42:04.781844 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tggrs\" (UniqueName: \"kubernetes.io/projected/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-kube-api-access-tggrs\") pod \"community-operators-dkbcd\" (UID: \"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801\") " pod="openshift-marketplace/community-operators-dkbcd" Jan 09 00:42:04 crc kubenswrapper[4945]: I0109 00:42:04.781929 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-catalog-content\") pod \"community-operators-dkbcd\" (UID: \"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801\") " pod="openshift-marketplace/community-operators-dkbcd" Jan 09 00:42:04 crc kubenswrapper[4945]: I0109 00:42:04.781986 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-utilities\") pod \"community-operators-dkbcd\" (UID: \"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801\") " pod="openshift-marketplace/community-operators-dkbcd" Jan 09 00:42:04 crc kubenswrapper[4945]: I0109 00:42:04.782699 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-catalog-content\") pod \"community-operators-dkbcd\" (UID: \"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801\") " pod="openshift-marketplace/community-operators-dkbcd" Jan 09 00:42:04 crc kubenswrapper[4945]: I0109 00:42:04.782730 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-utilities\") pod \"community-operators-dkbcd\" (UID: \"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801\") " pod="openshift-marketplace/community-operators-dkbcd" Jan 09 00:42:04 crc kubenswrapper[4945]: I0109 00:42:04.812252 4945 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tggrs\" (UniqueName: \"kubernetes.io/projected/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-kube-api-access-tggrs\") pod \"community-operators-dkbcd\" (UID: \"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801\") " pod="openshift-marketplace/community-operators-dkbcd" Jan 09 00:42:04 crc kubenswrapper[4945]: I0109 00:42:04.931158 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dkbcd" Jan 09 00:42:05 crc kubenswrapper[4945]: I0109 00:42:05.481859 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dkbcd"] Jan 09 00:42:05 crc kubenswrapper[4945]: W0109 00:42:05.489558 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e85bcc1_7b6c_4c1a_88f1_c3d0c0652801.slice/crio-7d5aec84fd6cd8069073b571b0470426cc659190bf7da4a7f7395976f9324d68 WatchSource:0}: Error finding container 7d5aec84fd6cd8069073b571b0470426cc659190bf7da4a7f7395976f9324d68: Status 404 returned error can't find the container with id 7d5aec84fd6cd8069073b571b0470426cc659190bf7da4a7f7395976f9324d68 Jan 09 00:42:06 crc kubenswrapper[4945]: I0109 00:42:06.414676 4945 generic.go:334] "Generic (PLEG): container finished" podID="4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801" containerID="52e8328aae2a01cb1be2a5b422a0d2ee421830ca9bcd7081dba8b572de384078" exitCode=0 Jan 09 00:42:06 crc kubenswrapper[4945]: I0109 00:42:06.414801 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkbcd" event={"ID":"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801","Type":"ContainerDied","Data":"52e8328aae2a01cb1be2a5b422a0d2ee421830ca9bcd7081dba8b572de384078"} Jan 09 00:42:06 crc kubenswrapper[4945]: I0109 00:42:06.415013 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkbcd" event={"ID":"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801","Type":"ContainerStarted","Data":"7d5aec84fd6cd8069073b571b0470426cc659190bf7da4a7f7395976f9324d68"} Jan 09 00:42:06 crc kubenswrapper[4945]: I0109 00:42:06.416960 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 00:42:07 crc kubenswrapper[4945]: I0109 00:42:07.423693 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkbcd" event={"ID":"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801","Type":"ContainerStarted","Data":"454a24b28e0835004fae96df968d316aaa8f3521fa57aae7e029561d022d757e"} Jan 09 00:42:08 crc kubenswrapper[4945]: I0109 00:42:08.432276 4945 generic.go:334] "Generic (PLEG): container finished" podID="cc1824e6-f578-45fb-9536-91976e922955" containerID="92e497d3f3ad80ab33584b5725683a7f9945b547f88a8da21ae9c5b8d3d702df" exitCode=0 Jan 09 00:42:08 crc kubenswrapper[4945]: I0109 00:42:08.432384 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc1824e6-f578-45fb-9536-91976e922955","Type":"ContainerDied","Data":"92e497d3f3ad80ab33584b5725683a7f9945b547f88a8da21ae9c5b8d3d702df"} Jan 09 00:42:08 crc kubenswrapper[4945]: I0109 00:42:08.436611 4945 generic.go:334] "Generic (PLEG): container finished" podID="4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801" containerID="454a24b28e0835004fae96df968d316aaa8f3521fa57aae7e029561d022d757e" exitCode=0 Jan 09 00:42:08 crc kubenswrapper[4945]: I0109 00:42:08.436665 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-dkbcd" event={"ID":"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801","Type":"ContainerDied","Data":"454a24b28e0835004fae96df968d316aaa8f3521fa57aae7e029561d022d757e"} Jan 09 00:42:09 crc kubenswrapper[4945]: I0109 00:42:09.000869 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:42:09 crc kubenswrapper[4945]: E0109 00:42:09.001453 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:42:09 crc kubenswrapper[4945]: I0109 00:42:09.446058 4945 generic.go:334] "Generic (PLEG): container finished" podID="2a5d3098-d52c-489b-8a1d-64ac1aed714c" containerID="e7009fa1555844f50740e0a00e83fa42aaf9f91f2959334a59c5c3473c934d21" exitCode=0 Jan 09 00:42:09 crc kubenswrapper[4945]: I0109 00:42:09.446136 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a5d3098-d52c-489b-8a1d-64ac1aed714c","Type":"ContainerDied","Data":"e7009fa1555844f50740e0a00e83fa42aaf9f91f2959334a59c5c3473c934d21"} Jan 09 00:42:09 crc kubenswrapper[4945]: I0109 00:42:09.448726 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc1824e6-f578-45fb-9536-91976e922955","Type":"ContainerStarted","Data":"b274faa11ca5eea34471cf3e0cd5195e181fcc499ef7879c5ca4ccccdb7b12b2"} Jan 09 00:42:09 crc kubenswrapper[4945]: I0109 00:42:09.449041 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 09 00:42:09 crc kubenswrapper[4945]: I0109 00:42:09.452602 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkbcd" event={"ID":"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801","Type":"ContainerStarted","Data":"f7f304c91f7ac162dfd454bd8c62348637beaee10657b8cda5a8db8e559d7593"} Jan 09 00:42:09 crc kubenswrapper[4945]: I0109 00:42:09.510313 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.510289332 podStartE2EDuration="36.510289332s" podCreationTimestamp="2026-01-09 00:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:42:09.506005907 +0000 UTC m=+5199.817164873" watchObservedRunningTime="2026-01-09 00:42:09.510289332 +0000 UTC m=+5199.821448278" Jan 09 00:42:09 crc kubenswrapper[4945]: I0109 00:42:09.534281 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dkbcd" podStartSLOduration=2.971060307 podStartE2EDuration="5.53425023s" podCreationTimestamp="2026-01-09 00:42:04 +0000 UTC" firstStartedPulling="2026-01-09 00:42:06.416622841 +0000 UTC m=+5196.727781787" lastFinishedPulling="2026-01-09 00:42:08.979812744 +0000 UTC m=+5199.290971710" observedRunningTime="2026-01-09 00:42:09.531919103 +0000 UTC m=+5199.843078099" watchObservedRunningTime="2026-01-09 00:42:09.53425023 +0000 UTC m=+5199.845409186" Jan 09 00:42:10 crc kubenswrapper[4945]: I0109 00:42:10.461690 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2a5d3098-d52c-489b-8a1d-64ac1aed714c","Type":"ContainerStarted","Data":"f2f03c4db0a865816a151d5aa2571c3e762da03220d24e1ac06306e29dae64aa"} Jan 09 00:42:10 crc kubenswrapper[4945]: I0109 00:42:10.462662 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:42:10 crc kubenswrapper[4945]: I0109 00:42:10.485457 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.485437332 podStartE2EDuration="36.485437332s" podCreationTimestamp="2026-01-09 00:41:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:42:10.480355877 +0000 UTC m=+5200.791514813" watchObservedRunningTime="2026-01-09 00:42:10.485437332 +0000 UTC m=+5200.796596278" Jan 09 00:42:14 crc kubenswrapper[4945]: I0109 00:42:14.931593 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dkbcd" Jan 09 00:42:14 crc kubenswrapper[4945]: I0109 00:42:14.932200 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dkbcd" Jan 09 00:42:14 crc kubenswrapper[4945]: I0109 00:42:14.970955 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dkbcd" Jan 09 00:42:15 crc kubenswrapper[4945]: I0109 00:42:15.544789 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dkbcd" Jan 09 00:42:15 crc kubenswrapper[4945]: I0109 00:42:15.602732 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dkbcd"] Jan 09 00:42:17 crc kubenswrapper[4945]: I0109 00:42:17.507965 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dkbcd" podUID="4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801" containerName="registry-server" containerID="cri-o://f7f304c91f7ac162dfd454bd8c62348637beaee10657b8cda5a8db8e559d7593" gracePeriod=2 Jan 09 00:42:17 crc kubenswrapper[4945]: I0109 00:42:17.942025 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dkbcd" Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.071457 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tggrs\" (UniqueName: \"kubernetes.io/projected/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-kube-api-access-tggrs\") pod \"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801\" (UID: \"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801\") " Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.072214 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-catalog-content\") pod \"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801\" (UID: \"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801\") " Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.072393 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-utilities\") pod \"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801\" (UID: \"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801\") " Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.073460 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-utilities" (OuterVolumeSpecName: "utilities") pod "4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801" (UID: "4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.079100 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-kube-api-access-tggrs" (OuterVolumeSpecName: "kube-api-access-tggrs") pod "4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801" (UID: "4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801"). InnerVolumeSpecName "kube-api-access-tggrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.122556 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801" (UID: "4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.175028 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tggrs\" (UniqueName: \"kubernetes.io/projected/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-kube-api-access-tggrs\") on node \"crc\" DevicePath \"\"" Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.175083 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.175104 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.518392 4945 generic.go:334] "Generic (PLEG): container finished" podID="4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801" containerID="f7f304c91f7ac162dfd454bd8c62348637beaee10657b8cda5a8db8e559d7593" exitCode=0 Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.518441 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dkbcd" Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.518439 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkbcd" event={"ID":"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801","Type":"ContainerDied","Data":"f7f304c91f7ac162dfd454bd8c62348637beaee10657b8cda5a8db8e559d7593"} Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.518619 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkbcd" event={"ID":"4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801","Type":"ContainerDied","Data":"7d5aec84fd6cd8069073b571b0470426cc659190bf7da4a7f7395976f9324d68"} Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.518667 4945 scope.go:117] "RemoveContainer" containerID="f7f304c91f7ac162dfd454bd8c62348637beaee10657b8cda5a8db8e559d7593" Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.537142 4945 scope.go:117] "RemoveContainer" containerID="454a24b28e0835004fae96df968d316aaa8f3521fa57aae7e029561d022d757e" Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.558060 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dkbcd"] Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.558380 4945 scope.go:117] "RemoveContainer" containerID="52e8328aae2a01cb1be2a5b422a0d2ee421830ca9bcd7081dba8b572de384078" Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.565875 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dkbcd"] Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.586280 4945 scope.go:117] "RemoveContainer" containerID="f7f304c91f7ac162dfd454bd8c62348637beaee10657b8cda5a8db8e559d7593" Jan 09 00:42:18 crc kubenswrapper[4945]: E0109 00:42:18.586965 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7f304c91f7ac162dfd454bd8c62348637beaee10657b8cda5a8db8e559d7593\": container with ID starting with f7f304c91f7ac162dfd454bd8c62348637beaee10657b8cda5a8db8e559d7593 not found: ID does not exist" containerID="f7f304c91f7ac162dfd454bd8c62348637beaee10657b8cda5a8db8e559d7593" Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.587045 
4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7f304c91f7ac162dfd454bd8c62348637beaee10657b8cda5a8db8e559d7593"} err="failed to get container status \"f7f304c91f7ac162dfd454bd8c62348637beaee10657b8cda5a8db8e559d7593\": rpc error: code = NotFound desc = could not find container \"f7f304c91f7ac162dfd454bd8c62348637beaee10657b8cda5a8db8e559d7593\": container with ID starting with f7f304c91f7ac162dfd454bd8c62348637beaee10657b8cda5a8db8e559d7593 not found: ID does not exist" Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.587088 4945 scope.go:117] "RemoveContainer" containerID="454a24b28e0835004fae96df968d316aaa8f3521fa57aae7e029561d022d757e" Jan 09 00:42:18 crc kubenswrapper[4945]: E0109 00:42:18.587647 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"454a24b28e0835004fae96df968d316aaa8f3521fa57aae7e029561d022d757e\": container with ID starting with 454a24b28e0835004fae96df968d316aaa8f3521fa57aae7e029561d022d757e not found: ID does not exist" containerID="454a24b28e0835004fae96df968d316aaa8f3521fa57aae7e029561d022d757e" Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.587697 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454a24b28e0835004fae96df968d316aaa8f3521fa57aae7e029561d022d757e"} err="failed to get container status \"454a24b28e0835004fae96df968d316aaa8f3521fa57aae7e029561d022d757e\": rpc error: code = NotFound desc = could not find container \"454a24b28e0835004fae96df968d316aaa8f3521fa57aae7e029561d022d757e\": container with ID starting with 454a24b28e0835004fae96df968d316aaa8f3521fa57aae7e029561d022d757e not found: ID does not exist" Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.587737 4945 scope.go:117] "RemoveContainer" containerID="52e8328aae2a01cb1be2a5b422a0d2ee421830ca9bcd7081dba8b572de384078" Jan 09 00:42:18 crc kubenswrapper[4945]: E0109 00:42:18.588170 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52e8328aae2a01cb1be2a5b422a0d2ee421830ca9bcd7081dba8b572de384078\": container with ID starting with 52e8328aae2a01cb1be2a5b422a0d2ee421830ca9bcd7081dba8b572de384078 not found: ID does not exist" containerID="52e8328aae2a01cb1be2a5b422a0d2ee421830ca9bcd7081dba8b572de384078" Jan 09 00:42:18 crc kubenswrapper[4945]: I0109 00:42:18.588204 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e8328aae2a01cb1be2a5b422a0d2ee421830ca9bcd7081dba8b572de384078"} err="failed to get container status \"52e8328aae2a01cb1be2a5b422a0d2ee421830ca9bcd7081dba8b572de384078\": rpc error: code = NotFound desc = could not find container \"52e8328aae2a01cb1be2a5b422a0d2ee421830ca9bcd7081dba8b572de384078\": container with ID starting with 52e8328aae2a01cb1be2a5b422a0d2ee421830ca9bcd7081dba8b572de384078 not found: ID does not exist" Jan 09 00:42:20 crc kubenswrapper[4945]: I0109 00:42:20.011520 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801" path="/var/lib/kubelet/pods/4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801/volumes" Jan 09 00:42:23 crc kubenswrapper[4945]: I0109 00:42:23.001734 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:42:23 crc kubenswrapper[4945]: E0109 00:42:23.002675 4945 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:42:24 crc kubenswrapper[4945]: I0109 00:42:24.131253 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 09 00:42:24 crc kubenswrapper[4945]: I0109 00:42:24.608113 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 09 00:42:34 crc kubenswrapper[4945]: I0109 00:42:34.001536 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:42:34 crc kubenswrapper[4945]: E0109 00:42:34.002586 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:42:35 crc kubenswrapper[4945]: I0109 00:42:35.098064 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"] Jan 09 00:42:35 crc kubenswrapper[4945]: E0109 00:42:35.098604 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801" containerName="registry-server" Jan 09 00:42:35 crc kubenswrapper[4945]: I0109 00:42:35.098626 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801" containerName="registry-server" Jan 09 00:42:35 crc kubenswrapper[4945]: E0109 00:42:35.098654 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801" containerName="extract-utilities" Jan 09 00:42:35 crc kubenswrapper[4945]: I0109 00:42:35.098663 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801" containerName="extract-utilities" Jan 09 00:42:35 crc kubenswrapper[4945]: E0109 00:42:35.098691 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801" containerName="extract-content" Jan 09 00:42:35 crc kubenswrapper[4945]: I0109 00:42:35.098700 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801" containerName="extract-content" Jan 09 00:42:35 crc kubenswrapper[4945]: I0109 00:42:35.098887 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e85bcc1-7b6c-4c1a-88f1-c3d0c0652801" containerName="registry-server" Jan 09 00:42:35 crc kubenswrapper[4945]: I0109 00:42:35.099755 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Jan 09 00:42:35 crc kubenswrapper[4945]: I0109 00:42:35.102063 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vgpfs" Jan 09 00:42:35 crc kubenswrapper[4945]: I0109 00:42:35.121123 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Jan 09 00:42:35 crc kubenswrapper[4945]: I0109 00:42:35.140971 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8td2\" (UniqueName: \"kubernetes.io/projected/f0bc3c7a-ab06-4678-ace1-acb37bfe327a-kube-api-access-d8td2\") pod \"mariadb-client-1-default\" (UID: \"f0bc3c7a-ab06-4678-ace1-acb37bfe327a\") " pod="openstack/mariadb-client-1-default" Jan 09 00:42:35 crc kubenswrapper[4945]: I0109 00:42:35.242828 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8td2\" (UniqueName: \"kubernetes.io/projected/f0bc3c7a-ab06-4678-ace1-acb37bfe327a-kube-api-access-d8td2\") pod \"mariadb-client-1-default\" (UID: \"f0bc3c7a-ab06-4678-ace1-acb37bfe327a\") " pod="openstack/mariadb-client-1-default" Jan 09 00:42:35 crc kubenswrapper[4945]: I0109 00:42:35.266669 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8td2\" (UniqueName: \"kubernetes.io/projected/f0bc3c7a-ab06-4678-ace1-acb37bfe327a-kube-api-access-d8td2\") pod \"mariadb-client-1-default\" (UID: \"f0bc3c7a-ab06-4678-ace1-acb37bfe327a\") " pod="openstack/mariadb-client-1-default" Jan 09 00:42:35 crc kubenswrapper[4945]: I0109 00:42:35.433459 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Jan 09 00:42:35 crc kubenswrapper[4945]: I0109 00:42:35.891452 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Jan 09 00:42:36 crc kubenswrapper[4945]: I0109 00:42:36.649409 4945 generic.go:334] "Generic (PLEG): container finished" podID="f0bc3c7a-ab06-4678-ace1-acb37bfe327a" containerID="652ef72f73f570473a60e90d1ace883139cfbd7b551f3ddebd55b51c5fdeb292" exitCode=0 Jan 09 00:42:36 crc kubenswrapper[4945]: I0109 00:42:36.649470 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"f0bc3c7a-ab06-4678-ace1-acb37bfe327a","Type":"ContainerDied","Data":"652ef72f73f570473a60e90d1ace883139cfbd7b551f3ddebd55b51c5fdeb292"} Jan 09 00:42:36 crc kubenswrapper[4945]: I0109 00:42:36.649505 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"f0bc3c7a-ab06-4678-ace1-acb37bfe327a","Type":"ContainerStarted","Data":"013ccd0aec59f605c815f62638a185016561e329b054c962c7f5e0bef143a4ec"} Jan 09 00:42:37 crc kubenswrapper[4945]: I0109 00:42:37.964691 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Jan 09 00:42:37 crc kubenswrapper[4945]: I0109 00:42:37.991085 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_f0bc3c7a-ab06-4678-ace1-acb37bfe327a/mariadb-client-1-default/0.log" Jan 09 00:42:38 crc kubenswrapper[4945]: I0109 00:42:38.019533 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Jan 09 00:42:38 crc kubenswrapper[4945]: I0109 00:42:38.025674 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Jan 09 00:42:38 crc kubenswrapper[4945]: I0109 00:42:38.086492 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8td2\" (UniqueName: \"kubernetes.io/projected/f0bc3c7a-ab06-4678-ace1-acb37bfe327a-kube-api-access-d8td2\") pod \"f0bc3c7a-ab06-4678-ace1-acb37bfe327a\" (UID: \"f0bc3c7a-ab06-4678-ace1-acb37bfe327a\") " Jan 09 00:42:38 crc kubenswrapper[4945]: I0109 00:42:38.097288 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0bc3c7a-ab06-4678-ace1-acb37bfe327a-kube-api-access-d8td2" (OuterVolumeSpecName: "kube-api-access-d8td2") pod "f0bc3c7a-ab06-4678-ace1-acb37bfe327a" (UID: "f0bc3c7a-ab06-4678-ace1-acb37bfe327a"). InnerVolumeSpecName "kube-api-access-d8td2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:42:38 crc kubenswrapper[4945]: I0109 00:42:38.189222 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8td2\" (UniqueName: \"kubernetes.io/projected/f0bc3c7a-ab06-4678-ace1-acb37bfe327a-kube-api-access-d8td2\") on node \"crc\" DevicePath \"\"" Jan 09 00:42:38 crc kubenswrapper[4945]: I0109 00:42:38.539276 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Jan 09 00:42:38 crc kubenswrapper[4945]: E0109 00:42:38.540253 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0bc3c7a-ab06-4678-ace1-acb37bfe327a" containerName="mariadb-client-1-default" Jan 09 00:42:38 crc kubenswrapper[4945]: I0109 00:42:38.540281 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0bc3c7a-ab06-4678-ace1-acb37bfe327a" containerName="mariadb-client-1-default" Jan 09 00:42:38 crc kubenswrapper[4945]: I0109 00:42:38.540984 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0bc3c7a-ab06-4678-ace1-acb37bfe327a" containerName="mariadb-client-1-default" Jan 09 00:42:38 crc kubenswrapper[4945]: I0109 00:42:38.543544 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Jan 09 00:42:38 crc kubenswrapper[4945]: I0109 00:42:38.557551 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Jan 09 00:42:38 crc kubenswrapper[4945]: I0109 00:42:38.595901 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lfq4\" (UniqueName: \"kubernetes.io/projected/03c9a11e-4422-4854-9493-1dd01eb53653-kube-api-access-6lfq4\") pod \"mariadb-client-2-default\" (UID: \"03c9a11e-4422-4854-9493-1dd01eb53653\") " pod="openstack/mariadb-client-2-default" Jan 09 00:42:38 crc kubenswrapper[4945]: I0109 00:42:38.664804 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="013ccd0aec59f605c815f62638a185016561e329b054c962c7f5e0bef143a4ec" Jan 09 00:42:38 crc kubenswrapper[4945]: I0109 00:42:38.664877 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Jan 09 00:42:38 crc kubenswrapper[4945]: I0109 00:42:38.700063 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lfq4\" (UniqueName: \"kubernetes.io/projected/03c9a11e-4422-4854-9493-1dd01eb53653-kube-api-access-6lfq4\") pod \"mariadb-client-2-default\" (UID: \"03c9a11e-4422-4854-9493-1dd01eb53653\") " pod="openstack/mariadb-client-2-default" Jan 09 00:42:38 crc kubenswrapper[4945]: I0109 00:42:38.728534 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lfq4\" (UniqueName: \"kubernetes.io/projected/03c9a11e-4422-4854-9493-1dd01eb53653-kube-api-access-6lfq4\") pod \"mariadb-client-2-default\" (UID: \"03c9a11e-4422-4854-9493-1dd01eb53653\") " pod="openstack/mariadb-client-2-default" Jan 09 00:42:38 crc kubenswrapper[4945]: I0109 00:42:38.864454 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Jan 09 00:42:39 crc kubenswrapper[4945]: I0109 00:42:39.154811 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Jan 09 00:42:39 crc kubenswrapper[4945]: I0109 00:42:39.672919 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"03c9a11e-4422-4854-9493-1dd01eb53653","Type":"ContainerStarted","Data":"c57ab41e0d40f2c0f6d1853c2c75003ad2526af66665563936280c1a9112c362"} Jan 09 00:42:39 crc kubenswrapper[4945]: I0109 00:42:39.672971 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"03c9a11e-4422-4854-9493-1dd01eb53653","Type":"ContainerStarted","Data":"df7a05fe1db3a5a1744907747d69ed0b5a68a00a890b76a3f4d8a5e1e7e3f301"} Jan 09 00:42:39 crc kubenswrapper[4945]: I0109 00:42:39.690856 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-2-default" podStartSLOduration=1.69083482 podStartE2EDuration="1.69083482s" podCreationTimestamp="2026-01-09 00:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:42:39.688860671 +0000 UTC m=+5230.000019637" watchObservedRunningTime="2026-01-09 00:42:39.69083482 +0000 UTC m=+5230.001993766" Jan 09 00:42:40 crc kubenswrapper[4945]: I0109 00:42:40.011292 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0bc3c7a-ab06-4678-ace1-acb37bfe327a" path="/var/lib/kubelet/pods/f0bc3c7a-ab06-4678-ace1-acb37bfe327a/volumes" Jan 09 00:42:40 crc kubenswrapper[4945]: I0109 00:42:40.681761 4945 generic.go:334] "Generic (PLEG): container finished" podID="03c9a11e-4422-4854-9493-1dd01eb53653" containerID="c57ab41e0d40f2c0f6d1853c2c75003ad2526af66665563936280c1a9112c362" exitCode=1 Jan 09 00:42:40 crc kubenswrapper[4945]: I0109 00:42:40.681884 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"03c9a11e-4422-4854-9493-1dd01eb53653","Type":"ContainerDied","Data":"c57ab41e0d40f2c0f6d1853c2c75003ad2526af66665563936280c1a9112c362"} Jan 09 00:42:42 crc kubenswrapper[4945]: I0109 00:42:42.099653 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Jan 09 00:42:42 crc kubenswrapper[4945]: I0109 00:42:42.139537 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Jan 09 00:42:42 crc kubenswrapper[4945]: I0109 00:42:42.147009 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Jan 09 00:42:42 crc kubenswrapper[4945]: I0109 00:42:42.149609 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lfq4\" (UniqueName: \"kubernetes.io/projected/03c9a11e-4422-4854-9493-1dd01eb53653-kube-api-access-6lfq4\") pod \"03c9a11e-4422-4854-9493-1dd01eb53653\" (UID: \"03c9a11e-4422-4854-9493-1dd01eb53653\") " Jan 09 00:42:42 crc kubenswrapper[4945]: I0109 00:42:42.155135 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c9a11e-4422-4854-9493-1dd01eb53653-kube-api-access-6lfq4" (OuterVolumeSpecName: "kube-api-access-6lfq4") pod "03c9a11e-4422-4854-9493-1dd01eb53653" (UID: "03c9a11e-4422-4854-9493-1dd01eb53653"). InnerVolumeSpecName "kube-api-access-6lfq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:42:42 crc kubenswrapper[4945]: I0109 00:42:42.251964 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lfq4\" (UniqueName: \"kubernetes.io/projected/03c9a11e-4422-4854-9493-1dd01eb53653-kube-api-access-6lfq4\") on node \"crc\" DevicePath \"\"" Jan 09 00:42:42 crc kubenswrapper[4945]: I0109 00:42:42.582131 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Jan 09 00:42:42 crc kubenswrapper[4945]: E0109 00:42:42.582479 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c9a11e-4422-4854-9493-1dd01eb53653" containerName="mariadb-client-2-default" Jan 09 00:42:42 crc kubenswrapper[4945]: I0109 00:42:42.582502 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c9a11e-4422-4854-9493-1dd01eb53653" containerName="mariadb-client-2-default" Jan 09 00:42:42 crc kubenswrapper[4945]: I0109 00:42:42.582685 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="03c9a11e-4422-4854-9493-1dd01eb53653" containerName="mariadb-client-2-default" Jan 09 00:42:42 crc kubenswrapper[4945]: I0109 00:42:42.583330 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Jan 09 00:42:42 crc kubenswrapper[4945]: I0109 00:42:42.591879 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Jan 09 00:42:42 crc kubenswrapper[4945]: I0109 00:42:42.659550 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp7xg\" (UniqueName: \"kubernetes.io/projected/71128ea3-e4bf-43c2-88ca-6a0a96b41052-kube-api-access-cp7xg\") pod \"mariadb-client-1\" (UID: \"71128ea3-e4bf-43c2-88ca-6a0a96b41052\") " pod="openstack/mariadb-client-1" Jan 09 00:42:42 crc kubenswrapper[4945]: I0109 00:42:42.698924 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df7a05fe1db3a5a1744907747d69ed0b5a68a00a890b76a3f4d8a5e1e7e3f301" Jan 09 00:42:42 crc kubenswrapper[4945]: I0109 00:42:42.699024 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Jan 09 00:42:42 crc kubenswrapper[4945]: I0109 00:42:42.760938 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp7xg\" (UniqueName: \"kubernetes.io/projected/71128ea3-e4bf-43c2-88ca-6a0a96b41052-kube-api-access-cp7xg\") pod \"mariadb-client-1\" (UID: \"71128ea3-e4bf-43c2-88ca-6a0a96b41052\") " pod="openstack/mariadb-client-1" Jan 09 00:42:42 crc kubenswrapper[4945]: I0109 00:42:42.785965 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp7xg\" (UniqueName: \"kubernetes.io/projected/71128ea3-e4bf-43c2-88ca-6a0a96b41052-kube-api-access-cp7xg\") pod \"mariadb-client-1\" (UID: \"71128ea3-e4bf-43c2-88ca-6a0a96b41052\") " pod="openstack/mariadb-client-1" Jan 09 00:42:42 crc kubenswrapper[4945]: I0109 00:42:42.905399 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Jan 09 00:42:43 crc kubenswrapper[4945]: I0109 00:42:43.509496 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Jan 09 00:42:43 crc kubenswrapper[4945]: I0109 00:42:43.710528 4945 generic.go:334] "Generic (PLEG): container finished" podID="71128ea3-e4bf-43c2-88ca-6a0a96b41052" containerID="cf108471d9c21386ad45b7dbc3ddc8c7548e85efdd0395b169e572ac96f9be29" exitCode=0 Jan 09 00:42:43 crc kubenswrapper[4945]: I0109 00:42:43.710636 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"71128ea3-e4bf-43c2-88ca-6a0a96b41052","Type":"ContainerDied","Data":"cf108471d9c21386ad45b7dbc3ddc8c7548e85efdd0395b169e572ac96f9be29"} Jan 09 00:42:43 crc kubenswrapper[4945]: I0109 00:42:43.710923 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"71128ea3-e4bf-43c2-88ca-6a0a96b41052","Type":"ContainerStarted","Data":"26cbef9f057714dde7ceae2d9ad7eadbb8fd59163dd341cbb992474cfea6f209"} Jan 09 00:42:44 crc kubenswrapper[4945]: I0109 00:42:44.013873 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03c9a11e-4422-4854-9493-1dd01eb53653" path="/var/lib/kubelet/pods/03c9a11e-4422-4854-9493-1dd01eb53653/volumes" Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.061154 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.080585 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_71128ea3-e4bf-43c2-88ca-6a0a96b41052/mariadb-client-1/0.log" Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.092769 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp7xg\" (UniqueName: \"kubernetes.io/projected/71128ea3-e4bf-43c2-88ca-6a0a96b41052-kube-api-access-cp7xg\") pod \"71128ea3-e4bf-43c2-88ca-6a0a96b41052\" (UID: \"71128ea3-e4bf-43c2-88ca-6a0a96b41052\") " Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.100045 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71128ea3-e4bf-43c2-88ca-6a0a96b41052-kube-api-access-cp7xg" (OuterVolumeSpecName: "kube-api-access-cp7xg") pod "71128ea3-e4bf-43c2-88ca-6a0a96b41052" (UID: "71128ea3-e4bf-43c2-88ca-6a0a96b41052"). InnerVolumeSpecName "kube-api-access-cp7xg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.110478 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.118847 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.194922 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp7xg\" (UniqueName: \"kubernetes.io/projected/71128ea3-e4bf-43c2-88ca-6a0a96b41052-kube-api-access-cp7xg\") on node \"crc\" DevicePath \"\"" Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.559475 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Jan 09 00:42:45 crc kubenswrapper[4945]: E0109 00:42:45.559850 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71128ea3-e4bf-43c2-88ca-6a0a96b41052" containerName="mariadb-client-1" Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.559873 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="71128ea3-e4bf-43c2-88ca-6a0a96b41052" containerName="mariadb-client-1" Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.560068 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="71128ea3-e4bf-43c2-88ca-6a0a96b41052" containerName="mariadb-client-1" Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.560541 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.566738 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.606598 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l45v9\" (UniqueName: \"kubernetes.io/projected/d402c1a9-cdca-4382-9f6c-b59de3f89ec8-kube-api-access-l45v9\") pod \"mariadb-client-4-default\" (UID: \"d402c1a9-cdca-4382-9f6c-b59de3f89ec8\") " pod="openstack/mariadb-client-4-default" Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.708114 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l45v9\" (UniqueName: \"kubernetes.io/projected/d402c1a9-cdca-4382-9f6c-b59de3f89ec8-kube-api-access-l45v9\") pod \"mariadb-client-4-default\" (UID: \"d402c1a9-cdca-4382-9f6c-b59de3f89ec8\") " pod="openstack/mariadb-client-4-default" Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.726520 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.729095 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26cbef9f057714dde7ceae2d9ad7eadbb8fd59163dd341cbb992474cfea6f209" Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.739878 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l45v9\" (UniqueName: \"kubernetes.io/projected/d402c1a9-cdca-4382-9f6c-b59de3f89ec8-kube-api-access-l45v9\") pod \"mariadb-client-4-default\" (UID: \"d402c1a9-cdca-4382-9f6c-b59de3f89ec8\") " pod="openstack/mariadb-client-4-default" Jan 09 00:42:45 crc kubenswrapper[4945]: I0109 00:42:45.926633 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Jan 09 00:42:46 crc kubenswrapper[4945]: I0109 00:42:46.021335 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71128ea3-e4bf-43c2-88ca-6a0a96b41052" path="/var/lib/kubelet/pods/71128ea3-e4bf-43c2-88ca-6a0a96b41052/volumes" Jan 09 00:42:46 crc kubenswrapper[4945]: I0109 00:42:46.406815 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Jan 09 00:42:46 crc kubenswrapper[4945]: W0109 00:42:46.414101 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd402c1a9_cdca_4382_9f6c_b59de3f89ec8.slice/crio-d65aa976faecf7c40ed3a7cdd3565b620c75d654129f41be000cbb300dbac29a WatchSource:0}: Error finding container d65aa976faecf7c40ed3a7cdd3565b620c75d654129f41be000cbb300dbac29a: Status 404 returned error can't find the container with id d65aa976faecf7c40ed3a7cdd3565b620c75d654129f41be000cbb300dbac29a Jan 09 00:42:46 crc kubenswrapper[4945]: I0109 00:42:46.751244 4945 generic.go:334] "Generic (PLEG): container finished" podID="d402c1a9-cdca-4382-9f6c-b59de3f89ec8" containerID="cc5d3bd38a3ae510c5ea9981d4ee2a4a3359e7a7bd3f2d0db1434ccccfabe669" exitCode=0 Jan 09 00:42:46 crc kubenswrapper[4945]: I0109 00:42:46.751328 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"d402c1a9-cdca-4382-9f6c-b59de3f89ec8","Type":"ContainerDied","Data":"cc5d3bd38a3ae510c5ea9981d4ee2a4a3359e7a7bd3f2d0db1434ccccfabe669"} Jan 09 00:42:46 crc kubenswrapper[4945]: I0109 00:42:46.751540 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"d402c1a9-cdca-4382-9f6c-b59de3f89ec8","Type":"ContainerStarted","Data":"d65aa976faecf7c40ed3a7cdd3565b620c75d654129f41be000cbb300dbac29a"} Jan 09 00:42:48 crc kubenswrapper[4945]: I0109 00:42:48.159397 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Jan 09 00:42:48 crc kubenswrapper[4945]: I0109 00:42:48.184733 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_d402c1a9-cdca-4382-9f6c-b59de3f89ec8/mariadb-client-4-default/0.log" Jan 09 00:42:48 crc kubenswrapper[4945]: I0109 00:42:48.213497 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Jan 09 00:42:48 crc kubenswrapper[4945]: I0109 00:42:48.221782 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Jan 09 00:42:48 crc kubenswrapper[4945]: I0109 00:42:48.250630 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l45v9\" (UniqueName: \"kubernetes.io/projected/d402c1a9-cdca-4382-9f6c-b59de3f89ec8-kube-api-access-l45v9\") pod \"d402c1a9-cdca-4382-9f6c-b59de3f89ec8\" (UID: \"d402c1a9-cdca-4382-9f6c-b59de3f89ec8\") " Jan 09 00:42:48 crc kubenswrapper[4945]: I0109 00:42:48.256445 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d402c1a9-cdca-4382-9f6c-b59de3f89ec8-kube-api-access-l45v9" (OuterVolumeSpecName: "kube-api-access-l45v9") pod "d402c1a9-cdca-4382-9f6c-b59de3f89ec8" (UID: "d402c1a9-cdca-4382-9f6c-b59de3f89ec8"). InnerVolumeSpecName "kube-api-access-l45v9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:42:48 crc kubenswrapper[4945]: I0109 00:42:48.352229 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l45v9\" (UniqueName: \"kubernetes.io/projected/d402c1a9-cdca-4382-9f6c-b59de3f89ec8-kube-api-access-l45v9\") on node \"crc\" DevicePath \"\"" Jan 09 00:42:48 crc kubenswrapper[4945]: I0109 00:42:48.766935 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d65aa976faecf7c40ed3a7cdd3565b620c75d654129f41be000cbb300dbac29a" Jan 09 00:42:48 crc kubenswrapper[4945]: I0109 00:42:48.767062 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Jan 09 00:42:49 crc kubenswrapper[4945]: I0109 00:42:49.000402 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:42:49 crc kubenswrapper[4945]: E0109 00:42:49.000746 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:42:50 crc kubenswrapper[4945]: I0109 00:42:50.011964 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d402c1a9-cdca-4382-9f6c-b59de3f89ec8" path="/var/lib/kubelet/pods/d402c1a9-cdca-4382-9f6c-b59de3f89ec8/volumes" Jan 09 00:42:52 crc kubenswrapper[4945]: I0109 00:42:52.644310 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Jan 09 00:42:52 crc kubenswrapper[4945]: E0109 00:42:52.644717 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d402c1a9-cdca-4382-9f6c-b59de3f89ec8" containerName="mariadb-client-4-default" Jan 09 00:42:52 crc kubenswrapper[4945]: I0109 00:42:52.644734 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d402c1a9-cdca-4382-9f6c-b59de3f89ec8" containerName="mariadb-client-4-default" Jan 09 00:42:52 crc kubenswrapper[4945]: I0109 00:42:52.644932 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="d402c1a9-cdca-4382-9f6c-b59de3f89ec8" containerName="mariadb-client-4-default" Jan 09 00:42:52 crc kubenswrapper[4945]: I0109 00:42:52.645547 4945 util.go:30] "No sandbox for pod can be found. 
Jan 09 00:42:52 crc kubenswrapper[4945]: I0109 00:42:52.645547 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default"
Jan 09 00:42:52 crc kubenswrapper[4945]: I0109 00:42:52.653827 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"]
Jan 09 00:42:52 crc kubenswrapper[4945]: I0109 00:42:52.655521 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vgpfs"
Jan 09 00:42:52 crc kubenswrapper[4945]: I0109 00:42:52.822086 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6672\" (UniqueName: \"kubernetes.io/projected/3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f-kube-api-access-n6672\") pod \"mariadb-client-5-default\" (UID: \"3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f\") " pod="openstack/mariadb-client-5-default"
Jan 09 00:42:52 crc kubenswrapper[4945]: I0109 00:42:52.924126 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6672\" (UniqueName: \"kubernetes.io/projected/3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f-kube-api-access-n6672\") pod \"mariadb-client-5-default\" (UID: \"3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f\") " pod="openstack/mariadb-client-5-default"
Jan 09 00:42:52 crc kubenswrapper[4945]: I0109 00:42:52.947270 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6672\" (UniqueName: \"kubernetes.io/projected/3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f-kube-api-access-n6672\") pod \"mariadb-client-5-default\" (UID: \"3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f\") " pod="openstack/mariadb-client-5-default"
Jan 09 00:42:52 crc kubenswrapper[4945]: I0109 00:42:52.966797 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default"
Jan 09 00:42:53 crc kubenswrapper[4945]: I0109 00:42:53.445913 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"]
Jan 09 00:42:53 crc kubenswrapper[4945]: I0109 00:42:53.813483 4945 generic.go:334] "Generic (PLEG): container finished" podID="3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f" containerID="65888d6dad32b95212993485934cdfe9b5de233ff8b1ac28b6b939849b026055" exitCode=0
Jan 09 00:42:53 crc kubenswrapper[4945]: I0109 00:42:53.813597 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f","Type":"ContainerDied","Data":"65888d6dad32b95212993485934cdfe9b5de233ff8b1ac28b6b939849b026055"}
Jan 09 00:42:53 crc kubenswrapper[4945]: I0109 00:42:53.813777 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f","Type":"ContainerStarted","Data":"91b7eeaa98378b839a1fbfd5f683673dbe75199d42caa9e600f4997756a7457c"}
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.262923 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default"
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.282600 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f/mariadb-client-5-default/0.log"
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.310307 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"]
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.316058 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"]
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.358617 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6672\" (UniqueName: \"kubernetes.io/projected/3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f-kube-api-access-n6672\") pod \"3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f\" (UID: \"3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f\") "
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.364397 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f-kube-api-access-n6672" (OuterVolumeSpecName: "kube-api-access-n6672") pod "3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f" (UID: "3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f"). InnerVolumeSpecName "kube-api-access-n6672". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.448349 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"]
Jan 09 00:42:55 crc kubenswrapper[4945]: E0109 00:42:55.448884 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f" containerName="mariadb-client-5-default"
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.448907 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f" containerName="mariadb-client-5-default"
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.449145 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f" containerName="mariadb-client-5-default"
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.450058 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default"
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.460282 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6672\" (UniqueName: \"kubernetes.io/projected/3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f-kube-api-access-n6672\") on node \"crc\" DevicePath \"\""
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.469085 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"]
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.561659 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8vng\" (UniqueName: \"kubernetes.io/projected/b16787e3-9cbb-4a2f-a627-75e568544df2-kube-api-access-p8vng\") pod \"mariadb-client-6-default\" (UID: \"b16787e3-9cbb-4a2f-a627-75e568544df2\") " pod="openstack/mariadb-client-6-default"
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.663279 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8vng\" (UniqueName: \"kubernetes.io/projected/b16787e3-9cbb-4a2f-a627-75e568544df2-kube-api-access-p8vng\") pod \"mariadb-client-6-default\" (UID: \"b16787e3-9cbb-4a2f-a627-75e568544df2\") " pod="openstack/mariadb-client-6-default"
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.681134 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8vng\" (UniqueName: \"kubernetes.io/projected/b16787e3-9cbb-4a2f-a627-75e568544df2-kube-api-access-p8vng\") pod \"mariadb-client-6-default\" (UID: \"b16787e3-9cbb-4a2f-a627-75e568544df2\") " pod="openstack/mariadb-client-6-default"
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.770228 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default"
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.848756 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91b7eeaa98378b839a1fbfd5f683673dbe75199d42caa9e600f4997756a7457c"
Jan 09 00:42:55 crc kubenswrapper[4945]: I0109 00:42:55.848840 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default"
Jan 09 00:42:56 crc kubenswrapper[4945]: I0109 00:42:56.010178 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f" path="/var/lib/kubelet/pods/3c62f6b8-5d3f-406a-bf7f-7ed1ba88293f/volumes"
Jan 09 00:42:56 crc kubenswrapper[4945]: I0109 00:42:56.255320 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"]
Jan 09 00:42:56 crc kubenswrapper[4945]: W0109 00:42:56.263303 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb16787e3_9cbb_4a2f_a627_75e568544df2.slice/crio-6a5a9e29e171133df264021d57d8b05f3ed20f96b3f81d379bb289598be4e9f3 WatchSource:0}: Error finding container 6a5a9e29e171133df264021d57d8b05f3ed20f96b3f81d379bb289598be4e9f3: Status 404 returned error can't find the container with id 6a5a9e29e171133df264021d57d8b05f3ed20f96b3f81d379bb289598be4e9f3
Jan 09 00:42:56 crc kubenswrapper[4945]: I0109 00:42:56.857074 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"b16787e3-9cbb-4a2f-a627-75e568544df2","Type":"ContainerStarted","Data":"3769760ab2d4020d86abe427bf66a6213bf5c96c602a0384791cd76180d8a031"}
Jan 09 00:42:56 crc kubenswrapper[4945]: I0109 00:42:56.857469 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"b16787e3-9cbb-4a2f-a627-75e568544df2","Type":"ContainerStarted","Data":"6a5a9e29e171133df264021d57d8b05f3ed20f96b3f81d379bb289598be4e9f3"}
Jan 09 00:42:56 crc kubenswrapper[4945]: I0109 00:42:56.874825 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-6-default" podStartSLOduration=1.8747989189999998 podStartE2EDuration="1.874798919s" podCreationTimestamp="2026-01-09 00:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:42:56.874592404 +0000 UTC m=+5247.185751420" watchObservedRunningTime="2026-01-09 00:42:56.874798919 +0000 UTC m=+5247.185957895"
Jan 09 00:42:56 crc kubenswrapper[4945]: I0109 00:42:56.952667 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-6-default_b16787e3-9cbb-4a2f-a627-75e568544df2/mariadb-client-6-default/0.log"
Jan 09 00:42:57 crc kubenswrapper[4945]: I0109 00:42:57.866257 4945 generic.go:334] "Generic (PLEG): container finished" podID="b16787e3-9cbb-4a2f-a627-75e568544df2" containerID="3769760ab2d4020d86abe427bf66a6213bf5c96c602a0384791cd76180d8a031" exitCode=1
Jan 09 00:42:57 crc kubenswrapper[4945]: I0109 00:42:57.866317 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"b16787e3-9cbb-4a2f-a627-75e568544df2","Type":"ContainerDied","Data":"3769760ab2d4020d86abe427bf66a6213bf5c96c602a0384791cd76180d8a031"}
Jan 09 00:42:59 crc kubenswrapper[4945]: I0109 00:42:59.199888 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default"
Jan 09 00:42:59 crc kubenswrapper[4945]: I0109 00:42:59.240930 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"]
Jan 09 00:42:59 crc kubenswrapper[4945]: I0109 00:42:59.247636 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"]
Jan 09 00:42:59 crc kubenswrapper[4945]: I0109 00:42:59.318771 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8vng\" (UniqueName: \"kubernetes.io/projected/b16787e3-9cbb-4a2f-a627-75e568544df2-kube-api-access-p8vng\") pod \"b16787e3-9cbb-4a2f-a627-75e568544df2\" (UID: \"b16787e3-9cbb-4a2f-a627-75e568544df2\") "
Jan 09 00:42:59 crc kubenswrapper[4945]: I0109 00:42:59.329313 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b16787e3-9cbb-4a2f-a627-75e568544df2-kube-api-access-p8vng" (OuterVolumeSpecName: "kube-api-access-p8vng") pod "b16787e3-9cbb-4a2f-a627-75e568544df2" (UID: "b16787e3-9cbb-4a2f-a627-75e568544df2"). InnerVolumeSpecName "kube-api-access-p8vng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:42:59 crc kubenswrapper[4945]: I0109 00:42:59.380016 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"]
Jan 09 00:42:59 crc kubenswrapper[4945]: E0109 00:42:59.380403 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16787e3-9cbb-4a2f-a627-75e568544df2" containerName="mariadb-client-6-default"
Jan 09 00:42:59 crc kubenswrapper[4945]: I0109 00:42:59.380423 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16787e3-9cbb-4a2f-a627-75e568544df2" containerName="mariadb-client-6-default"
Jan 09 00:42:59 crc kubenswrapper[4945]: I0109 00:42:59.380627 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16787e3-9cbb-4a2f-a627-75e568544df2" containerName="mariadb-client-6-default"
Jan 09 00:42:59 crc kubenswrapper[4945]: I0109 00:42:59.381212 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default"
Jan 09 00:42:59 crc kubenswrapper[4945]: I0109 00:42:59.388788 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"]
Jan 09 00:42:59 crc kubenswrapper[4945]: I0109 00:42:59.437964 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8vng\" (UniqueName: \"kubernetes.io/projected/b16787e3-9cbb-4a2f-a627-75e568544df2-kube-api-access-p8vng\") on node \"crc\" DevicePath \"\""
Jan 09 00:42:59 crc kubenswrapper[4945]: I0109 00:42:59.539820 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp6zs\" (UniqueName: \"kubernetes.io/projected/174301d4-1836-4101-afc7-1d1e4568c637-kube-api-access-mp6zs\") pod \"mariadb-client-7-default\" (UID: \"174301d4-1836-4101-afc7-1d1e4568c637\") " pod="openstack/mariadb-client-7-default"
Jan 09 00:42:59 crc kubenswrapper[4945]: I0109 00:42:59.641026 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp6zs\" (UniqueName: \"kubernetes.io/projected/174301d4-1836-4101-afc7-1d1e4568c637-kube-api-access-mp6zs\") pod \"mariadb-client-7-default\" (UID: \"174301d4-1836-4101-afc7-1d1e4568c637\") " pod="openstack/mariadb-client-7-default"
Jan 09 00:42:59 crc kubenswrapper[4945]: I0109 00:42:59.660284 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp6zs\" (UniqueName: \"kubernetes.io/projected/174301d4-1836-4101-afc7-1d1e4568c637-kube-api-access-mp6zs\") pod \"mariadb-client-7-default\" (UID: \"174301d4-1836-4101-afc7-1d1e4568c637\") " pod="openstack/mariadb-client-7-default"
Jan 09 00:42:59 crc kubenswrapper[4945]: I0109 00:42:59.753603 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default"
Jan 09 00:42:59 crc kubenswrapper[4945]: I0109 00:42:59.888787 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a5a9e29e171133df264021d57d8b05f3ed20f96b3f81d379bb289598be4e9f3"
Jan 09 00:42:59 crc kubenswrapper[4945]: I0109 00:42:59.888884 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default"
Jan 09 00:43:00 crc kubenswrapper[4945]: I0109 00:43:00.007419 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5"
Jan 09 00:43:00 crc kubenswrapper[4945]: E0109 00:43:00.007788 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:43:00 crc kubenswrapper[4945]: I0109 00:43:00.010141 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b16787e3-9cbb-4a2f-a627-75e568544df2" path="/var/lib/kubelet/pods/b16787e3-9cbb-4a2f-a627-75e568544df2/volumes"
Jan 09 00:43:00 crc kubenswrapper[4945]: I0109 00:43:00.312138 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"]
Jan 09 00:43:00 crc kubenswrapper[4945]: I0109 00:43:00.901573 4945 generic.go:334] "Generic (PLEG): container finished" podID="174301d4-1836-4101-afc7-1d1e4568c637" containerID="cc80cb24ea02ac0ef5c53bbf92f131bf83f2b7c00ede2dce0af809083b1614bd" exitCode=0
Jan 09 00:43:00 crc kubenswrapper[4945]: I0109 00:43:00.901667 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"174301d4-1836-4101-afc7-1d1e4568c637","Type":"ContainerDied","Data":"cc80cb24ea02ac0ef5c53bbf92f131bf83f2b7c00ede2dce0af809083b1614bd"}
Jan 09 00:43:00 crc kubenswrapper[4945]: I0109 00:43:00.902118 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"174301d4-1836-4101-afc7-1d1e4568c637","Type":"ContainerStarted","Data":"1a2080bd7bd83bba8569e9c516373ac84fe10f42f56f17b02251ff1c2133637a"}
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.238774 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default"
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.258599 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_174301d4-1836-4101-afc7-1d1e4568c637/mariadb-client-7-default/0.log"
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.286639 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"]
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.291888 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"]
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.381582 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp6zs\" (UniqueName: \"kubernetes.io/projected/174301d4-1836-4101-afc7-1d1e4568c637-kube-api-access-mp6zs\") pod \"174301d4-1836-4101-afc7-1d1e4568c637\" (UID: \"174301d4-1836-4101-afc7-1d1e4568c637\") "
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.388327 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/174301d4-1836-4101-afc7-1d1e4568c637-kube-api-access-mp6zs" (OuterVolumeSpecName: "kube-api-access-mp6zs") pod "174301d4-1836-4101-afc7-1d1e4568c637" (UID: "174301d4-1836-4101-afc7-1d1e4568c637"). InnerVolumeSpecName "kube-api-access-mp6zs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.408545 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"]
Jan 09 00:43:02 crc kubenswrapper[4945]: E0109 00:43:02.409356 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174301d4-1836-4101-afc7-1d1e4568c637" containerName="mariadb-client-7-default"
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.409423 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="174301d4-1836-4101-afc7-1d1e4568c637" containerName="mariadb-client-7-default"
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.409759 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="174301d4-1836-4101-afc7-1d1e4568c637" containerName="mariadb-client-7-default"
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.410422 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2"
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.415480 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"]
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.483091 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp6zs\" (UniqueName: \"kubernetes.io/projected/174301d4-1836-4101-afc7-1d1e4568c637-kube-api-access-mp6zs\") on node \"crc\" DevicePath \"\""
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.584953 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rglxt\" (UniqueName: \"kubernetes.io/projected/c11f4ce2-74bd-4745-8e78-15a0a6303d1d-kube-api-access-rglxt\") pod \"mariadb-client-2\" (UID: \"c11f4ce2-74bd-4745-8e78-15a0a6303d1d\") " pod="openstack/mariadb-client-2"
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.686724 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rglxt\" (UniqueName: \"kubernetes.io/projected/c11f4ce2-74bd-4745-8e78-15a0a6303d1d-kube-api-access-rglxt\") pod \"mariadb-client-2\" (UID: \"c11f4ce2-74bd-4745-8e78-15a0a6303d1d\") " pod="openstack/mariadb-client-2"
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.709184 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rglxt\" (UniqueName: \"kubernetes.io/projected/c11f4ce2-74bd-4745-8e78-15a0a6303d1d-kube-api-access-rglxt\") pod \"mariadb-client-2\" (UID: \"c11f4ce2-74bd-4745-8e78-15a0a6303d1d\") " pod="openstack/mariadb-client-2"
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.748350 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2"
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.920590 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a2080bd7bd83bba8569e9c516373ac84fe10f42f56f17b02251ff1c2133637a"
Jan 09 00:43:02 crc kubenswrapper[4945]: I0109 00:43:02.920655 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default"
Jan 09 00:43:03 crc kubenswrapper[4945]: I0109 00:43:03.268892 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"]
Jan 09 00:43:03 crc kubenswrapper[4945]: W0109 00:43:03.276962 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc11f4ce2_74bd_4745_8e78_15a0a6303d1d.slice/crio-174d135c0cd174c8181f4166a8d03d0c412fb7736799d7a6f62b6f97ef300c70 WatchSource:0}: Error finding container 174d135c0cd174c8181f4166a8d03d0c412fb7736799d7a6f62b6f97ef300c70: Status 404 returned error can't find the container with id 174d135c0cd174c8181f4166a8d03d0c412fb7736799d7a6f62b6f97ef300c70
Jan 09 00:43:03 crc kubenswrapper[4945]: I0109 00:43:03.935780 4945 generic.go:334] "Generic (PLEG): container finished" podID="c11f4ce2-74bd-4745-8e78-15a0a6303d1d" containerID="aee23294eadbf328ca138ea4ebbdbdeae9ec0e729f66ef3ee6dc19974784ad63" exitCode=0
Jan 09 00:43:03 crc kubenswrapper[4945]: I0109 00:43:03.936060 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"c11f4ce2-74bd-4745-8e78-15a0a6303d1d","Type":"ContainerDied","Data":"aee23294eadbf328ca138ea4ebbdbdeae9ec0e729f66ef3ee6dc19974784ad63"}
Jan 09 00:43:03 crc kubenswrapper[4945]: I0109 00:43:03.936151 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"c11f4ce2-74bd-4745-8e78-15a0a6303d1d","Type":"ContainerStarted","Data":"174d135c0cd174c8181f4166a8d03d0c412fb7736799d7a6f62b6f97ef300c70"}
Jan 09 00:43:04 crc kubenswrapper[4945]: I0109 00:43:04.021109 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="174301d4-1836-4101-afc7-1d1e4568c637" path="/var/lib/kubelet/pods/174301d4-1836-4101-afc7-1d1e4568c637/volumes"
Jan 09 00:43:05 crc kubenswrapper[4945]: I0109 00:43:05.308363 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2"
Jan 09 00:43:05 crc kubenswrapper[4945]: I0109 00:43:05.331416 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_c11f4ce2-74bd-4745-8e78-15a0a6303d1d/mariadb-client-2/0.log"
Jan 09 00:43:05 crc kubenswrapper[4945]: I0109 00:43:05.361341 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"]
Jan 09 00:43:05 crc kubenswrapper[4945]: I0109 00:43:05.367145 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"]
Jan 09 00:43:05 crc kubenswrapper[4945]: I0109 00:43:05.432304 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rglxt\" (UniqueName: \"kubernetes.io/projected/c11f4ce2-74bd-4745-8e78-15a0a6303d1d-kube-api-access-rglxt\") pod \"c11f4ce2-74bd-4745-8e78-15a0a6303d1d\" (UID: \"c11f4ce2-74bd-4745-8e78-15a0a6303d1d\") "
Jan 09 00:43:05 crc kubenswrapper[4945]: I0109 00:43:05.438196 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c11f4ce2-74bd-4745-8e78-15a0a6303d1d-kube-api-access-rglxt" (OuterVolumeSpecName: "kube-api-access-rglxt") pod "c11f4ce2-74bd-4745-8e78-15a0a6303d1d" (UID: "c11f4ce2-74bd-4745-8e78-15a0a6303d1d"). InnerVolumeSpecName "kube-api-access-rglxt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:43:05 crc kubenswrapper[4945]: I0109 00:43:05.534540 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rglxt\" (UniqueName: \"kubernetes.io/projected/c11f4ce2-74bd-4745-8e78-15a0a6303d1d-kube-api-access-rglxt\") on node \"crc\" DevicePath \"\""
Jan 09 00:43:05 crc kubenswrapper[4945]: I0109 00:43:05.956013 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="174d135c0cd174c8181f4166a8d03d0c412fb7736799d7a6f62b6f97ef300c70"
Jan 09 00:43:05 crc kubenswrapper[4945]: I0109 00:43:05.956071 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2"
Jan 09 00:43:06 crc kubenswrapper[4945]: I0109 00:43:06.011149 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c11f4ce2-74bd-4745-8e78-15a0a6303d1d" path="/var/lib/kubelet/pods/c11f4ce2-74bd-4745-8e78-15a0a6303d1d/volumes"
Jan 09 00:43:15 crc kubenswrapper[4945]: I0109 00:43:15.000864 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5"
Jan 09 00:43:15 crc kubenswrapper[4945]: E0109 00:43:15.001917 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:43:26 crc kubenswrapper[4945]: I0109 00:43:26.001064 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5"
Jan 09 00:43:26 crc kubenswrapper[4945]: E0109 00:43:26.001858 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:43:35 crc kubenswrapper[4945]: I0109 00:43:35.876159 4945 scope.go:117] "RemoveContainer" containerID="19433dd7ebeded123b956bcc6f1000ef59ba9a41302a00d2d0d2c5031b060d9a"
Jan 09 00:43:37 crc kubenswrapper[4945]: I0109 00:43:37.001155 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5"
Jan 09 00:43:37 crc kubenswrapper[4945]: E0109 00:43:37.001449 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:43:48 crc kubenswrapper[4945]: I0109 00:43:48.003940 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5"
Jan 09 00:43:48 crc kubenswrapper[4945]: E0109 00:43:48.005381 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:44:03 crc kubenswrapper[4945]: I0109 00:44:03.000050 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5"
Jan 09 00:44:03 crc kubenswrapper[4945]: E0109 00:44:03.000995 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:44:17 crc kubenswrapper[4945]: I0109 00:44:17.001295 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5"
Jan 09 00:44:17 crc kubenswrapper[4945]: I0109 00:44:17.537835 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"f956e706e3b54964ca3dda380ed40fcf584aa07099065dbe710da5a117358406"}
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.140963 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb"]
Jan 09 00:45:00 crc kubenswrapper[4945]: E0109 00:45:00.141830 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11f4ce2-74bd-4745-8e78-15a0a6303d1d" containerName="mariadb-client-2"
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.141844 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11f4ce2-74bd-4745-8e78-15a0a6303d1d" containerName="mariadb-client-2"
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.142054 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11f4ce2-74bd-4745-8e78-15a0a6303d1d" containerName="mariadb-client-2"
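
The machine-config-daemon-vbm95 thread above shows CrashLoopBackOff working as designed: every "RemoveContainer" sync from 00:42:49 through 00:44:03 is refused with "back-off 5m0s restarting failed container", and only at 00:44:17, once the backoff window has elapsed, does the restart go through (ContainerStarted f956e706...). The 5m0s is the kubelet's restart-backoff ceiling; by upstream kubelet defaults the delay doubles per crash from 10s up to that 5-minute cap. A sketch of the schedule (illustrative arithmetic under those assumed defaults, not kubelet code):

    def crashloop_delays(restarts, initial=10.0, factor=2.0, cap=300.0):
        """Successive restart-backoff delays in seconds, doubling up to the cap."""
        delay = initial
        for _ in range(restarts):
            yield min(delay, cap)
            delay *= factor

    print(list(crashloop_delays(7)))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]

After roughly six crashes the container sits at the cap, which is why every retry in this window reports the same "back-off 5m0s".
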
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.142686 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb"
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.145704 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.149958 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.153373 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb"]
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.298466 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqtkr\" (UniqueName: \"kubernetes.io/projected/7cf06166-4b8d-4bd5-b20a-f77619a52a56-kube-api-access-bqtkr\") pod \"collect-profiles-29465325-vdhrb\" (UID: \"7cf06166-4b8d-4bd5-b20a-f77619a52a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb"
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.298542 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cf06166-4b8d-4bd5-b20a-f77619a52a56-config-volume\") pod \"collect-profiles-29465325-vdhrb\" (UID: \"7cf06166-4b8d-4bd5-b20a-f77619a52a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb"
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.298623 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cf06166-4b8d-4bd5-b20a-f77619a52a56-secret-volume\") pod \"collect-profiles-29465325-vdhrb\" (UID: \"7cf06166-4b8d-4bd5-b20a-f77619a52a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb"
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.400120 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cf06166-4b8d-4bd5-b20a-f77619a52a56-secret-volume\") pod \"collect-profiles-29465325-vdhrb\" (UID: \"7cf06166-4b8d-4bd5-b20a-f77619a52a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb"
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.400203 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqtkr\" (UniqueName: \"kubernetes.io/projected/7cf06166-4b8d-4bd5-b20a-f77619a52a56-kube-api-access-bqtkr\") pod \"collect-profiles-29465325-vdhrb\" (UID: \"7cf06166-4b8d-4bd5-b20a-f77619a52a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb"
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.400240 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cf06166-4b8d-4bd5-b20a-f77619a52a56-config-volume\") pod \"collect-profiles-29465325-vdhrb\" (UID: \"7cf06166-4b8d-4bd5-b20a-f77619a52a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb"
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.401330 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cf06166-4b8d-4bd5-b20a-f77619a52a56-config-volume\") pod \"collect-profiles-29465325-vdhrb\" (UID: \"7cf06166-4b8d-4bd5-b20a-f77619a52a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb"
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.406017 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cf06166-4b8d-4bd5-b20a-f77619a52a56-secret-volume\") pod \"collect-profiles-29465325-vdhrb\" (UID: \"7cf06166-4b8d-4bd5-b20a-f77619a52a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb"
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.420146 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqtkr\" (UniqueName: \"kubernetes.io/projected/7cf06166-4b8d-4bd5-b20a-f77619a52a56-kube-api-access-bqtkr\") pod \"collect-profiles-29465325-vdhrb\" (UID: \"7cf06166-4b8d-4bd5-b20a-f77619a52a56\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb"
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.468342 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb"
Jan 09 00:45:00 crc kubenswrapper[4945]: I0109 00:45:00.905630 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb"]
Jan 09 00:45:01 crc kubenswrapper[4945]: I0109 00:45:01.916906 4945 generic.go:334] "Generic (PLEG): container finished" podID="7cf06166-4b8d-4bd5-b20a-f77619a52a56" containerID="5e5b0d80a1527f5de36ebd0fe40ef783870f7719fae66e0bad7004f86fa0a085" exitCode=0
Jan 09 00:45:01 crc kubenswrapper[4945]: I0109 00:45:01.917053 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb" event={"ID":"7cf06166-4b8d-4bd5-b20a-f77619a52a56","Type":"ContainerDied","Data":"5e5b0d80a1527f5de36ebd0fe40ef783870f7719fae66e0bad7004f86fa0a085"}
Jan 09 00:45:01 crc kubenswrapper[4945]: I0109 00:45:01.917247 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb" event={"ID":"7cf06166-4b8d-4bd5-b20a-f77619a52a56","Type":"ContainerStarted","Data":"c862156d7a8ae8495fd7167d6ac80c53e6299385aa83e3e602cc6a4ec2302590"}
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb" Jan 09 00:45:03 crc kubenswrapper[4945]: I0109 00:45:03.356408 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cf06166-4b8d-4bd5-b20a-f77619a52a56-secret-volume\") pod \"7cf06166-4b8d-4bd5-b20a-f77619a52a56\" (UID: \"7cf06166-4b8d-4bd5-b20a-f77619a52a56\") " Jan 09 00:45:03 crc kubenswrapper[4945]: I0109 00:45:03.356547 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqtkr\" (UniqueName: \"kubernetes.io/projected/7cf06166-4b8d-4bd5-b20a-f77619a52a56-kube-api-access-bqtkr\") pod \"7cf06166-4b8d-4bd5-b20a-f77619a52a56\" (UID: \"7cf06166-4b8d-4bd5-b20a-f77619a52a56\") " Jan 09 00:45:03 crc kubenswrapper[4945]: I0109 00:45:03.356576 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cf06166-4b8d-4bd5-b20a-f77619a52a56-config-volume\") pod \"7cf06166-4b8d-4bd5-b20a-f77619a52a56\" (UID: \"7cf06166-4b8d-4bd5-b20a-f77619a52a56\") " Jan 09 00:45:03 crc kubenswrapper[4945]: I0109 00:45:03.357377 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cf06166-4b8d-4bd5-b20a-f77619a52a56-config-volume" (OuterVolumeSpecName: "config-volume") pod "7cf06166-4b8d-4bd5-b20a-f77619a52a56" (UID: "7cf06166-4b8d-4bd5-b20a-f77619a52a56"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:45:03 crc kubenswrapper[4945]: I0109 00:45:03.362633 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf06166-4b8d-4bd5-b20a-f77619a52a56-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7cf06166-4b8d-4bd5-b20a-f77619a52a56" (UID: "7cf06166-4b8d-4bd5-b20a-f77619a52a56"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:45:03 crc kubenswrapper[4945]: I0109 00:45:03.362758 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf06166-4b8d-4bd5-b20a-f77619a52a56-kube-api-access-bqtkr" (OuterVolumeSpecName: "kube-api-access-bqtkr") pod "7cf06166-4b8d-4bd5-b20a-f77619a52a56" (UID: "7cf06166-4b8d-4bd5-b20a-f77619a52a56"). InnerVolumeSpecName "kube-api-access-bqtkr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:45:03 crc kubenswrapper[4945]: I0109 00:45:03.458260 4945 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7cf06166-4b8d-4bd5-b20a-f77619a52a56-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 00:45:03 crc kubenswrapper[4945]: I0109 00:45:03.458306 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqtkr\" (UniqueName: \"kubernetes.io/projected/7cf06166-4b8d-4bd5-b20a-f77619a52a56-kube-api-access-bqtkr\") on node \"crc\" DevicePath \"\"" Jan 09 00:45:03 crc kubenswrapper[4945]: I0109 00:45:03.458315 4945 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cf06166-4b8d-4bd5-b20a-f77619a52a56-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 00:45:03 crc kubenswrapper[4945]: I0109 00:45:03.933827 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb" event={"ID":"7cf06166-4b8d-4bd5-b20a-f77619a52a56","Type":"ContainerDied","Data":"c862156d7a8ae8495fd7167d6ac80c53e6299385aa83e3e602cc6a4ec2302590"} Jan 09 00:45:03 crc kubenswrapper[4945]: I0109 00:45:03.934249 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c862156d7a8ae8495fd7167d6ac80c53e6299385aa83e3e602cc6a4ec2302590" Jan 09 00:45:03 crc kubenswrapper[4945]: I0109 00:45:03.933917 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb" Jan 09 00:45:04 crc kubenswrapper[4945]: I0109 00:45:04.290791 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599"] Jan 09 00:45:04 crc kubenswrapper[4945]: I0109 00:45:04.296286 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465280-g2599"] Jan 09 00:45:04 crc kubenswrapper[4945]: I0109 00:45:04.981717 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wsbkt"] Jan 09 00:45:04 crc kubenswrapper[4945]: E0109 00:45:04.982274 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf06166-4b8d-4bd5-b20a-f77619a52a56" containerName="collect-profiles" Jan 09 00:45:04 crc kubenswrapper[4945]: I0109 00:45:04.982297 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf06166-4b8d-4bd5-b20a-f77619a52a56" containerName="collect-profiles" Jan 09 00:45:04 crc kubenswrapper[4945]: I0109 00:45:04.982489 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf06166-4b8d-4bd5-b20a-f77619a52a56" containerName="collect-profiles" Jan 09 00:45:04 crc kubenswrapper[4945]: I0109 00:45:04.983987 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wsbkt" Jan 09 00:45:05 crc kubenswrapper[4945]: I0109 00:45:05.009630 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wsbkt"] Jan 09 00:45:05 crc kubenswrapper[4945]: I0109 00:45:05.084029 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlx9l\" (UniqueName: \"kubernetes.io/projected/003369b9-62ab-4197-ae2b-84659b620b9e-kube-api-access-zlx9l\") pod \"redhat-operators-wsbkt\" (UID: \"003369b9-62ab-4197-ae2b-84659b620b9e\") " pod="openshift-marketplace/redhat-operators-wsbkt" Jan 09 00:45:05 crc kubenswrapper[4945]: I0109 00:45:05.084095 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/003369b9-62ab-4197-ae2b-84659b620b9e-utilities\") pod \"redhat-operators-wsbkt\" (UID: \"003369b9-62ab-4197-ae2b-84659b620b9e\") " pod="openshift-marketplace/redhat-operators-wsbkt" Jan 09 00:45:05 crc kubenswrapper[4945]: I0109 00:45:05.084133 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/003369b9-62ab-4197-ae2b-84659b620b9e-catalog-content\") pod \"redhat-operators-wsbkt\" (UID: \"003369b9-62ab-4197-ae2b-84659b620b9e\") " pod="openshift-marketplace/redhat-operators-wsbkt" Jan 09 00:45:05 crc kubenswrapper[4945]: I0109 00:45:05.185675 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlx9l\" (UniqueName: \"kubernetes.io/projected/003369b9-62ab-4197-ae2b-84659b620b9e-kube-api-access-zlx9l\") pod \"redhat-operators-wsbkt\" (UID: \"003369b9-62ab-4197-ae2b-84659b620b9e\") " pod="openshift-marketplace/redhat-operators-wsbkt" Jan 09 00:45:05 crc kubenswrapper[4945]: I0109 00:45:05.185747 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/003369b9-62ab-4197-ae2b-84659b620b9e-utilities\") pod \"redhat-operators-wsbkt\" (UID: \"003369b9-62ab-4197-ae2b-84659b620b9e\") " pod="openshift-marketplace/redhat-operators-wsbkt" Jan 09 00:45:05 crc kubenswrapper[4945]: I0109 00:45:05.185775 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/003369b9-62ab-4197-ae2b-84659b620b9e-catalog-content\") pod \"redhat-operators-wsbkt\" (UID: \"003369b9-62ab-4197-ae2b-84659b620b9e\") " pod="openshift-marketplace/redhat-operators-wsbkt" Jan 09 00:45:05 crc kubenswrapper[4945]: I0109 00:45:05.186537 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/003369b9-62ab-4197-ae2b-84659b620b9e-utilities\") pod \"redhat-operators-wsbkt\" (UID: \"003369b9-62ab-4197-ae2b-84659b620b9e\") " pod="openshift-marketplace/redhat-operators-wsbkt" Jan 09 00:45:05 crc kubenswrapper[4945]: I0109 00:45:05.186562 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/003369b9-62ab-4197-ae2b-84659b620b9e-catalog-content\") pod \"redhat-operators-wsbkt\" (UID: \"003369b9-62ab-4197-ae2b-84659b620b9e\") " pod="openshift-marketplace/redhat-operators-wsbkt" Jan 09 00:45:05 crc kubenswrapper[4945]: I0109 00:45:05.208081 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zlx9l\" (UniqueName: \"kubernetes.io/projected/003369b9-62ab-4197-ae2b-84659b620b9e-kube-api-access-zlx9l\") pod \"redhat-operators-wsbkt\" (UID: \"003369b9-62ab-4197-ae2b-84659b620b9e\") " pod="openshift-marketplace/redhat-operators-wsbkt" Jan 09 00:45:05 crc kubenswrapper[4945]: I0109 00:45:05.301717 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsbkt" Jan 09 00:45:05 crc kubenswrapper[4945]: I0109 00:45:05.784465 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wsbkt"] Jan 09 00:45:05 crc kubenswrapper[4945]: I0109 00:45:05.949647 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsbkt" event={"ID":"003369b9-62ab-4197-ae2b-84659b620b9e","Type":"ContainerStarted","Data":"5df1a2273f5072f4fdaa99f3ffa8f7fabc28416f54b1b1c839d9f094278c5532"} Jan 09 00:45:06 crc kubenswrapper[4945]: I0109 00:45:06.009303 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b57b79d3-0b89-4f67-afd3-024709104516" path="/var/lib/kubelet/pods/b57b79d3-0b89-4f67-afd3-024709104516/volumes" Jan 09 00:45:06 crc kubenswrapper[4945]: I0109 00:45:06.961574 4945 generic.go:334] "Generic (PLEG): container finished" podID="003369b9-62ab-4197-ae2b-84659b620b9e" containerID="de54ce9653ea416a58a86bf6913cf5a27230393855da54789ccec4d1a0dcddd3" exitCode=0 Jan 09 00:45:06 crc kubenswrapper[4945]: I0109 00:45:06.961971 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsbkt" event={"ID":"003369b9-62ab-4197-ae2b-84659b620b9e","Type":"ContainerDied","Data":"de54ce9653ea416a58a86bf6913cf5a27230393855da54789ccec4d1a0dcddd3"} Jan 09 00:45:07 crc kubenswrapper[4945]: I0109 00:45:07.970736 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsbkt" event={"ID":"003369b9-62ab-4197-ae2b-84659b620b9e","Type":"ContainerStarted","Data":"4d7cdae91e2e37720466fb04df538451d8f42f595cf71fce7f24be729a2f7f34"} Jan 09 00:45:08 crc kubenswrapper[4945]: I0109 00:45:08.981288 4945 generic.go:334] "Generic (PLEG): container finished" podID="003369b9-62ab-4197-ae2b-84659b620b9e" containerID="4d7cdae91e2e37720466fb04df538451d8f42f595cf71fce7f24be729a2f7f34" exitCode=0 Jan 09 00:45:08 crc kubenswrapper[4945]: I0109 00:45:08.981347 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsbkt" event={"ID":"003369b9-62ab-4197-ae2b-84659b620b9e","Type":"ContainerDied","Data":"4d7cdae91e2e37720466fb04df538451d8f42f595cf71fce7f24be729a2f7f34"} Jan 09 00:45:10 crc kubenswrapper[4945]: I0109 00:45:10.996798 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsbkt" event={"ID":"003369b9-62ab-4197-ae2b-84659b620b9e","Type":"ContainerStarted","Data":"9d43fe94e873b5efe4f39ef3dbd7d8a440b256a2b08b56e0186120f7e02bcdf5"} Jan 09 00:45:11 crc kubenswrapper[4945]: I0109 00:45:11.014944 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wsbkt" podStartSLOduration=4.081249816 podStartE2EDuration="7.014916117s" podCreationTimestamp="2026-01-09 00:45:04 +0000 UTC" firstStartedPulling="2026-01-09 00:45:06.964614441 +0000 UTC m=+5377.275773407" lastFinishedPulling="2026-01-09 00:45:09.898280762 +0000 UTC m=+5380.209439708" observedRunningTime="2026-01-09 00:45:11.012971569 +0000 UTC m=+5381.324130515" 
watchObservedRunningTime="2026-01-09 00:45:11.014916117 +0000 UTC m=+5381.326075063" Jan 09 00:45:15 crc kubenswrapper[4945]: I0109 00:45:15.302618 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wsbkt" Jan 09 00:45:15 crc kubenswrapper[4945]: I0109 00:45:15.302967 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wsbkt" Jan 09 00:45:15 crc kubenswrapper[4945]: I0109 00:45:15.342559 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wsbkt" Jan 09 00:45:16 crc kubenswrapper[4945]: I0109 00:45:16.072300 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wsbkt" Jan 09 00:45:16 crc kubenswrapper[4945]: I0109 00:45:16.119070 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wsbkt"] Jan 09 00:45:18 crc kubenswrapper[4945]: I0109 00:45:18.045805 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wsbkt" podUID="003369b9-62ab-4197-ae2b-84659b620b9e" containerName="registry-server" containerID="cri-o://9d43fe94e873b5efe4f39ef3dbd7d8a440b256a2b08b56e0186120f7e02bcdf5" gracePeriod=2 Jan 09 00:45:21 crc kubenswrapper[4945]: I0109 00:45:21.073148 4945 generic.go:334] "Generic (PLEG): container finished" podID="003369b9-62ab-4197-ae2b-84659b620b9e" containerID="9d43fe94e873b5efe4f39ef3dbd7d8a440b256a2b08b56e0186120f7e02bcdf5" exitCode=0 Jan 09 00:45:21 crc kubenswrapper[4945]: I0109 00:45:21.073257 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsbkt" event={"ID":"003369b9-62ab-4197-ae2b-84659b620b9e","Type":"ContainerDied","Data":"9d43fe94e873b5efe4f39ef3dbd7d8a440b256a2b08b56e0186120f7e02bcdf5"} Jan 09 00:45:21 crc kubenswrapper[4945]: I0109 00:45:21.153552 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsbkt" Jan 09 00:45:21 crc kubenswrapper[4945]: I0109 00:45:21.248393 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/003369b9-62ab-4197-ae2b-84659b620b9e-utilities\") pod \"003369b9-62ab-4197-ae2b-84659b620b9e\" (UID: \"003369b9-62ab-4197-ae2b-84659b620b9e\") " Jan 09 00:45:21 crc kubenswrapper[4945]: I0109 00:45:21.248936 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlx9l\" (UniqueName: \"kubernetes.io/projected/003369b9-62ab-4197-ae2b-84659b620b9e-kube-api-access-zlx9l\") pod \"003369b9-62ab-4197-ae2b-84659b620b9e\" (UID: \"003369b9-62ab-4197-ae2b-84659b620b9e\") " Jan 09 00:45:21 crc kubenswrapper[4945]: I0109 00:45:21.249040 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/003369b9-62ab-4197-ae2b-84659b620b9e-catalog-content\") pod \"003369b9-62ab-4197-ae2b-84659b620b9e\" (UID: \"003369b9-62ab-4197-ae2b-84659b620b9e\") " Jan 09 00:45:21 crc kubenswrapper[4945]: I0109 00:45:21.249704 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/003369b9-62ab-4197-ae2b-84659b620b9e-utilities" (OuterVolumeSpecName: "utilities") pod "003369b9-62ab-4197-ae2b-84659b620b9e" (UID: "003369b9-62ab-4197-ae2b-84659b620b9e"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:45:21 crc kubenswrapper[4945]: I0109 00:45:21.256026 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/003369b9-62ab-4197-ae2b-84659b620b9e-kube-api-access-zlx9l" (OuterVolumeSpecName: "kube-api-access-zlx9l") pod "003369b9-62ab-4197-ae2b-84659b620b9e" (UID: "003369b9-62ab-4197-ae2b-84659b620b9e"). InnerVolumeSpecName "kube-api-access-zlx9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:45:21 crc kubenswrapper[4945]: I0109 00:45:21.350882 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/003369b9-62ab-4197-ae2b-84659b620b9e-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 00:45:21 crc kubenswrapper[4945]: I0109 00:45:21.350922 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlx9l\" (UniqueName: \"kubernetes.io/projected/003369b9-62ab-4197-ae2b-84659b620b9e-kube-api-access-zlx9l\") on node \"crc\" DevicePath \"\"" Jan 09 00:45:21 crc kubenswrapper[4945]: I0109 00:45:21.371606 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/003369b9-62ab-4197-ae2b-84659b620b9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "003369b9-62ab-4197-ae2b-84659b620b9e" (UID: "003369b9-62ab-4197-ae2b-84659b620b9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:45:21 crc kubenswrapper[4945]: I0109 00:45:21.451808 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/003369b9-62ab-4197-ae2b-84659b620b9e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 00:45:22 crc kubenswrapper[4945]: I0109 00:45:22.083764 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsbkt" event={"ID":"003369b9-62ab-4197-ae2b-84659b620b9e","Type":"ContainerDied","Data":"5df1a2273f5072f4fdaa99f3ffa8f7fabc28416f54b1b1c839d9f094278c5532"} Jan 09 00:45:22 crc kubenswrapper[4945]: I0109 00:45:22.083840 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wsbkt" Jan 09 00:45:22 crc kubenswrapper[4945]: I0109 00:45:22.085829 4945 scope.go:117] "RemoveContainer" containerID="9d43fe94e873b5efe4f39ef3dbd7d8a440b256a2b08b56e0186120f7e02bcdf5" Jan 09 00:45:22 crc kubenswrapper[4945]: I0109 00:45:22.112846 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wsbkt"] Jan 09 00:45:22 crc kubenswrapper[4945]: I0109 00:45:22.119789 4945 scope.go:117] "RemoveContainer" containerID="4d7cdae91e2e37720466fb04df538451d8f42f595cf71fce7f24be729a2f7f34" Jan 09 00:45:22 crc kubenswrapper[4945]: I0109 00:45:22.121734 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wsbkt"] Jan 09 00:45:22 crc kubenswrapper[4945]: I0109 00:45:22.151684 4945 scope.go:117] "RemoveContainer" containerID="de54ce9653ea416a58a86bf6913cf5a27230393855da54789ccec4d1a0dcddd3" Jan 09 00:45:24 crc kubenswrapper[4945]: I0109 00:45:24.009321 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="003369b9-62ab-4197-ae2b-84659b620b9e" path="/var/lib/kubelet/pods/003369b9-62ab-4197-ae2b-84659b620b9e/volumes" Jan 09 00:45:35 crc kubenswrapper[4945]: I0109 00:45:35.942298 4945 scope.go:117] "RemoveContainer" containerID="c8d0dcc849b3ea8c67e9cfb11238b55ab0ae5786f6b26b8dd8a1999f60c26686" Jan 09 00:46:43 crc kubenswrapper[4945]: I0109 00:46:43.578531 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:46:43 crc kubenswrapper[4945]: I0109 00:46:43.579233 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:47:13 crc kubenswrapper[4945]: I0109 00:47:13.578722 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:47:13 crc kubenswrapper[4945]: I0109 00:47:13.579286 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:47:36 crc kubenswrapper[4945]: I0109 00:47:36.011023 4945 scope.go:117] "RemoveContainer" containerID="a3a3a5aefbdc7ab6510773662b04e931eb7939ad75b38469761d9b46f0792df0" Jan 09 00:47:43 crc kubenswrapper[4945]: I0109 00:47:43.578781 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:47:43 crc kubenswrapper[4945]: I0109 00:47:43.580479 4945 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:47:43 crc kubenswrapper[4945]: I0109 00:47:43.580609 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 09 00:47:43 crc kubenswrapper[4945]: I0109 00:47:43.581582 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f956e706e3b54964ca3dda380ed40fcf584aa07099065dbe710da5a117358406"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 00:47:43 crc kubenswrapper[4945]: I0109 00:47:43.581678 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://f956e706e3b54964ca3dda380ed40fcf584aa07099065dbe710da5a117358406" gracePeriod=600 Jan 09 00:47:44 crc kubenswrapper[4945]: I0109 00:47:44.362384 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="f956e706e3b54964ca3dda380ed40fcf584aa07099065dbe710da5a117358406" exitCode=0 Jan 09 00:47:44 crc kubenswrapper[4945]: I0109 00:47:44.362448 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"f956e706e3b54964ca3dda380ed40fcf584aa07099065dbe710da5a117358406"} Jan 09 00:47:44 crc kubenswrapper[4945]: I0109 00:47:44.362936 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb"} Jan 09 00:47:44 crc kubenswrapper[4945]: I0109 00:47:44.362981 4945 scope.go:117] "RemoveContainer" containerID="42b8dcfbef12325fc76b8c1bf96d6789ad8424a665066c821aaae50c4867f7f5" Jan 09 00:48:15 crc kubenswrapper[4945]: I0109 00:48:15.309453 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 09 00:48:15 crc kubenswrapper[4945]: E0109 00:48:15.310373 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="003369b9-62ab-4197-ae2b-84659b620b9e" containerName="extract-utilities" Jan 09 00:48:15 crc kubenswrapper[4945]: I0109 00:48:15.310390 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="003369b9-62ab-4197-ae2b-84659b620b9e" containerName="extract-utilities" Jan 09 00:48:15 crc kubenswrapper[4945]: E0109 00:48:15.310408 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="003369b9-62ab-4197-ae2b-84659b620b9e" containerName="extract-content" Jan 09 00:48:15 crc kubenswrapper[4945]: I0109 00:48:15.310416 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="003369b9-62ab-4197-ae2b-84659b620b9e" containerName="extract-content" Jan 09 00:48:15 crc kubenswrapper[4945]: E0109 00:48:15.310441 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="003369b9-62ab-4197-ae2b-84659b620b9e" 
containerName="registry-server" Jan 09 00:48:15 crc kubenswrapper[4945]: I0109 00:48:15.310450 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="003369b9-62ab-4197-ae2b-84659b620b9e" containerName="registry-server" Jan 09 00:48:15 crc kubenswrapper[4945]: I0109 00:48:15.310629 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="003369b9-62ab-4197-ae2b-84659b620b9e" containerName="registry-server" Jan 09 00:48:15 crc kubenswrapper[4945]: I0109 00:48:15.311313 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 09 00:48:15 crc kubenswrapper[4945]: I0109 00:48:15.313658 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vgpfs" Jan 09 00:48:15 crc kubenswrapper[4945]: I0109 00:48:15.320538 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 09 00:48:15 crc kubenswrapper[4945]: I0109 00:48:15.454963 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-190fb6b5-e1d4-4814-aa1d-817b3b961885\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-190fb6b5-e1d4-4814-aa1d-817b3b961885\") pod \"mariadb-copy-data\" (UID: \"8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa\") " pod="openstack/mariadb-copy-data" Jan 09 00:48:15 crc kubenswrapper[4945]: I0109 00:48:15.455095 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsxtr\" (UniqueName: \"kubernetes.io/projected/8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa-kube-api-access-fsxtr\") pod \"mariadb-copy-data\" (UID: \"8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa\") " pod="openstack/mariadb-copy-data" Jan 09 00:48:15 crc kubenswrapper[4945]: I0109 00:48:15.556701 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-190fb6b5-e1d4-4814-aa1d-817b3b961885\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-190fb6b5-e1d4-4814-aa1d-817b3b961885\") pod \"mariadb-copy-data\" (UID: \"8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa\") " pod="openstack/mariadb-copy-data" Jan 09 00:48:15 crc kubenswrapper[4945]: I0109 00:48:15.556853 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsxtr\" (UniqueName: \"kubernetes.io/projected/8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa-kube-api-access-fsxtr\") pod \"mariadb-copy-data\" (UID: \"8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa\") " pod="openstack/mariadb-copy-data" Jan 09 00:48:15 crc kubenswrapper[4945]: I0109 00:48:15.560387 4945 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 09 00:48:15 crc kubenswrapper[4945]: I0109 00:48:15.560424 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-190fb6b5-e1d4-4814-aa1d-817b3b961885\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-190fb6b5-e1d4-4814-aa1d-817b3b961885\") pod \"mariadb-copy-data\" (UID: \"8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e04460d876137a9e9c39c6b14466898e403cecacf697cd81d22ec7d183ac7315/globalmount\"" pod="openstack/mariadb-copy-data"
Jan 09 00:48:15 crc kubenswrapper[4945]: I0109 00:48:15.588164 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsxtr\" (UniqueName: \"kubernetes.io/projected/8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa-kube-api-access-fsxtr\") pod \"mariadb-copy-data\" (UID: \"8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa\") " pod="openstack/mariadb-copy-data"
Jan 09 00:48:15 crc kubenswrapper[4945]: I0109 00:48:15.591236 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-190fb6b5-e1d4-4814-aa1d-817b3b961885\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-190fb6b5-e1d4-4814-aa1d-817b3b961885\") pod \"mariadb-copy-data\" (UID: \"8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa\") " pod="openstack/mariadb-copy-data"
Jan 09 00:48:15 crc kubenswrapper[4945]: I0109 00:48:15.635477 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Jan 09 00:48:16 crc kubenswrapper[4945]: I0109 00:48:16.116181 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Jan 09 00:48:16 crc kubenswrapper[4945]: W0109 00:48:16.123132 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e9de6e1_00ce_4ba3_b593_2f05b26ef4fa.slice/crio-382c57369fb561ddad4f2d8b27d197d7a6eed40617b875234be9c4c1546183cb WatchSource:0}: Error finding container 382c57369fb561ddad4f2d8b27d197d7a6eed40617b875234be9c4c1546183cb: Status 404 returned error can't find the container with id 382c57369fb561ddad4f2d8b27d197d7a6eed40617b875234be9c4c1546183cb
Jan 09 00:48:16 crc kubenswrapper[4945]: I0109 00:48:16.608508 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa","Type":"ContainerStarted","Data":"052eda0645f2453de67f09f024f72f04d2def12030a18b6e73e0636b3a22200a"}
Jan 09 00:48:16 crc kubenswrapper[4945]: I0109 00:48:16.608578 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa","Type":"ContainerStarted","Data":"382c57369fb561ddad4f2d8b27d197d7a6eed40617b875234be9c4c1546183cb"}
Jan 09 00:48:16 crc kubenswrapper[4945]: I0109 00:48:16.630447 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=2.630426724 podStartE2EDuration="2.630426724s" podCreationTimestamp="2026-01-09 00:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:48:16.622253384 +0000 UTC m=+5566.933412330" watchObservedRunningTime="2026-01-09 00:48:16.630426724 +0000 UTC m=+5566.941585660"
Jan 09 00:48:19 crc kubenswrapper[4945]: I0109 00:48:19.376378 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
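In the mariadb-copy-data startup record above no image pull was needed, so firstStartedPulling and lastFinishedPulling carry Go's zero time ("0001-01-01 00:00:00 +0000 UTC") and podStartSLOduration equals podStartE2EDuration (2.630426724s). Anything consuming these fields should treat the zero time as "no pull" rather than computing a nonsense window, e.g.:

    GO_ZERO_TIME = "0001-01-01 00:00:00 +0000 UTC"  # Go's time.Time zero value

    def pull_window(first: str, last: str) -> str:
        # kubelet emits the zero time when the image was already present
        if GO_ZERO_TIME in (first, last):
            return "no image pull (SLO duration == E2E duration)"
        return f"pulled between {first} and {last}"

    print(pull_window(GO_ZERO_TIME, GO_ZERO_TIME))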
Jan 09 00:48:19 crc kubenswrapper[4945]: I0109 00:48:19.378551 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 09 00:48:19 crc kubenswrapper[4945]: I0109 00:48:19.383531 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 09 00:48:19 crc kubenswrapper[4945]: I0109 00:48:19.514249 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l547r\" (UniqueName: \"kubernetes.io/projected/fe9d26cc-bd76-4415-8239-752a1a5ba304-kube-api-access-l547r\") pod \"mariadb-client\" (UID: \"fe9d26cc-bd76-4415-8239-752a1a5ba304\") " pod="openstack/mariadb-client"
Jan 09 00:48:19 crc kubenswrapper[4945]: I0109 00:48:19.615589 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l547r\" (UniqueName: \"kubernetes.io/projected/fe9d26cc-bd76-4415-8239-752a1a5ba304-kube-api-access-l547r\") pod \"mariadb-client\" (UID: \"fe9d26cc-bd76-4415-8239-752a1a5ba304\") " pod="openstack/mariadb-client"
Jan 09 00:48:19 crc kubenswrapper[4945]: I0109 00:48:19.635970 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l547r\" (UniqueName: \"kubernetes.io/projected/fe9d26cc-bd76-4415-8239-752a1a5ba304-kube-api-access-l547r\") pod \"mariadb-client\" (UID: \"fe9d26cc-bd76-4415-8239-752a1a5ba304\") " pod="openstack/mariadb-client"
Jan 09 00:48:19 crc kubenswrapper[4945]: I0109 00:48:19.701926 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 09 00:48:20 crc kubenswrapper[4945]: I0109 00:48:20.097020 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 09 00:48:20 crc kubenswrapper[4945]: W0109 00:48:20.102195 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe9d26cc_bd76_4415_8239_752a1a5ba304.slice/crio-55d47d820fa123d63997bd5e7d0fb182a8b7227a758d4a6c993ab521ecb4f86f WatchSource:0}: Error finding container 55d47d820fa123d63997bd5e7d0fb182a8b7227a758d4a6c993ab521ecb4f86f: Status 404 returned error can't find the container with id 55d47d820fa123d63997bd5e7d0fb182a8b7227a758d4a6c993ab521ecb4f86f
Jan 09 00:48:20 crc kubenswrapper[4945]: I0109 00:48:20.638904 4945 generic.go:334] "Generic (PLEG): container finished" podID="fe9d26cc-bd76-4415-8239-752a1a5ba304" containerID="8dd84795942fc9763e6a73d30b08c173b2dab3485a28e2f0fa21f44dc06bdbe9" exitCode=0
Jan 09 00:48:20 crc kubenswrapper[4945]: I0109 00:48:20.638977 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"fe9d26cc-bd76-4415-8239-752a1a5ba304","Type":"ContainerDied","Data":"8dd84795942fc9763e6a73d30b08c173b2dab3485a28e2f0fa21f44dc06bdbe9"}
Jan 09 00:48:20 crc kubenswrapper[4945]: I0109 00:48:20.639193 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"fe9d26cc-bd76-4415-8239-752a1a5ba304","Type":"ContainerStarted","Data":"55d47d820fa123d63997bd5e7d0fb182a8b7227a758d4a6c993ab521ecb4f86f"}
Jan 09 00:48:21 crc kubenswrapper[4945]: I0109 00:48:21.988947 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.017019 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_fe9d26cc-bd76-4415-8239-752a1a5ba304/mariadb-client/0.log"
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.044943 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.050054 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.155565 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l547r\" (UniqueName: \"kubernetes.io/projected/fe9d26cc-bd76-4415-8239-752a1a5ba304-kube-api-access-l547r\") pod \"fe9d26cc-bd76-4415-8239-752a1a5ba304\" (UID: \"fe9d26cc-bd76-4415-8239-752a1a5ba304\") "
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.179257 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe9d26cc-bd76-4415-8239-752a1a5ba304-kube-api-access-l547r" (OuterVolumeSpecName: "kube-api-access-l547r") pod "fe9d26cc-bd76-4415-8239-752a1a5ba304" (UID: "fe9d26cc-bd76-4415-8239-752a1a5ba304"). InnerVolumeSpecName "kube-api-access-l547r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.189548 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 09 00:48:22 crc kubenswrapper[4945]: E0109 00:48:22.189889 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe9d26cc-bd76-4415-8239-752a1a5ba304" containerName="mariadb-client"
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.189906 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe9d26cc-bd76-4415-8239-752a1a5ba304" containerName="mariadb-client"
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.190084 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe9d26cc-bd76-4415-8239-752a1a5ba304" containerName="mariadb-client"
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.190942 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.198064 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.257476 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l547r\" (UniqueName: \"kubernetes.io/projected/fe9d26cc-bd76-4415-8239-752a1a5ba304-kube-api-access-l547r\") on node \"crc\" DevicePath \"\""
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.358854 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l7g5\" (UniqueName: \"kubernetes.io/projected/5d2d54af-e73b-45ed-89f4-bf80d98dd66c-kube-api-access-6l7g5\") pod \"mariadb-client\" (UID: \"5d2d54af-e73b-45ed-89f4-bf80d98dd66c\") " pod="openstack/mariadb-client"
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.461117 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l7g5\" (UniqueName: \"kubernetes.io/projected/5d2d54af-e73b-45ed-89f4-bf80d98dd66c-kube-api-access-6l7g5\") pod \"mariadb-client\" (UID: \"5d2d54af-e73b-45ed-89f4-bf80d98dd66c\") " pod="openstack/mariadb-client"
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.491739 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l7g5\" (UniqueName: \"kubernetes.io/projected/5d2d54af-e73b-45ed-89f4-bf80d98dd66c-kube-api-access-6l7g5\") pod \"mariadb-client\" (UID: \"5d2d54af-e73b-45ed-89f4-bf80d98dd66c\") " pod="openstack/mariadb-client"
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.508496 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.662436 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55d47d820fa123d63997bd5e7d0fb182a8b7227a758d4a6c993ab521ecb4f86f"
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.662732 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.685692 4945 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="fe9d26cc-bd76-4415-8239-752a1a5ba304" podUID="5d2d54af-e73b-45ed-89f4-bf80d98dd66c"
Jan 09 00:48:22 crc kubenswrapper[4945]: I0109 00:48:22.936367 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 09 00:48:22 crc kubenswrapper[4945]: W0109 00:48:22.945489 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d2d54af_e73b_45ed_89f4_bf80d98dd66c.slice/crio-6577d90b66e8f8854dca5cd8d04cb637fd00a27c304c2115f2bf860eccbc4da4 WatchSource:0}: Error finding container 6577d90b66e8f8854dca5cd8d04cb637fd00a27c304c2115f2bf860eccbc4da4: Status 404 returned error can't find the container with id 6577d90b66e8f8854dca5cd8d04cb637fd00a27c304c2115f2bf860eccbc4da4
Jan 09 00:48:23 crc kubenswrapper[4945]: I0109 00:48:23.673309 4945 generic.go:334] "Generic (PLEG): container finished" podID="5d2d54af-e73b-45ed-89f4-bf80d98dd66c" containerID="fdd14c843320a2b0200077ab4bc9e6403ef9ae620d8a7407c13be17e6882c05d" exitCode=0
Jan 09 00:48:23 crc kubenswrapper[4945]: I0109 00:48:23.673359 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"5d2d54af-e73b-45ed-89f4-bf80d98dd66c","Type":"ContainerDied","Data":"fdd14c843320a2b0200077ab4bc9e6403ef9ae620d8a7407c13be17e6882c05d"}
Jan 09 00:48:23 crc kubenswrapper[4945]: I0109 00:48:23.673391 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"5d2d54af-e73b-45ed-89f4-bf80d98dd66c","Type":"ContainerStarted","Data":"6577d90b66e8f8854dca5cd8d04cb637fd00a27c304c2115f2bf860eccbc4da4"}
Jan 09 00:48:24 crc kubenswrapper[4945]: I0109 00:48:24.011358 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe9d26cc-bd76-4415-8239-752a1a5ba304" path="/var/lib/kubelet/pods/fe9d26cc-bd76-4415-8239-752a1a5ba304/volumes"
Jan 09 00:48:24 crc kubenswrapper[4945]: I0109 00:48:24.988583 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 09 00:48:25 crc kubenswrapper[4945]: I0109 00:48:25.005312 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_5d2d54af-e73b-45ed-89f4-bf80d98dd66c/mariadb-client/0.log"
Jan 09 00:48:25 crc kubenswrapper[4945]: I0109 00:48:25.031368 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 09 00:48:25 crc kubenswrapper[4945]: I0109 00:48:25.038827 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 09 00:48:25 crc kubenswrapper[4945]: I0109 00:48:25.104781 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l7g5\" (UniqueName: \"kubernetes.io/projected/5d2d54af-e73b-45ed-89f4-bf80d98dd66c-kube-api-access-6l7g5\") pod \"5d2d54af-e73b-45ed-89f4-bf80d98dd66c\" (UID: \"5d2d54af-e73b-45ed-89f4-bf80d98dd66c\") "
Jan 09 00:48:25 crc kubenswrapper[4945]: I0109 00:48:25.110273 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d2d54af-e73b-45ed-89f4-bf80d98dd66c-kube-api-access-6l7g5" (OuterVolumeSpecName: "kube-api-access-6l7g5") pod "5d2d54af-e73b-45ed-89f4-bf80d98dd66c" (UID: "5d2d54af-e73b-45ed-89f4-bf80d98dd66c"). InnerVolumeSpecName "kube-api-access-6l7g5". PluginName "kubernetes.io/projected", VolumeGidValue ""
InnerVolumeSpecName "kube-api-access-6l7g5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:48:25 crc kubenswrapper[4945]: I0109 00:48:25.207265 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l7g5\" (UniqueName: \"kubernetes.io/projected/5d2d54af-e73b-45ed-89f4-bf80d98dd66c-kube-api-access-6l7g5\") on node \"crc\" DevicePath \"\"" Jan 09 00:48:25 crc kubenswrapper[4945]: I0109 00:48:25.688511 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6577d90b66e8f8854dca5cd8d04cb637fd00a27c304c2115f2bf860eccbc4da4" Jan 09 00:48:25 crc kubenswrapper[4945]: I0109 00:48:25.688598 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 09 00:48:26 crc kubenswrapper[4945]: I0109 00:48:26.010413 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d2d54af-e73b-45ed-89f4-bf80d98dd66c" path="/var/lib/kubelet/pods/5d2d54af-e73b-45ed-89f4-bf80d98dd66c/volumes" Jan 09 00:48:36 crc kubenswrapper[4945]: I0109 00:48:36.058174 4945 scope.go:117] "RemoveContainer" containerID="652ef72f73f570473a60e90d1ace883139cfbd7b551f3ddebd55b51c5fdeb292" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.310836 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 09 00:48:59 crc kubenswrapper[4945]: E0109 00:48:59.311722 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d2d54af-e73b-45ed-89f4-bf80d98dd66c" containerName="mariadb-client" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.311816 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d2d54af-e73b-45ed-89f4-bf80d98dd66c" containerName="mariadb-client" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.312077 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d2d54af-e73b-45ed-89f4-bf80d98dd66c" containerName="mariadb-client" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.313221 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.315531 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.315768 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-rwg74" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.316015 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.323851 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.332012 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.333619 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.343184 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.345028 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.354836 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.380908 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.473395 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcc6x\" (UniqueName: \"kubernetes.io/projected/8a88c188-1123-4040-8ad7-4622fcbb1715-kube-api-access-lcc6x\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.473442 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/88e271bf-a6e7-4db1-9f1c-7d3260cbeb29-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.473465 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/766c4239-387b-46ba-9cc8-933d55f0a636-config\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.473481 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88e271bf-a6e7-4db1-9f1c-7d3260cbeb29-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.473513 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a88c188-1123-4040-8ad7-4622fcbb1715-config\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.473633 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/766c4239-387b-46ba-9cc8-933d55f0a636-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.473687 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a88c188-1123-4040-8ad7-4622fcbb1715-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.473749 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-69efb235-6e3f-4bda-a2d6-3b4dd6234ba4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-69efb235-6e3f-4bda-a2d6-3b4dd6234ba4\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.473774 4945 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e710be7d-1212-498f-a711-5b5255651ec0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e710be7d-1212-498f-a711-5b5255651ec0\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.473836 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a88c188-1123-4040-8ad7-4622fcbb1715-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.473856 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a88c188-1123-4040-8ad7-4622fcbb1715-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.473879 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m7nv\" (UniqueName: \"kubernetes.io/projected/88e271bf-a6e7-4db1-9f1c-7d3260cbeb29-kube-api-access-5m7nv\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.473897 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2bef79c5-371d-4736-8d66-b3405216cca0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bef79c5-371d-4736-8d66-b3405216cca0\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.474156 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e271bf-a6e7-4db1-9f1c-7d3260cbeb29-config\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.475202 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/766c4239-387b-46ba-9cc8-933d55f0a636-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.475351 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mqfj\" (UniqueName: \"kubernetes.io/projected/766c4239-387b-46ba-9cc8-933d55f0a636-kube-api-access-7mqfj\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.475551 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/766c4239-387b-46ba-9cc8-933d55f0a636-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.475684 4945 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e271bf-a6e7-4db1-9f1c-7d3260cbeb29-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.521033 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.526284 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.530736 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9ccdk" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.531064 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.531232 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.535430 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.550626 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.552417 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.565171 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.568371 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.581582 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/766c4239-387b-46ba-9cc8-933d55f0a636-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.581638 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a88c188-1123-4040-8ad7-4622fcbb1715-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.581672 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-69efb235-6e3f-4bda-a2d6-3b4dd6234ba4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-69efb235-6e3f-4bda-a2d6-3b4dd6234ba4\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.581697 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e710be7d-1212-498f-a711-5b5255651ec0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e710be7d-1212-498f-a711-5b5255651ec0\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.581728 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef85a57-00f5-483f-8641-a0cfb51b4045-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.581758 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a88c188-1123-4040-8ad7-4622fcbb1715-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.581786 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a88c188-1123-4040-8ad7-4622fcbb1715-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.581816 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m7nv\" (UniqueName: \"kubernetes.io/projected/88e271bf-a6e7-4db1-9f1c-7d3260cbeb29-kube-api-access-5m7nv\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.581843 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2bef79c5-371d-4736-8d66-b3405216cca0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bef79c5-371d-4736-8d66-b3405216cca0\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.581879 
4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e271bf-a6e7-4db1-9f1c-7d3260cbeb29-config\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.581903 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/766c4239-387b-46ba-9cc8-933d55f0a636-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.581932 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69ch4\" (UniqueName: \"kubernetes.io/projected/0ef85a57-00f5-483f-8641-a0cfb51b4045-kube-api-access-69ch4\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.581955 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mqfj\" (UniqueName: \"kubernetes.io/projected/766c4239-387b-46ba-9cc8-933d55f0a636-kube-api-access-7mqfj\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.581981 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0ef85a57-00f5-483f-8641-a0cfb51b4045-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.582039 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6208106a-47b5-4777-8027-54b48d127251\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6208106a-47b5-4777-8027-54b48d127251\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.582061 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/766c4239-387b-46ba-9cc8-933d55f0a636-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.582079 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e271bf-a6e7-4db1-9f1c-7d3260cbeb29-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.582102 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ef85a57-00f5-483f-8641-a0cfb51b4045-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.582134 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0ef85a57-00f5-483f-8641-a0cfb51b4045-config\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.582165 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcc6x\" (UniqueName: \"kubernetes.io/projected/8a88c188-1123-4040-8ad7-4622fcbb1715-kube-api-access-lcc6x\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.582187 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/88e271bf-a6e7-4db1-9f1c-7d3260cbeb29-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.582211 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/766c4239-387b-46ba-9cc8-933d55f0a636-config\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.582236 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88e271bf-a6e7-4db1-9f1c-7d3260cbeb29-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.582302 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a88c188-1123-4040-8ad7-4622fcbb1715-config\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.583306 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a88c188-1123-4040-8ad7-4622fcbb1715-config\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.583721 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/766c4239-387b-46ba-9cc8-933d55f0a636-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.584071 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8a88c188-1123-4040-8ad7-4622fcbb1715-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.586789 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e271bf-a6e7-4db1-9f1c-7d3260cbeb29-config\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.594902 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/766c4239-387b-46ba-9cc8-933d55f0a636-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.595043 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.595406 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/88e271bf-a6e7-4db1-9f1c-7d3260cbeb29-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.598979 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88e271bf-a6e7-4db1-9f1c-7d3260cbeb29-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.599542 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/766c4239-387b-46ba-9cc8-933d55f0a636-config\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.602151 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.605368 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a88c188-1123-4040-8ad7-4622fcbb1715-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.617129 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mqfj\" (UniqueName: \"kubernetes.io/projected/766c4239-387b-46ba-9cc8-933d55f0a636-kube-api-access-7mqfj\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.623253 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/766c4239-387b-46ba-9cc8-933d55f0a636-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.624063 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e271bf-a6e7-4db1-9f1c-7d3260cbeb29-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.624870 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a88c188-1123-4040-8ad7-4622fcbb1715-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.638796 4945 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.638850 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2bef79c5-371d-4736-8d66-b3405216cca0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bef79c5-371d-4736-8d66-b3405216cca0\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/654a7f87d3f69a9aab3d1468463f3b81883c9c9e5c4880a55b9e174f3c9503fa/globalmount\"" pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.638957 4945 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.639011 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e710be7d-1212-498f-a711-5b5255651ec0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e710be7d-1212-498f-a711-5b5255651ec0\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/aa953b3272f31e98007269cfc0f6c717831a100419ad55d86d0329bb17dc1101/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.639599 4945 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.639631 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-69efb235-6e3f-4bda-a2d6-3b4dd6234ba4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-69efb235-6e3f-4bda-a2d6-3b4dd6234ba4\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/33704343c462e2ef7686f8792ea34b3812748987e793b423f58c31af72d4953a/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.648946 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcc6x\" (UniqueName: \"kubernetes.io/projected/8a88c188-1123-4040-8ad7-4622fcbb1715-kube-api-access-lcc6x\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.655301 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m7nv\" (UniqueName: \"kubernetes.io/projected/88e271bf-a6e7-4db1-9f1c-7d3260cbeb29-kube-api-access-5m7nv\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.684278 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef85a57-00f5-483f-8641-a0cfb51b4045-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.684393 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69ch4\" (UniqueName: \"kubernetes.io/projected/0ef85a57-00f5-483f-8641-a0cfb51b4045-kube-api-access-69ch4\") pod \"ovsdbserver-sb-0\" (UID: 
\"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.684421 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0ef85a57-00f5-483f-8641-a0cfb51b4045-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.684461 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6208106a-47b5-4777-8027-54b48d127251\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6208106a-47b5-4777-8027-54b48d127251\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.684487 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ef85a57-00f5-483f-8641-a0cfb51b4045-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.684522 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ef85a57-00f5-483f-8641-a0cfb51b4045-config\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.685620 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ef85a57-00f5-483f-8641-a0cfb51b4045-config\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.688423 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ef85a57-00f5-483f-8641-a0cfb51b4045-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.691852 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0ef85a57-00f5-483f-8641-a0cfb51b4045-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.759841 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef85a57-00f5-483f-8641-a0cfb51b4045-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.780875 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69ch4\" (UniqueName: \"kubernetes.io/projected/0ef85a57-00f5-483f-8641-a0cfb51b4045-kube-api-access-69ch4\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.782138 4945 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.782174 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6208106a-47b5-4777-8027-54b48d127251\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6208106a-47b5-4777-8027-54b48d127251\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/69001f8f33f40900e59d3f6373256533f2a44f2a6b2dd33dc06476d1d167b577/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.787521 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae9bb09d-4258-4dae-b69e-28ea7f437c63-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.787600 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99f4dd38-74e0-43f9-951e-39b35d884b9e-config\") pod \"ovsdbserver-sb-2\" (UID: \"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.787625 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae9bb09d-4258-4dae-b69e-28ea7f437c63-config\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.787646 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99f4dd38-74e0-43f9-951e-39b35d884b9e-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.787668 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xgpc\" (UniqueName: \"kubernetes.io/projected/ae9bb09d-4258-4dae-b69e-28ea7f437c63-kube-api-access-5xgpc\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.787685 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ae9bb09d-4258-4dae-b69e-28ea7f437c63-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.787706 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f4dd38-74e0-43f9-951e-39b35d884b9e-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.787725 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/99f4dd38-74e0-43f9-951e-39b35d884b9e-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: 
\"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.787804 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4p5f\" (UniqueName: \"kubernetes.io/projected/99f4dd38-74e0-43f9-951e-39b35d884b9e-kube-api-access-f4p5f\") pod \"ovsdbserver-sb-2\" (UID: \"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.787825 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7aa65414-7c88-4ae0-9202-a2cfd4a1dab7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7aa65414-7c88-4ae0-9202-a2cfd4a1dab7\") pod \"ovsdbserver-sb-2\" (UID: \"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.787851 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae9bb09d-4258-4dae-b69e-28ea7f437c63-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.787876 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cf1de6a7-5a9e-4e97-98a2-a0bdfc429c58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf1de6a7-5a9e-4e97-98a2-a0bdfc429c58\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.809627 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2bef79c5-371d-4736-8d66-b3405216cca0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bef79c5-371d-4736-8d66-b3405216cca0\") pod \"ovsdbserver-nb-2\" (UID: \"766c4239-387b-46ba-9cc8-933d55f0a636\") " pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.812511 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-69efb235-6e3f-4bda-a2d6-3b4dd6234ba4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-69efb235-6e3f-4bda-a2d6-3b4dd6234ba4\") pod \"ovsdbserver-nb-1\" (UID: \"8a88c188-1123-4040-8ad7-4622fcbb1715\") " pod="openstack/ovsdbserver-nb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.814024 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e710be7d-1212-498f-a711-5b5255651ec0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e710be7d-1212-498f-a711-5b5255651ec0\") pod \"ovsdbserver-nb-0\" (UID: \"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29\") " pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.816743 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6208106a-47b5-4777-8027-54b48d127251\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6208106a-47b5-4777-8027-54b48d127251\") pod \"ovsdbserver-sb-0\" (UID: \"0ef85a57-00f5-483f-8641-a0cfb51b4045\") " pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.856675 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.889650 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae9bb09d-4258-4dae-b69e-28ea7f437c63-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.889726 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99f4dd38-74e0-43f9-951e-39b35d884b9e-config\") pod \"ovsdbserver-sb-2\" (UID: \"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.889750 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae9bb09d-4258-4dae-b69e-28ea7f437c63-config\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.889767 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99f4dd38-74e0-43f9-951e-39b35d884b9e-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.889787 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xgpc\" (UniqueName: \"kubernetes.io/projected/ae9bb09d-4258-4dae-b69e-28ea7f437c63-kube-api-access-5xgpc\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.889805 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ae9bb09d-4258-4dae-b69e-28ea7f437c63-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.889824 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f4dd38-74e0-43f9-951e-39b35d884b9e-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.889841 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/99f4dd38-74e0-43f9-951e-39b35d884b9e-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.889894 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4p5f\" (UniqueName: \"kubernetes.io/projected/99f4dd38-74e0-43f9-951e-39b35d884b9e-kube-api-access-f4p5f\") pod \"ovsdbserver-sb-2\" (UID: \"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.889913 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7aa65414-7c88-4ae0-9202-a2cfd4a1dab7\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7aa65414-7c88-4ae0-9202-a2cfd4a1dab7\") pod \"ovsdbserver-sb-2\" (UID: \"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.889935 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae9bb09d-4258-4dae-b69e-28ea7f437c63-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.889959 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cf1de6a7-5a9e-4e97-98a2-a0bdfc429c58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf1de6a7-5a9e-4e97-98a2-a0bdfc429c58\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.891058 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae9bb09d-4258-4dae-b69e-28ea7f437c63-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.891390 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/99f4dd38-74e0-43f9-951e-39b35d884b9e-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.891404 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99f4dd38-74e0-43f9-951e-39b35d884b9e-config\") pod \"ovsdbserver-sb-2\" (UID: \"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.891915 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae9bb09d-4258-4dae-b69e-28ea7f437c63-config\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.892439 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ae9bb09d-4258-4dae-b69e-28ea7f437c63-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.893346 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99f4dd38-74e0-43f9-951e-39b35d884b9e-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.895293 4945 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.895363 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cf1de6a7-5a9e-4e97-98a2-a0bdfc429c58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf1de6a7-5a9e-4e97-98a2-a0bdfc429c58\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8a7411023e74c1586d2776be581500b8d6230d57f52356956c201823245adf90/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.895761 4945 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.895792 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7aa65414-7c88-4ae0-9202-a2cfd4a1dab7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7aa65414-7c88-4ae0-9202-a2cfd4a1dab7\") pod \"ovsdbserver-sb-2\" (UID: \"99f4dd38-74e0-43f9-951e-39b35d884b9e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3b2b41c85dac526d5366e10c7cb6eada301bf23fc34b3148760bd046b9ee0b81/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.898440 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99f4dd38-74e0-43f9-951e-39b35d884b9e-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.899452 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae9bb09d-4258-4dae-b69e-28ea7f437c63-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.907459 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xgpc\" (UniqueName: \"kubernetes.io/projected/ae9bb09d-4258-4dae-b69e-28ea7f437c63-kube-api-access-5xgpc\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.908759 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4p5f\" (UniqueName: \"kubernetes.io/projected/99f4dd38-74e0-43f9-951e-39b35d884b9e-kube-api-access-f4p5f\") pod \"ovsdbserver-sb-2\" (UID: \"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.924844 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cf1de6a7-5a9e-4e97-98a2-a0bdfc429c58\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf1de6a7-5a9e-4e97-98a2-a0bdfc429c58\") pod \"ovsdbserver-sb-1\" (UID: \"ae9bb09d-4258-4dae-b69e-28ea7f437c63\") " pod="openstack/ovsdbserver-sb-1" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.950706 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7aa65414-7c88-4ae0-9202-a2cfd4a1dab7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7aa65414-7c88-4ae0-9202-a2cfd4a1dab7\") pod \"ovsdbserver-sb-2\" (UID: 
\"99f4dd38-74e0-43f9-951e-39b35d884b9e\") " pod="openstack/ovsdbserver-sb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.951585 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.964708 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 09 00:48:59 crc kubenswrapper[4945]: I0109 00:48:59.978330 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.069913 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.184303 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.363631 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.532758 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 09 00:49:00 crc kubenswrapper[4945]: W0109 00:49:00.549686 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88e271bf_a6e7_4db1_9f1c_7d3260cbeb29.slice/crio-b67c9368ba9da9fc83b9776eb7a844b9798123cdbaa239253713c8d40a3193e8 WatchSource:0}: Error finding container b67c9368ba9da9fc83b9776eb7a844b9798123cdbaa239253713c8d40a3193e8: Status 404 returned error can't find the container with id b67c9368ba9da9fc83b9776eb7a844b9798123cdbaa239253713c8d40a3193e8 Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.658171 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 09 00:49:00 crc kubenswrapper[4945]: W0109 00:49:00.668839 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod766c4239_387b_46ba_9cc8_933d55f0a636.slice/crio-749e3437e7a5fc538770e2aa66d45be91b5219f90b96f5a6c8119d0a13054040 WatchSource:0}: Error finding container 749e3437e7a5fc538770e2aa66d45be91b5219f90b96f5a6c8119d0a13054040: Status 404 returned error can't find the container with id 749e3437e7a5fc538770e2aa66d45be91b5219f90b96f5a6c8119d0a13054040 Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.729445 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 09 00:49:00 crc kubenswrapper[4945]: W0109 00:49:00.732764 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae9bb09d_4258_4dae_b69e_28ea7f437c63.slice/crio-ee28f97c59a0d036101c3287e4d9ee97a1ddfde528af990108faf74600204d02 WatchSource:0}: Error finding container ee28f97c59a0d036101c3287e4d9ee97a1ddfde528af990108faf74600204d02: Status 404 returned error can't find the container with id ee28f97c59a0d036101c3287e4d9ee97a1ddfde528af990108faf74600204d02 Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.838350 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.965204 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" 
event={"ID":"766c4239-387b-46ba-9cc8-933d55f0a636","Type":"ContainerStarted","Data":"e826269e323716045e0723f2af28d786a8a19036059b42958c0f0d555b0296fd"} Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.965260 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"766c4239-387b-46ba-9cc8-933d55f0a636","Type":"ContainerStarted","Data":"749e3437e7a5fc538770e2aa66d45be91b5219f90b96f5a6c8119d0a13054040"} Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.967650 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"99f4dd38-74e0-43f9-951e-39b35d884b9e","Type":"ContainerStarted","Data":"91e164fcb60c24ab2f11d97839b3b5cf279d814832aa27cad445ba236107a6c6"} Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.969558 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0ef85a57-00f5-483f-8641-a0cfb51b4045","Type":"ContainerStarted","Data":"7b7b112fc265b5412674671741a672fb477ac7f1b5144c100d4d38f34a47eb79"} Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.969588 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0ef85a57-00f5-483f-8641-a0cfb51b4045","Type":"ContainerStarted","Data":"ad3635fb68b1b21f4f38f0da769d4eecd1786225e1c27fe88d936b88bd35acc6"} Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.969603 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0ef85a57-00f5-483f-8641-a0cfb51b4045","Type":"ContainerStarted","Data":"d2795a3fd4ecc01b54661c2f1d2b544a78d18d45a874d0946e1b6d97d8d035c3"} Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.971498 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29","Type":"ContainerStarted","Data":"c1dc09345746bcd4a230c168de2e5b34aa47092acbc50349e934b0832c63c81e"} Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.971543 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29","Type":"ContainerStarted","Data":"b67c9368ba9da9fc83b9776eb7a844b9798123cdbaa239253713c8d40a3193e8"} Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.974158 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"ae9bb09d-4258-4dae-b69e-28ea7f437c63","Type":"ContainerStarted","Data":"e99aa703bc74573b635ca9f8b2e715a68389551fa1e34736f3e01b61200ab0ab"} Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.974191 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"ae9bb09d-4258-4dae-b69e-28ea7f437c63","Type":"ContainerStarted","Data":"ee28f97c59a0d036101c3287e4d9ee97a1ddfde528af990108faf74600204d02"} Jan 09 00:49:00 crc kubenswrapper[4945]: I0109 00:49:00.990581 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=2.990557894 podStartE2EDuration="2.990557894s" podCreationTimestamp="2026-01-09 00:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:49:00.988815231 +0000 UTC m=+5611.299974197" watchObservedRunningTime="2026-01-09 00:49:00.990557894 +0000 UTC m=+5611.301716840" Jan 09 00:49:01 crc kubenswrapper[4945]: I0109 00:49:01.264241 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovsdbserver-nb-1"] Jan 09 00:49:01 crc kubenswrapper[4945]: W0109 00:49:01.265912 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a88c188_1123_4040_8ad7_4622fcbb1715.slice/crio-1fb54962d68b19b4fd7b55979a0be8fb0255b19eacc61b65d373c20ae1789ff8 WatchSource:0}: Error finding container 1fb54962d68b19b4fd7b55979a0be8fb0255b19eacc61b65d373c20ae1789ff8: Status 404 returned error can't find the container with id 1fb54962d68b19b4fd7b55979a0be8fb0255b19eacc61b65d373c20ae1789ff8 Jan 09 00:49:01 crc kubenswrapper[4945]: I0109 00:49:01.986078 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"99f4dd38-74e0-43f9-951e-39b35d884b9e","Type":"ContainerStarted","Data":"9b6c6e3801bd4b432a7454d4be8d2024739ac4fa40af457e4b1c08bf5bcabac6"} Jan 09 00:49:01 crc kubenswrapper[4945]: I0109 00:49:01.986612 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"99f4dd38-74e0-43f9-951e-39b35d884b9e","Type":"ContainerStarted","Data":"e80a71ba496bcef5966cbb48c2029d04ce3fc23065c31aaaed030cc3e1be3c54"} Jan 09 00:49:01 crc kubenswrapper[4945]: I0109 00:49:01.988964 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"88e271bf-a6e7-4db1-9f1c-7d3260cbeb29","Type":"ContainerStarted","Data":"1ed439a0530ab202e25087729a1a24b76d3744a2af7f49fb9a57bdebc7a8f6d1"} Jan 09 00:49:01 crc kubenswrapper[4945]: I0109 00:49:01.990620 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8a88c188-1123-4040-8ad7-4622fcbb1715","Type":"ContainerStarted","Data":"ae2b4e9a17d5a0371535c766e7142f1f8890c2aa2b1807e544841a4963c019dd"} Jan 09 00:49:01 crc kubenswrapper[4945]: I0109 00:49:01.990653 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8a88c188-1123-4040-8ad7-4622fcbb1715","Type":"ContainerStarted","Data":"7739560df91645af74fc3a4f08adddda15a1d2ceb96cc9a2f07acec17d6c6e68"} Jan 09 00:49:01 crc kubenswrapper[4945]: I0109 00:49:01.990667 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"8a88c188-1123-4040-8ad7-4622fcbb1715","Type":"ContainerStarted","Data":"1fb54962d68b19b4fd7b55979a0be8fb0255b19eacc61b65d373c20ae1789ff8"} Jan 09 00:49:01 crc kubenswrapper[4945]: I0109 00:49:01.993199 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"ae9bb09d-4258-4dae-b69e-28ea7f437c63","Type":"ContainerStarted","Data":"1addf6a988209f5ecd1ae6a843de24d002bc9e7da9059dc50e063e05b99b1f47"} Jan 09 00:49:01 crc kubenswrapper[4945]: I0109 00:49:01.995069 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"766c4239-387b-46ba-9cc8-933d55f0a636","Type":"ContainerStarted","Data":"f575d1adcd37647af9817962f7fbfb4a534a9110f07c55e6d3cd86526960c027"} Jan 09 00:49:02 crc kubenswrapper[4945]: I0109 00:49:02.015877 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.015855915 podStartE2EDuration="4.015855915s" podCreationTimestamp="2026-01-09 00:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:49:02.007754776 +0000 UTC m=+5612.318913722" watchObservedRunningTime="2026-01-09 00:49:02.015855915 +0000 UTC 
m=+5612.327014861" Jan 09 00:49:02 crc kubenswrapper[4945]: I0109 00:49:02.036883 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=4.036858161 podStartE2EDuration="4.036858161s" podCreationTimestamp="2026-01-09 00:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:49:02.034502723 +0000 UTC m=+5612.345661699" watchObservedRunningTime="2026-01-09 00:49:02.036858161 +0000 UTC m=+5612.348017137" Jan 09 00:49:02 crc kubenswrapper[4945]: I0109 00:49:02.068124 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.068101649 podStartE2EDuration="4.068101649s" podCreationTimestamp="2026-01-09 00:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:49:02.060962903 +0000 UTC m=+5612.372121879" watchObservedRunningTime="2026-01-09 00:49:02.068101649 +0000 UTC m=+5612.379260595" Jan 09 00:49:02 crc kubenswrapper[4945]: I0109 00:49:02.086042 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=4.086017869 podStartE2EDuration="4.086017869s" podCreationTimestamp="2026-01-09 00:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:49:02.07752451 +0000 UTC m=+5612.388683456" watchObservedRunningTime="2026-01-09 00:49:02.086017869 +0000 UTC m=+5612.397176825" Jan 09 00:49:02 crc kubenswrapper[4945]: I0109 00:49:02.107868 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=4.107841785 podStartE2EDuration="4.107841785s" podCreationTimestamp="2026-01-09 00:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:49:02.105054516 +0000 UTC m=+5612.416213462" watchObservedRunningTime="2026-01-09 00:49:02.107841785 +0000 UTC m=+5612.419000731" Jan 09 00:49:02 crc kubenswrapper[4945]: I0109 00:49:02.857134 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 09 00:49:02 crc kubenswrapper[4945]: I0109 00:49:02.952069 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 09 00:49:02 crc kubenswrapper[4945]: I0109 00:49:02.965387 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Jan 09 00:49:02 crc kubenswrapper[4945]: I0109 00:49:02.978943 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Jan 09 00:49:03 crc kubenswrapper[4945]: I0109 00:49:03.070772 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Jan 09 00:49:03 crc kubenswrapper[4945]: I0109 00:49:03.111633 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Jan 09 00:49:03 crc kubenswrapper[4945]: I0109 00:49:03.184742 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Jan 09 00:49:03 crc kubenswrapper[4945]: I0109 00:49:03.222057 4945 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Jan 09 00:49:04 crc kubenswrapper[4945]: I0109 00:49:04.011754 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 09 00:49:04 crc kubenswrapper[4945]: I0109 00:49:04.011815 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 09 00:49:04 crc kubenswrapper[4945]: I0109 00:49:04.857558 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 09 00:49:04 crc kubenswrapper[4945]: I0109 00:49:04.952021 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 09 00:49:04 crc kubenswrapper[4945]: I0109 00:49:04.965356 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Jan 09 00:49:04 crc kubenswrapper[4945]: I0109 00:49:04.979362 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.067003 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.087318 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.269442 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66c7c8797-r42n4"] Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.271061 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66c7c8797-r42n4" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.273229 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.281861 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66c7c8797-r42n4"] Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.387725 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-dns-svc\") pod \"dnsmasq-dns-66c7c8797-r42n4\" (UID: \"ae127fab-553c-460f-9952-c9cc20307aa1\") " pod="openstack/dnsmasq-dns-66c7c8797-r42n4" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.387811 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-config\") pod \"dnsmasq-dns-66c7c8797-r42n4\" (UID: \"ae127fab-553c-460f-9952-c9cc20307aa1\") " pod="openstack/dnsmasq-dns-66c7c8797-r42n4" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.387848 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-ovsdbserver-sb\") pod \"dnsmasq-dns-66c7c8797-r42n4\" (UID: \"ae127fab-553c-460f-9952-c9cc20307aa1\") " pod="openstack/dnsmasq-dns-66c7c8797-r42n4" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.388063 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgw2d\" (UniqueName: \"kubernetes.io/projected/ae127fab-553c-460f-9952-c9cc20307aa1-kube-api-access-sgw2d\") pod 
\"dnsmasq-dns-66c7c8797-r42n4\" (UID: \"ae127fab-553c-460f-9952-c9cc20307aa1\") " pod="openstack/dnsmasq-dns-66c7c8797-r42n4" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.490102 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-config\") pod \"dnsmasq-dns-66c7c8797-r42n4\" (UID: \"ae127fab-553c-460f-9952-c9cc20307aa1\") " pod="openstack/dnsmasq-dns-66c7c8797-r42n4" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.490163 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-ovsdbserver-sb\") pod \"dnsmasq-dns-66c7c8797-r42n4\" (UID: \"ae127fab-553c-460f-9952-c9cc20307aa1\") " pod="openstack/dnsmasq-dns-66c7c8797-r42n4" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.490196 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgw2d\" (UniqueName: \"kubernetes.io/projected/ae127fab-553c-460f-9952-c9cc20307aa1-kube-api-access-sgw2d\") pod \"dnsmasq-dns-66c7c8797-r42n4\" (UID: \"ae127fab-553c-460f-9952-c9cc20307aa1\") " pod="openstack/dnsmasq-dns-66c7c8797-r42n4" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.490246 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-dns-svc\") pod \"dnsmasq-dns-66c7c8797-r42n4\" (UID: \"ae127fab-553c-460f-9952-c9cc20307aa1\") " pod="openstack/dnsmasq-dns-66c7c8797-r42n4" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.491063 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-dns-svc\") pod \"dnsmasq-dns-66c7c8797-r42n4\" (UID: \"ae127fab-553c-460f-9952-c9cc20307aa1\") " pod="openstack/dnsmasq-dns-66c7c8797-r42n4" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.491079 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-config\") pod \"dnsmasq-dns-66c7c8797-r42n4\" (UID: \"ae127fab-553c-460f-9952-c9cc20307aa1\") " pod="openstack/dnsmasq-dns-66c7c8797-r42n4" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.491205 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-ovsdbserver-sb\") pod \"dnsmasq-dns-66c7c8797-r42n4\" (UID: \"ae127fab-553c-460f-9952-c9cc20307aa1\") " pod="openstack/dnsmasq-dns-66c7c8797-r42n4" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.511717 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgw2d\" (UniqueName: \"kubernetes.io/projected/ae127fab-553c-460f-9952-c9cc20307aa1-kube-api-access-sgw2d\") pod \"dnsmasq-dns-66c7c8797-r42n4\" (UID: \"ae127fab-553c-460f-9952-c9cc20307aa1\") " pod="openstack/dnsmasq-dns-66c7c8797-r42n4" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.605793 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66c7c8797-r42n4" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.906377 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.944480 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 09 00:49:05 crc kubenswrapper[4945]: I0109 00:49:05.992824 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.014631 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.030875 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Jan 09 00:49:06 crc kubenswrapper[4945]: W0109 00:49:06.062248 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae127fab_553c_460f_9952_c9cc20307aa1.slice/crio-0727ab33cac46c03c5b940b5becb2d0ee955cc78c58312c4a58b1ac1fcc6eb72 WatchSource:0}: Error finding container 0727ab33cac46c03c5b940b5becb2d0ee955cc78c58312c4a58b1ac1fcc6eb72: Status 404 returned error can't find the container with id 0727ab33cac46c03c5b940b5becb2d0ee955cc78c58312c4a58b1ac1fcc6eb72 Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.085197 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.090868 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.091512 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.114806 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66c7c8797-r42n4"] Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.329281 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66c7c8797-r42n4"] Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.369200 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c4774c875-vx44d"] Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.371121 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.380299 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.392304 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c4774c875-vx44d"] Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.408922 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-ovsdbserver-sb\") pod \"dnsmasq-dns-6c4774c875-vx44d\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") " pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.409034 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-ovsdbserver-nb\") pod \"dnsmasq-dns-6c4774c875-vx44d\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") " pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.409123 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-config\") pod \"dnsmasq-dns-6c4774c875-vx44d\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") " pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.409305 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-dns-svc\") pod \"dnsmasq-dns-6c4774c875-vx44d\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") " pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.409364 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nrgp\" (UniqueName: \"kubernetes.io/projected/7ce3dc1d-72b9-4512-8567-f7514125a3cd-kube-api-access-6nrgp\") pod \"dnsmasq-dns-6c4774c875-vx44d\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") " pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.510511 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-dns-svc\") pod \"dnsmasq-dns-6c4774c875-vx44d\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") " pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.510574 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nrgp\" (UniqueName: \"kubernetes.io/projected/7ce3dc1d-72b9-4512-8567-f7514125a3cd-kube-api-access-6nrgp\") pod \"dnsmasq-dns-6c4774c875-vx44d\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") " pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.510615 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-ovsdbserver-sb\") pod \"dnsmasq-dns-6c4774c875-vx44d\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") " 
pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.510646 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-ovsdbserver-nb\") pod \"dnsmasq-dns-6c4774c875-vx44d\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") " pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.510682 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-config\") pod \"dnsmasq-dns-6c4774c875-vx44d\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") " pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.511774 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-config\") pod \"dnsmasq-dns-6c4774c875-vx44d\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") " pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.511811 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-ovsdbserver-sb\") pod \"dnsmasq-dns-6c4774c875-vx44d\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") " pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.511914 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-dns-svc\") pod \"dnsmasq-dns-6c4774c875-vx44d\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") " pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.512646 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-ovsdbserver-nb\") pod \"dnsmasq-dns-6c4774c875-vx44d\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") " pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.530824 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nrgp\" (UniqueName: \"kubernetes.io/projected/7ce3dc1d-72b9-4512-8567-f7514125a3cd-kube-api-access-6nrgp\") pod \"dnsmasq-dns-6c4774c875-vx44d\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") " pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:06 crc kubenswrapper[4945]: I0109 00:49:06.694314 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:07 crc kubenswrapper[4945]: I0109 00:49:07.045675 4945 generic.go:334] "Generic (PLEG): container finished" podID="ae127fab-553c-460f-9952-c9cc20307aa1" containerID="bca14182d9aab1e301fc6957cdc2702f0493b3e181eedd184d039b2d5fc46fa5" exitCode=0 Jan 09 00:49:07 crc kubenswrapper[4945]: I0109 00:49:07.045764 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c7c8797-r42n4" event={"ID":"ae127fab-553c-460f-9952-c9cc20307aa1","Type":"ContainerDied","Data":"bca14182d9aab1e301fc6957cdc2702f0493b3e181eedd184d039b2d5fc46fa5"} Jan 09 00:49:07 crc kubenswrapper[4945]: I0109 00:49:07.046162 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c7c8797-r42n4" event={"ID":"ae127fab-553c-460f-9952-c9cc20307aa1","Type":"ContainerStarted","Data":"0727ab33cac46c03c5b940b5becb2d0ee955cc78c58312c4a58b1ac1fcc6eb72"} Jan 09 00:49:07 crc kubenswrapper[4945]: W0109 00:49:07.105910 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ce3dc1d_72b9_4512_8567_f7514125a3cd.slice/crio-8165b53f1c111ce0a182b1dc0490d5eb4df5e8607fb290373d7e8b6636d523f7 WatchSource:0}: Error finding container 8165b53f1c111ce0a182b1dc0490d5eb4df5e8607fb290373d7e8b6636d523f7: Status 404 returned error can't find the container with id 8165b53f1c111ce0a182b1dc0490d5eb4df5e8607fb290373d7e8b6636d523f7 Jan 09 00:49:07 crc kubenswrapper[4945]: I0109 00:49:07.111282 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c4774c875-vx44d"] Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.056410 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c7c8797-r42n4" event={"ID":"ae127fab-553c-460f-9952-c9cc20307aa1","Type":"ContainerStarted","Data":"f5b9e5647ea5f954c0ea912767a82b685b7d40c2d386e27f18a79757ff6d16f7"} Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.056724 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66c7c8797-r42n4" Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.056545 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66c7c8797-r42n4" podUID="ae127fab-553c-460f-9952-c9cc20307aa1" containerName="dnsmasq-dns" containerID="cri-o://f5b9e5647ea5f954c0ea912767a82b685b7d40c2d386e27f18a79757ff6d16f7" gracePeriod=10 Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.062298 4945 generic.go:334] "Generic (PLEG): container finished" podID="7ce3dc1d-72b9-4512-8567-f7514125a3cd" containerID="5162b330ccdcd6a507fd80a1489b804662157ace850fc0c418a7b0ea09a44000" exitCode=0 Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.062345 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c4774c875-vx44d" event={"ID":"7ce3dc1d-72b9-4512-8567-f7514125a3cd","Type":"ContainerDied","Data":"5162b330ccdcd6a507fd80a1489b804662157ace850fc0c418a7b0ea09a44000"} Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.062373 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c4774c875-vx44d" event={"ID":"7ce3dc1d-72b9-4512-8567-f7514125a3cd","Type":"ContainerStarted","Data":"8165b53f1c111ce0a182b1dc0490d5eb4df5e8607fb290373d7e8b6636d523f7"} Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.112278 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-66c7c8797-r42n4" podStartSLOduration=3.112258583 podStartE2EDuration="3.112258583s" podCreationTimestamp="2026-01-09 00:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:49:08.087820733 +0000 UTC m=+5618.398979689" watchObservedRunningTime="2026-01-09 00:49:08.112258583 +0000 UTC m=+5618.423417519" Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.449536 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66c7c8797-r42n4" Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.549864 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-ovsdbserver-sb\") pod \"ae127fab-553c-460f-9952-c9cc20307aa1\" (UID: \"ae127fab-553c-460f-9952-c9cc20307aa1\") " Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.549913 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgw2d\" (UniqueName: \"kubernetes.io/projected/ae127fab-553c-460f-9952-c9cc20307aa1-kube-api-access-sgw2d\") pod \"ae127fab-553c-460f-9952-c9cc20307aa1\" (UID: \"ae127fab-553c-460f-9952-c9cc20307aa1\") " Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.549951 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-dns-svc\") pod \"ae127fab-553c-460f-9952-c9cc20307aa1\" (UID: \"ae127fab-553c-460f-9952-c9cc20307aa1\") " Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.550035 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-config\") pod \"ae127fab-553c-460f-9952-c9cc20307aa1\" (UID: \"ae127fab-553c-460f-9952-c9cc20307aa1\") " Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.556639 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae127fab-553c-460f-9952-c9cc20307aa1-kube-api-access-sgw2d" (OuterVolumeSpecName: "kube-api-access-sgw2d") pod "ae127fab-553c-460f-9952-c9cc20307aa1" (UID: "ae127fab-553c-460f-9952-c9cc20307aa1"). InnerVolumeSpecName "kube-api-access-sgw2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.595986 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae127fab-553c-460f-9952-c9cc20307aa1" (UID: "ae127fab-553c-460f-9952-c9cc20307aa1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.603863 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-config" (OuterVolumeSpecName: "config") pod "ae127fab-553c-460f-9952-c9cc20307aa1" (UID: "ae127fab-553c-460f-9952-c9cc20307aa1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.604849 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ae127fab-553c-460f-9952-c9cc20307aa1" (UID: "ae127fab-553c-460f-9952-c9cc20307aa1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.652408 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.652777 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgw2d\" (UniqueName: \"kubernetes.io/projected/ae127fab-553c-460f-9952-c9cc20307aa1-kube-api-access-sgw2d\") on node \"crc\" DevicePath \"\"" Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.652790 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 00:49:08 crc kubenswrapper[4945]: I0109 00:49:08.652828 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae127fab-553c-460f-9952-c9cc20307aa1-config\") on node \"crc\" DevicePath \"\"" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.073834 4945 generic.go:334] "Generic (PLEG): container finished" podID="ae127fab-553c-460f-9952-c9cc20307aa1" containerID="f5b9e5647ea5f954c0ea912767a82b685b7d40c2d386e27f18a79757ff6d16f7" exitCode=0 Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.073894 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66c7c8797-r42n4" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.073902 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c7c8797-r42n4" event={"ID":"ae127fab-553c-460f-9952-c9cc20307aa1","Type":"ContainerDied","Data":"f5b9e5647ea5f954c0ea912767a82b685b7d40c2d386e27f18a79757ff6d16f7"} Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.074012 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c7c8797-r42n4" event={"ID":"ae127fab-553c-460f-9952-c9cc20307aa1","Type":"ContainerDied","Data":"0727ab33cac46c03c5b940b5becb2d0ee955cc78c58312c4a58b1ac1fcc6eb72"} Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.074040 4945 scope.go:117] "RemoveContainer" containerID="f5b9e5647ea5f954c0ea912767a82b685b7d40c2d386e27f18a79757ff6d16f7" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.077435 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c4774c875-vx44d" event={"ID":"7ce3dc1d-72b9-4512-8567-f7514125a3cd","Type":"ContainerStarted","Data":"ac6f1aff184ffab5cd7df992f3dc52b68b0a5661067b5b77f453f9e42f699d38"} Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.077775 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c4774c875-vx44d" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.114754 4945 scope.go:117] "RemoveContainer" containerID="bca14182d9aab1e301fc6957cdc2702f0493b3e181eedd184d039b2d5fc46fa5" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.123119 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c4774c875-vx44d" podStartSLOduration=3.12309105 podStartE2EDuration="3.12309105s" podCreationTimestamp="2026-01-09 00:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:49:09.115501883 +0000 UTC m=+5619.426660829" watchObservedRunningTime="2026-01-09 00:49:09.12309105 +0000 UTC m=+5619.434249996" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.142676 4945 scope.go:117] "RemoveContainer" containerID="f5b9e5647ea5f954c0ea912767a82b685b7d40c2d386e27f18a79757ff6d16f7" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.144896 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66c7c8797-r42n4"] Jan 09 00:49:09 crc kubenswrapper[4945]: E0109 00:49:09.146139 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5b9e5647ea5f954c0ea912767a82b685b7d40c2d386e27f18a79757ff6d16f7\": container with ID starting with f5b9e5647ea5f954c0ea912767a82b685b7d40c2d386e27f18a79757ff6d16f7 not found: ID does not exist" containerID="f5b9e5647ea5f954c0ea912767a82b685b7d40c2d386e27f18a79757ff6d16f7" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.146191 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5b9e5647ea5f954c0ea912767a82b685b7d40c2d386e27f18a79757ff6d16f7"} err="failed to get container status \"f5b9e5647ea5f954c0ea912767a82b685b7d40c2d386e27f18a79757ff6d16f7\": rpc error: code = NotFound desc = could not find container \"f5b9e5647ea5f954c0ea912767a82b685b7d40c2d386e27f18a79757ff6d16f7\": container with ID starting with f5b9e5647ea5f954c0ea912767a82b685b7d40c2d386e27f18a79757ff6d16f7 not found: ID does not exist" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 
00:49:09.146218 4945 scope.go:117] "RemoveContainer" containerID="bca14182d9aab1e301fc6957cdc2702f0493b3e181eedd184d039b2d5fc46fa5" Jan 09 00:49:09 crc kubenswrapper[4945]: E0109 00:49:09.146965 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bca14182d9aab1e301fc6957cdc2702f0493b3e181eedd184d039b2d5fc46fa5\": container with ID starting with bca14182d9aab1e301fc6957cdc2702f0493b3e181eedd184d039b2d5fc46fa5 not found: ID does not exist" containerID="bca14182d9aab1e301fc6957cdc2702f0493b3e181eedd184d039b2d5fc46fa5" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.147026 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca14182d9aab1e301fc6957cdc2702f0493b3e181eedd184d039b2d5fc46fa5"} err="failed to get container status \"bca14182d9aab1e301fc6957cdc2702f0493b3e181eedd184d039b2d5fc46fa5\": rpc error: code = NotFound desc = could not find container \"bca14182d9aab1e301fc6957cdc2702f0493b3e181eedd184d039b2d5fc46fa5\": container with ID starting with bca14182d9aab1e301fc6957cdc2702f0493b3e181eedd184d039b2d5fc46fa5 not found: ID does not exist" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.152229 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66c7c8797-r42n4"] Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.209844 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 09 00:49:09 crc kubenswrapper[4945]: E0109 00:49:09.210283 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae127fab-553c-460f-9952-c9cc20307aa1" containerName="init" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.210298 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae127fab-553c-460f-9952-c9cc20307aa1" containerName="init" Jan 09 00:49:09 crc kubenswrapper[4945]: E0109 00:49:09.210324 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae127fab-553c-460f-9952-c9cc20307aa1" containerName="dnsmasq-dns" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.210330 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae127fab-553c-460f-9952-c9cc20307aa1" containerName="dnsmasq-dns" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.210506 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae127fab-553c-460f-9952-c9cc20307aa1" containerName="dnsmasq-dns" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.211123 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.212866 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.217431 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.366399 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-65d2835b-bc34-4f52-9465-8db15c16395a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65d2835b-bc34-4f52-9465-8db15c16395a\") pod \"ovn-copy-data\" (UID: \"807e629f-7d47-4c5f-8b6c-ed1191b40698\") " pod="openstack/ovn-copy-data" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.366466 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/807e629f-7d47-4c5f-8b6c-ed1191b40698-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"807e629f-7d47-4c5f-8b6c-ed1191b40698\") " pod="openstack/ovn-copy-data" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.366513 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97hlg\" (UniqueName: \"kubernetes.io/projected/807e629f-7d47-4c5f-8b6c-ed1191b40698-kube-api-access-97hlg\") pod \"ovn-copy-data\" (UID: \"807e629f-7d47-4c5f-8b6c-ed1191b40698\") " pod="openstack/ovn-copy-data" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.468550 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/807e629f-7d47-4c5f-8b6c-ed1191b40698-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"807e629f-7d47-4c5f-8b6c-ed1191b40698\") " pod="openstack/ovn-copy-data" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.468643 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97hlg\" (UniqueName: \"kubernetes.io/projected/807e629f-7d47-4c5f-8b6c-ed1191b40698-kube-api-access-97hlg\") pod \"ovn-copy-data\" (UID: \"807e629f-7d47-4c5f-8b6c-ed1191b40698\") " pod="openstack/ovn-copy-data" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.468771 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-65d2835b-bc34-4f52-9465-8db15c16395a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65d2835b-bc34-4f52-9465-8db15c16395a\") pod \"ovn-copy-data\" (UID: \"807e629f-7d47-4c5f-8b6c-ed1191b40698\") " pod="openstack/ovn-copy-data" Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.472843 4945 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.472888 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-65d2835b-bc34-4f52-9465-8db15c16395a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65d2835b-bc34-4f52-9465-8db15c16395a\") pod \"ovn-copy-data\" (UID: \"807e629f-7d47-4c5f-8b6c-ed1191b40698\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cff818e1751e67bcd1a7b2d7b33de80844ec7218f2fb54dc2d524f357959c60e/globalmount\"" pod="openstack/ovn-copy-data"
Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.473076 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/807e629f-7d47-4c5f-8b6c-ed1191b40698-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"807e629f-7d47-4c5f-8b6c-ed1191b40698\") " pod="openstack/ovn-copy-data"
Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.484727 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97hlg\" (UniqueName: \"kubernetes.io/projected/807e629f-7d47-4c5f-8b6c-ed1191b40698-kube-api-access-97hlg\") pod \"ovn-copy-data\" (UID: \"807e629f-7d47-4c5f-8b6c-ed1191b40698\") " pod="openstack/ovn-copy-data"
Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.505611 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-65d2835b-bc34-4f52-9465-8db15c16395a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65d2835b-bc34-4f52-9465-8db15c16395a\") pod \"ovn-copy-data\" (UID: \"807e629f-7d47-4c5f-8b6c-ed1191b40698\") " pod="openstack/ovn-copy-data"
Jan 09 00:49:09 crc kubenswrapper[4945]: I0109 00:49:09.530589 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Jan 09 00:49:10 crc kubenswrapper[4945]: I0109 00:49:10.032120 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae127fab-553c-460f-9952-c9cc20307aa1" path="/var/lib/kubelet/pods/ae127fab-553c-460f-9952-c9cc20307aa1/volumes"
Jan 09 00:49:10 crc kubenswrapper[4945]: I0109 00:49:10.054734 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"]
Jan 09 00:49:10 crc kubenswrapper[4945]: W0109 00:49:10.064720 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod807e629f_7d47_4c5f_8b6c_ed1191b40698.slice/crio-cf6c74f42eb6b7a9bab4f8cbe84b0ccd9fc29e0c04d0d5efaca97a823975026b WatchSource:0}: Error finding container cf6c74f42eb6b7a9bab4f8cbe84b0ccd9fc29e0c04d0d5efaca97a823975026b: Status 404 returned error can't find the container with id cf6c74f42eb6b7a9bab4f8cbe84b0ccd9fc29e0c04d0d5efaca97a823975026b
Jan 09 00:49:10 crc kubenswrapper[4945]: I0109 00:49:10.096758 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"807e629f-7d47-4c5f-8b6c-ed1191b40698","Type":"ContainerStarted","Data":"cf6c74f42eb6b7a9bab4f8cbe84b0ccd9fc29e0c04d0d5efaca97a823975026b"}
Jan 09 00:49:11 crc kubenswrapper[4945]: I0109 00:49:11.109117 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"807e629f-7d47-4c5f-8b6c-ed1191b40698","Type":"ContainerStarted","Data":"4d6ef5d4dfa45e97094dd21d9dd1247b6536d1cb12b448067a8876cb0f125d71"}
Jan 09 00:49:11 crc kubenswrapper[4945]: I0109 00:49:11.133719 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.133629349 podStartE2EDuration="3.133629349s" podCreationTimestamp="2026-01-09 00:49:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:49:11.124084225 +0000 UTC m=+5621.435243181" watchObservedRunningTime="2026-01-09 00:49:11.133629349 +0000 UTC m=+5621.444788295"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.282822 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.285186 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.290979 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.291254 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.291549 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-lmhjn"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.302878 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.384931 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh7vb\" (UniqueName: \"kubernetes.io/projected/8845476d-879e-4e67-b913-4fd5c1c8f8cc-kube-api-access-zh7vb\") pod \"ovn-northd-0\" (UID: \"8845476d-879e-4e67-b913-4fd5c1c8f8cc\") " pod="openstack/ovn-northd-0"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.385035 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8845476d-879e-4e67-b913-4fd5c1c8f8cc-scripts\") pod \"ovn-northd-0\" (UID: \"8845476d-879e-4e67-b913-4fd5c1c8f8cc\") " pod="openstack/ovn-northd-0"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.385128 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8845476d-879e-4e67-b913-4fd5c1c8f8cc-config\") pod \"ovn-northd-0\" (UID: \"8845476d-879e-4e67-b913-4fd5c1c8f8cc\") " pod="openstack/ovn-northd-0"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.385189 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8845476d-879e-4e67-b913-4fd5c1c8f8cc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8845476d-879e-4e67-b913-4fd5c1c8f8cc\") " pod="openstack/ovn-northd-0"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.385239 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8845476d-879e-4e67-b913-4fd5c1c8f8cc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8845476d-879e-4e67-b913-4fd5c1c8f8cc\") " pod="openstack/ovn-northd-0"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.487106 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8845476d-879e-4e67-b913-4fd5c1c8f8cc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8845476d-879e-4e67-b913-4fd5c1c8f8cc\") " pod="openstack/ovn-northd-0"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.487202 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8845476d-879e-4e67-b913-4fd5c1c8f8cc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8845476d-879e-4e67-b913-4fd5c1c8f8cc\") " pod="openstack/ovn-northd-0"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.487273 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh7vb\" (UniqueName: \"kubernetes.io/projected/8845476d-879e-4e67-b913-4fd5c1c8f8cc-kube-api-access-zh7vb\") pod \"ovn-northd-0\" (UID: \"8845476d-879e-4e67-b913-4fd5c1c8f8cc\") " pod="openstack/ovn-northd-0"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.487304 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8845476d-879e-4e67-b913-4fd5c1c8f8cc-scripts\") pod \"ovn-northd-0\" (UID: \"8845476d-879e-4e67-b913-4fd5c1c8f8cc\") " pod="openstack/ovn-northd-0"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.487363 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8845476d-879e-4e67-b913-4fd5c1c8f8cc-config\") pod \"ovn-northd-0\" (UID: \"8845476d-879e-4e67-b913-4fd5c1c8f8cc\") " pod="openstack/ovn-northd-0"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.487623 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8845476d-879e-4e67-b913-4fd5c1c8f8cc-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8845476d-879e-4e67-b913-4fd5c1c8f8cc\") " pod="openstack/ovn-northd-0"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.488279 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8845476d-879e-4e67-b913-4fd5c1c8f8cc-config\") pod \"ovn-northd-0\" (UID: \"8845476d-879e-4e67-b913-4fd5c1c8f8cc\") " pod="openstack/ovn-northd-0"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.488376 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8845476d-879e-4e67-b913-4fd5c1c8f8cc-scripts\") pod \"ovn-northd-0\" (UID: \"8845476d-879e-4e67-b913-4fd5c1c8f8cc\") " pod="openstack/ovn-northd-0"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.503920 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8845476d-879e-4e67-b913-4fd5c1c8f8cc-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8845476d-879e-4e67-b913-4fd5c1c8f8cc\") " pod="openstack/ovn-northd-0"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.513086 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh7vb\" (UniqueName: \"kubernetes.io/projected/8845476d-879e-4e67-b913-4fd5c1c8f8cc-kube-api-access-zh7vb\") pod \"ovn-northd-0\" (UID: \"8845476d-879e-4e67-b913-4fd5c1c8f8cc\") " pod="openstack/ovn-northd-0"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.606330 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.697257 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c4774c875-vx44d"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.757348 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-q7624"]
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.757614 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b7946d7b9-q7624" podUID="5b3db303-0c17-45f6-846d-d34d14652a5a" containerName="dnsmasq-dns" containerID="cri-o://6a51ceda202d236c8ef69e2503babc7f6316564a6e6bd24c62750acab9604ec8" gracePeriod=10
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.848380 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w8wzf"]
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.850705 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w8wzf"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.857889 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w8wzf"]
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.995216 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t6xh\" (UniqueName: \"kubernetes.io/projected/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-kube-api-access-6t6xh\") pod \"certified-operators-w8wzf\" (UID: \"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072\") " pod="openshift-marketplace/certified-operators-w8wzf"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.995292 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-utilities\") pod \"certified-operators-w8wzf\" (UID: \"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072\") " pod="openshift-marketplace/certified-operators-w8wzf"
Jan 09 00:49:16 crc kubenswrapper[4945]: I0109 00:49:16.995362 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-catalog-content\") pod \"certified-operators-w8wzf\" (UID: \"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072\") " pod="openshift-marketplace/certified-operators-w8wzf"
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.097094 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t6xh\" (UniqueName: \"kubernetes.io/projected/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-kube-api-access-6t6xh\") pod \"certified-operators-w8wzf\" (UID: \"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072\") " pod="openshift-marketplace/certified-operators-w8wzf"
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.097159 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-utilities\") pod \"certified-operators-w8wzf\" (UID: \"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072\") " pod="openshift-marketplace/certified-operators-w8wzf"
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.097209 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-catalog-content\") pod \"certified-operators-w8wzf\" (UID: \"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072\") " pod="openshift-marketplace/certified-operators-w8wzf"
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.097735 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-catalog-content\") pod \"certified-operators-w8wzf\" (UID: \"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072\") " pod="openshift-marketplace/certified-operators-w8wzf"
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.098290 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-utilities\") pod \"certified-operators-w8wzf\" (UID: \"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072\") " pod="openshift-marketplace/certified-operators-w8wzf"
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.101248 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.123927 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t6xh\" (UniqueName: \"kubernetes.io/projected/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-kube-api-access-6t6xh\") pod \"certified-operators-w8wzf\" (UID: \"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072\") " pod="openshift-marketplace/certified-operators-w8wzf"
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.158851 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8845476d-879e-4e67-b913-4fd5c1c8f8cc","Type":"ContainerStarted","Data":"e79d031248b201f83d4b08a173a054879506e187defb972586792326816b9520"}
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.161668 4945 generic.go:334] "Generic (PLEG): container finished" podID="5b3db303-0c17-45f6-846d-d34d14652a5a" containerID="6a51ceda202d236c8ef69e2503babc7f6316564a6e6bd24c62750acab9604ec8" exitCode=0
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.161701 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-q7624" event={"ID":"5b3db303-0c17-45f6-846d-d34d14652a5a","Type":"ContainerDied","Data":"6a51ceda202d236c8ef69e2503babc7f6316564a6e6bd24c62750acab9604ec8"}
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.169113 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w8wzf"
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.261504 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-q7624"
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.405113 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b3db303-0c17-45f6-846d-d34d14652a5a-config\") pod \"5b3db303-0c17-45f6-846d-d34d14652a5a\" (UID: \"5b3db303-0c17-45f6-846d-d34d14652a5a\") "
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.405258 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b3db303-0c17-45f6-846d-d34d14652a5a-dns-svc\") pod \"5b3db303-0c17-45f6-846d-d34d14652a5a\" (UID: \"5b3db303-0c17-45f6-846d-d34d14652a5a\") "
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.405284 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l89ks\" (UniqueName: \"kubernetes.io/projected/5b3db303-0c17-45f6-846d-d34d14652a5a-kube-api-access-l89ks\") pod \"5b3db303-0c17-45f6-846d-d34d14652a5a\" (UID: \"5b3db303-0c17-45f6-846d-d34d14652a5a\") "
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.410910 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b3db303-0c17-45f6-846d-d34d14652a5a-kube-api-access-l89ks" (OuterVolumeSpecName: "kube-api-access-l89ks") pod "5b3db303-0c17-45f6-846d-d34d14652a5a" (UID: "5b3db303-0c17-45f6-846d-d34d14652a5a"). InnerVolumeSpecName "kube-api-access-l89ks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.475978 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b3db303-0c17-45f6-846d-d34d14652a5a-config" (OuterVolumeSpecName: "config") pod "5b3db303-0c17-45f6-846d-d34d14652a5a" (UID: "5b3db303-0c17-45f6-846d-d34d14652a5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.477878 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b3db303-0c17-45f6-846d-d34d14652a5a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5b3db303-0c17-45f6-846d-d34d14652a5a" (UID: "5b3db303-0c17-45f6-846d-d34d14652a5a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.508216 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b3db303-0c17-45f6-846d-d34d14652a5a-config\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.508259 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b3db303-0c17-45f6-846d-d34d14652a5a-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.508273 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l89ks\" (UniqueName: \"kubernetes.io/projected/5b3db303-0c17-45f6-846d-d34d14652a5a-kube-api-access-l89ks\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:17 crc kubenswrapper[4945]: I0109 00:49:17.699311 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w8wzf"]
Jan 09 00:49:18 crc kubenswrapper[4945]: I0109 00:49:18.172202 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-q7624" event={"ID":"5b3db303-0c17-45f6-846d-d34d14652a5a","Type":"ContainerDied","Data":"d41bdf9465912ec17d888c8ae16d481c5a5e6697763695b816901412b7859dfa"}
Jan 09 00:49:18 crc kubenswrapper[4945]: I0109 00:49:18.172274 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-q7624"
Jan 09 00:49:18 crc kubenswrapper[4945]: I0109 00:49:18.172572 4945 scope.go:117] "RemoveContainer" containerID="6a51ceda202d236c8ef69e2503babc7f6316564a6e6bd24c62750acab9604ec8"
Jan 09 00:49:18 crc kubenswrapper[4945]: I0109 00:49:18.174232 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8845476d-879e-4e67-b913-4fd5c1c8f8cc","Type":"ContainerStarted","Data":"2ad8dde80c1bbfcb5d33ca72fcbc32be2b94d74a9667c8e3770ffd420f5299ae"}
Jan 09 00:49:18 crc kubenswrapper[4945]: I0109 00:49:18.174279 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8845476d-879e-4e67-b913-4fd5c1c8f8cc","Type":"ContainerStarted","Data":"5bda9dbb8bedd22b270be502b608291393ea4aa71d62b2e495055e4b2a4e334f"}
Jan 09 00:49:18 crc kubenswrapper[4945]: I0109 00:49:18.174353 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 09 00:49:18 crc kubenswrapper[4945]: I0109 00:49:18.177145 4945 generic.go:334] "Generic (PLEG): container finished" podID="7cf11cc1-f91b-45eb-b457-b3fbb4b6f072" containerID="757fe540379ddc56d20b7f8796a559ecaf1050f85825a8265b5abebcd2df9623" exitCode=0
Jan 09 00:49:18 crc kubenswrapper[4945]: I0109 00:49:18.177215 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8wzf" event={"ID":"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072","Type":"ContainerDied","Data":"757fe540379ddc56d20b7f8796a559ecaf1050f85825a8265b5abebcd2df9623"}
Jan 09 00:49:18 crc kubenswrapper[4945]: I0109 00:49:18.177923 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8wzf" event={"ID":"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072","Type":"ContainerStarted","Data":"5cbc581cd14681e8f5e72a8f2c61300ab9d4b02bd6466011541efc02072581cc"}
Jan 09 00:49:18 crc kubenswrapper[4945]: I0109 00:49:18.179459 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 09 00:49:18 crc kubenswrapper[4945]: I0109 00:49:18.196883 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.196857213 podStartE2EDuration="2.196857213s" podCreationTimestamp="2026-01-09 00:49:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:49:18.195967761 +0000 UTC m=+5628.507126707" watchObservedRunningTime="2026-01-09 00:49:18.196857213 +0000 UTC m=+5628.508016159"
Jan 09 00:49:18 crc kubenswrapper[4945]: I0109 00:49:18.207357 4945 scope.go:117] "RemoveContainer" containerID="c31795334f4cee1db3cb451b5fc10c0f856ebbf97b797b716f274d6c37742e60"
Jan 09 00:49:18 crc kubenswrapper[4945]: I0109 00:49:18.249235 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-q7624"]
Jan 09 00:49:18 crc kubenswrapper[4945]: I0109 00:49:18.253262 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-q7624"]
Jan 09 00:49:19 crc kubenswrapper[4945]: I0109 00:49:19.190205 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8wzf" event={"ID":"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072","Type":"ContainerStarted","Data":"dd242eb88611edde5b82f4a9696d7fadf35a3fb5c07ff9c655581c8f46581db6"}
Jan 09 00:49:20 crc kubenswrapper[4945]: I0109 00:49:20.011142 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b3db303-0c17-45f6-846d-d34d14652a5a" path="/var/lib/kubelet/pods/5b3db303-0c17-45f6-846d-d34d14652a5a/volumes"
Jan 09 00:49:20 crc kubenswrapper[4945]: I0109 00:49:20.200363 4945 generic.go:334] "Generic (PLEG): container finished" podID="7cf11cc1-f91b-45eb-b457-b3fbb4b6f072" containerID="dd242eb88611edde5b82f4a9696d7fadf35a3fb5c07ff9c655581c8f46581db6" exitCode=0
Jan 09 00:49:20 crc kubenswrapper[4945]: I0109 00:49:20.200416 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8wzf" event={"ID":"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072","Type":"ContainerDied","Data":"dd242eb88611edde5b82f4a9696d7fadf35a3fb5c07ff9c655581c8f46581db6"}
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.041803 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-wtdj8"]
Jan 09 00:49:21 crc kubenswrapper[4945]: E0109 00:49:21.042572 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3db303-0c17-45f6-846d-d34d14652a5a" containerName="dnsmasq-dns"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.042597 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3db303-0c17-45f6-846d-d34d14652a5a" containerName="dnsmasq-dns"
Jan 09 00:49:21 crc kubenswrapper[4945]: E0109 00:49:21.042611 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3db303-0c17-45f6-846d-d34d14652a5a" containerName="init"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.042619 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3db303-0c17-45f6-846d-d34d14652a5a" containerName="init"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.042843 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3db303-0c17-45f6-846d-d34d14652a5a" containerName="dnsmasq-dns"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.043515 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wtdj8"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.063262 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wtdj8"]
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.141360 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d074-account-create-update-6jvq2"]
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.142440 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d074-account-create-update-6jvq2"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.145552 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.153712 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d074-account-create-update-6jvq2"]
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.167772 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9v84\" (UniqueName: \"kubernetes.io/projected/3e3de133-2843-4b80-98a5-a59edc83e4f5-kube-api-access-x9v84\") pod \"keystone-db-create-wtdj8\" (UID: \"3e3de133-2843-4b80-98a5-a59edc83e4f5\") " pod="openstack/keystone-db-create-wtdj8"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.167864 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3de133-2843-4b80-98a5-a59edc83e4f5-operator-scripts\") pod \"keystone-db-create-wtdj8\" (UID: \"3e3de133-2843-4b80-98a5-a59edc83e4f5\") " pod="openstack/keystone-db-create-wtdj8"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.210042 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8wzf" event={"ID":"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072","Type":"ContainerStarted","Data":"0b1b71993af0a0c8ecd5bd94aabd0e8badac29e6c3fa6b81470240791841604a"}
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.237329 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w8wzf" podStartSLOduration=2.7779598439999997 podStartE2EDuration="5.237304776s" podCreationTimestamp="2026-01-09 00:49:16 +0000 UTC" firstStartedPulling="2026-01-09 00:49:18.17923734 +0000 UTC m=+5628.490396276" lastFinishedPulling="2026-01-09 00:49:20.638582262 +0000 UTC m=+5630.949741208" observedRunningTime="2026-01-09 00:49:21.230066429 +0000 UTC m=+5631.541225375" watchObservedRunningTime="2026-01-09 00:49:21.237304776 +0000 UTC m=+5631.548463722"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.269158 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9v84\" (UniqueName: \"kubernetes.io/projected/3e3de133-2843-4b80-98a5-a59edc83e4f5-kube-api-access-x9v84\") pod \"keystone-db-create-wtdj8\" (UID: \"3e3de133-2843-4b80-98a5-a59edc83e4f5\") " pod="openstack/keystone-db-create-wtdj8"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.269459 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3de133-2843-4b80-98a5-a59edc83e4f5-operator-scripts\") pod \"keystone-db-create-wtdj8\" (UID: \"3e3de133-2843-4b80-98a5-a59edc83e4f5\") " pod="openstack/keystone-db-create-wtdj8"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.269524 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qndf7\" (UniqueName: \"kubernetes.io/projected/a2db8755-cab9-49ac-a3af-52c1cbe036a1-kube-api-access-qndf7\") pod \"keystone-d074-account-create-update-6jvq2\" (UID: \"a2db8755-cab9-49ac-a3af-52c1cbe036a1\") " pod="openstack/keystone-d074-account-create-update-6jvq2"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.269568 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2db8755-cab9-49ac-a3af-52c1cbe036a1-operator-scripts\") pod \"keystone-d074-account-create-update-6jvq2\" (UID: \"a2db8755-cab9-49ac-a3af-52c1cbe036a1\") " pod="openstack/keystone-d074-account-create-update-6jvq2"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.270879 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3de133-2843-4b80-98a5-a59edc83e4f5-operator-scripts\") pod \"keystone-db-create-wtdj8\" (UID: \"3e3de133-2843-4b80-98a5-a59edc83e4f5\") " pod="openstack/keystone-db-create-wtdj8"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.301830 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9v84\" (UniqueName: \"kubernetes.io/projected/3e3de133-2843-4b80-98a5-a59edc83e4f5-kube-api-access-x9v84\") pod \"keystone-db-create-wtdj8\" (UID: \"3e3de133-2843-4b80-98a5-a59edc83e4f5\") " pod="openstack/keystone-db-create-wtdj8"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.361017 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wtdj8"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.370981 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qndf7\" (UniqueName: \"kubernetes.io/projected/a2db8755-cab9-49ac-a3af-52c1cbe036a1-kube-api-access-qndf7\") pod \"keystone-d074-account-create-update-6jvq2\" (UID: \"a2db8755-cab9-49ac-a3af-52c1cbe036a1\") " pod="openstack/keystone-d074-account-create-update-6jvq2"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.371082 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2db8755-cab9-49ac-a3af-52c1cbe036a1-operator-scripts\") pod \"keystone-d074-account-create-update-6jvq2\" (UID: \"a2db8755-cab9-49ac-a3af-52c1cbe036a1\") " pod="openstack/keystone-d074-account-create-update-6jvq2"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.372102 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2db8755-cab9-49ac-a3af-52c1cbe036a1-operator-scripts\") pod \"keystone-d074-account-create-update-6jvq2\" (UID: \"a2db8755-cab9-49ac-a3af-52c1cbe036a1\") " pod="openstack/keystone-d074-account-create-update-6jvq2"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.391539 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qndf7\" (UniqueName: \"kubernetes.io/projected/a2db8755-cab9-49ac-a3af-52c1cbe036a1-kube-api-access-qndf7\") pod \"keystone-d074-account-create-update-6jvq2\" (UID: \"a2db8755-cab9-49ac-a3af-52c1cbe036a1\") " pod="openstack/keystone-d074-account-create-update-6jvq2"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.465644 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d074-account-create-update-6jvq2"
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.789442 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wtdj8"]
Jan 09 00:49:21 crc kubenswrapper[4945]: I0109 00:49:21.920736 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d074-account-create-update-6jvq2"]
Jan 09 00:49:21 crc kubenswrapper[4945]: W0109 00:49:21.925748 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2db8755_cab9_49ac_a3af_52c1cbe036a1.slice/crio-c5699ace010462ec3c47dc74345db084ce2837183d84af41493dcd34d5bf4185 WatchSource:0}: Error finding container c5699ace010462ec3c47dc74345db084ce2837183d84af41493dcd34d5bf4185: Status 404 returned error can't find the container with id c5699ace010462ec3c47dc74345db084ce2837183d84af41493dcd34d5bf4185
Jan 09 00:49:22 crc kubenswrapper[4945]: I0109 00:49:22.218513 4945 generic.go:334] "Generic (PLEG): container finished" podID="3e3de133-2843-4b80-98a5-a59edc83e4f5" containerID="6a786fd11a87d6a7c70da8b8956ec908c33aa6d7991a43e9945b4f384743abd8" exitCode=0
Jan 09 00:49:22 crc kubenswrapper[4945]: I0109 00:49:22.218574 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wtdj8" event={"ID":"3e3de133-2843-4b80-98a5-a59edc83e4f5","Type":"ContainerDied","Data":"6a786fd11a87d6a7c70da8b8956ec908c33aa6d7991a43e9945b4f384743abd8"}
Jan 09 00:49:22 crc kubenswrapper[4945]: I0109 00:49:22.218629 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wtdj8" event={"ID":"3e3de133-2843-4b80-98a5-a59edc83e4f5","Type":"ContainerStarted","Data":"58d7bb44c23ae4e72d47fc297cbce372fd086b5b6ffc1eb785c5413c0c3cbbcd"}
Jan 09 00:49:22 crc kubenswrapper[4945]: I0109 00:49:22.220132 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d074-account-create-update-6jvq2" event={"ID":"a2db8755-cab9-49ac-a3af-52c1cbe036a1","Type":"ContainerStarted","Data":"a260d15f0505773d413361720fa02a27eeaf7dbb3a7a9c7c684870a85f088119"}
Jan 09 00:49:22 crc kubenswrapper[4945]: I0109 00:49:22.220170 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d074-account-create-update-6jvq2" event={"ID":"a2db8755-cab9-49ac-a3af-52c1cbe036a1","Type":"ContainerStarted","Data":"c5699ace010462ec3c47dc74345db084ce2837183d84af41493dcd34d5bf4185"}
Jan 09 00:49:22 crc kubenswrapper[4945]: I0109 00:49:22.258152 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-d074-account-create-update-6jvq2" podStartSLOduration=1.258129738 podStartE2EDuration="1.258129738s" podCreationTimestamp="2026-01-09 00:49:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:49:22.252877169 +0000 UTC m=+5632.564036115" watchObservedRunningTime="2026-01-09 00:49:22.258129738 +0000 UTC m=+5632.569288684"
Jan 09 00:49:23 crc kubenswrapper[4945]: I0109 00:49:23.229213 4945 generic.go:334] "Generic (PLEG): container finished" podID="a2db8755-cab9-49ac-a3af-52c1cbe036a1" containerID="a260d15f0505773d413361720fa02a27eeaf7dbb3a7a9c7c684870a85f088119" exitCode=0
Jan 09 00:49:23 crc kubenswrapper[4945]: I0109 00:49:23.229320 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d074-account-create-update-6jvq2" event={"ID":"a2db8755-cab9-49ac-a3af-52c1cbe036a1","Type":"ContainerDied","Data":"a260d15f0505773d413361720fa02a27eeaf7dbb3a7a9c7c684870a85f088119"}
Jan 09 00:49:23 crc kubenswrapper[4945]: I0109 00:49:23.498288 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wtdj8"
Jan 09 00:49:23 crc kubenswrapper[4945]: I0109 00:49:23.657116 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3de133-2843-4b80-98a5-a59edc83e4f5-operator-scripts\") pod \"3e3de133-2843-4b80-98a5-a59edc83e4f5\" (UID: \"3e3de133-2843-4b80-98a5-a59edc83e4f5\") "
Jan 09 00:49:23 crc kubenswrapper[4945]: I0109 00:49:23.657308 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9v84\" (UniqueName: \"kubernetes.io/projected/3e3de133-2843-4b80-98a5-a59edc83e4f5-kube-api-access-x9v84\") pod \"3e3de133-2843-4b80-98a5-a59edc83e4f5\" (UID: \"3e3de133-2843-4b80-98a5-a59edc83e4f5\") "
Jan 09 00:49:23 crc kubenswrapper[4945]: I0109 00:49:23.658314 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e3de133-2843-4b80-98a5-a59edc83e4f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e3de133-2843-4b80-98a5-a59edc83e4f5" (UID: "3e3de133-2843-4b80-98a5-a59edc83e4f5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:49:23 crc kubenswrapper[4945]: I0109 00:49:23.663468 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e3de133-2843-4b80-98a5-a59edc83e4f5-kube-api-access-x9v84" (OuterVolumeSpecName: "kube-api-access-x9v84") pod "3e3de133-2843-4b80-98a5-a59edc83e4f5" (UID: "3e3de133-2843-4b80-98a5-a59edc83e4f5"). InnerVolumeSpecName "kube-api-access-x9v84". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:49:23 crc kubenswrapper[4945]: I0109 00:49:23.759111 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9v84\" (UniqueName: \"kubernetes.io/projected/3e3de133-2843-4b80-98a5-a59edc83e4f5-kube-api-access-x9v84\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:23 crc kubenswrapper[4945]: I0109 00:49:23.759146 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e3de133-2843-4b80-98a5-a59edc83e4f5-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:24 crc kubenswrapper[4945]: I0109 00:49:24.238343 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wtdj8" event={"ID":"3e3de133-2843-4b80-98a5-a59edc83e4f5","Type":"ContainerDied","Data":"58d7bb44c23ae4e72d47fc297cbce372fd086b5b6ffc1eb785c5413c0c3cbbcd"}
Jan 09 00:49:24 crc kubenswrapper[4945]: I0109 00:49:24.238684 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58d7bb44c23ae4e72d47fc297cbce372fd086b5b6ffc1eb785c5413c0c3cbbcd"
Jan 09 00:49:24 crc kubenswrapper[4945]: I0109 00:49:24.238392 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wtdj8"
Jan 09 00:49:24 crc kubenswrapper[4945]: I0109 00:49:24.586154 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d074-account-create-update-6jvq2"
Jan 09 00:49:24 crc kubenswrapper[4945]: I0109 00:49:24.677247 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2db8755-cab9-49ac-a3af-52c1cbe036a1-operator-scripts\") pod \"a2db8755-cab9-49ac-a3af-52c1cbe036a1\" (UID: \"a2db8755-cab9-49ac-a3af-52c1cbe036a1\") "
Jan 09 00:49:24 crc kubenswrapper[4945]: I0109 00:49:24.677311 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qndf7\" (UniqueName: \"kubernetes.io/projected/a2db8755-cab9-49ac-a3af-52c1cbe036a1-kube-api-access-qndf7\") pod \"a2db8755-cab9-49ac-a3af-52c1cbe036a1\" (UID: \"a2db8755-cab9-49ac-a3af-52c1cbe036a1\") "
Jan 09 00:49:24 crc kubenswrapper[4945]: I0109 00:49:24.678235 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2db8755-cab9-49ac-a3af-52c1cbe036a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a2db8755-cab9-49ac-a3af-52c1cbe036a1" (UID: "a2db8755-cab9-49ac-a3af-52c1cbe036a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:49:24 crc kubenswrapper[4945]: I0109 00:49:24.682955 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2db8755-cab9-49ac-a3af-52c1cbe036a1-kube-api-access-qndf7" (OuterVolumeSpecName: "kube-api-access-qndf7") pod "a2db8755-cab9-49ac-a3af-52c1cbe036a1" (UID: "a2db8755-cab9-49ac-a3af-52c1cbe036a1"). InnerVolumeSpecName "kube-api-access-qndf7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:49:24 crc kubenswrapper[4945]: I0109 00:49:24.778609 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2db8755-cab9-49ac-a3af-52c1cbe036a1-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:24 crc kubenswrapper[4945]: I0109 00:49:24.778646 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qndf7\" (UniqueName: \"kubernetes.io/projected/a2db8755-cab9-49ac-a3af-52c1cbe036a1-kube-api-access-qndf7\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:25 crc kubenswrapper[4945]: I0109 00:49:25.246077 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d074-account-create-update-6jvq2" event={"ID":"a2db8755-cab9-49ac-a3af-52c1cbe036a1","Type":"ContainerDied","Data":"c5699ace010462ec3c47dc74345db084ce2837183d84af41493dcd34d5bf4185"}
Jan 09 00:49:25 crc kubenswrapper[4945]: I0109 00:49:25.246121 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5699ace010462ec3c47dc74345db084ce2837183d84af41493dcd34d5bf4185"
Jan 09 00:49:25 crc kubenswrapper[4945]: I0109 00:49:25.246146 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d074-account-create-update-6jvq2"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.696094 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-pdj57"]
Jan 09 00:49:26 crc kubenswrapper[4945]: E0109 00:49:26.696680 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2db8755-cab9-49ac-a3af-52c1cbe036a1" containerName="mariadb-account-create-update"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.696694 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2db8755-cab9-49ac-a3af-52c1cbe036a1" containerName="mariadb-account-create-update"
Jan 09 00:49:26 crc kubenswrapper[4945]: E0109 00:49:26.696720 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3de133-2843-4b80-98a5-a59edc83e4f5" containerName="mariadb-database-create"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.696727 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3de133-2843-4b80-98a5-a59edc83e4f5" containerName="mariadb-database-create"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.696896 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2db8755-cab9-49ac-a3af-52c1cbe036a1" containerName="mariadb-account-create-update"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.696910 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e3de133-2843-4b80-98a5-a59edc83e4f5" containerName="mariadb-database-create"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.697517 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pdj57"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.699712 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.702840 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.703249 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.706770 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-config-data\") pod \"keystone-db-sync-pdj57\" (UID: \"9c3356b2-a1e4-444c-83eb-8ce5b717d99c\") " pod="openstack/keystone-db-sync-pdj57"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.706871 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-combined-ca-bundle\") pod \"keystone-db-sync-pdj57\" (UID: \"9c3356b2-a1e4-444c-83eb-8ce5b717d99c\") " pod="openstack/keystone-db-sync-pdj57"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.706908 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr7pd\" (UniqueName: \"kubernetes.io/projected/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-kube-api-access-gr7pd\") pod \"keystone-db-sync-pdj57\" (UID: \"9c3356b2-a1e4-444c-83eb-8ce5b717d99c\") " pod="openstack/keystone-db-sync-pdj57"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.711896 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pdj57"]
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.712085 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zj4bd"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.807792 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-config-data\") pod \"keystone-db-sync-pdj57\" (UID: \"9c3356b2-a1e4-444c-83eb-8ce5b717d99c\") " pod="openstack/keystone-db-sync-pdj57"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.807912 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-combined-ca-bundle\") pod \"keystone-db-sync-pdj57\" (UID: \"9c3356b2-a1e4-444c-83eb-8ce5b717d99c\") " pod="openstack/keystone-db-sync-pdj57"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.807941 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr7pd\" (UniqueName: \"kubernetes.io/projected/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-kube-api-access-gr7pd\") pod \"keystone-db-sync-pdj57\" (UID: \"9c3356b2-a1e4-444c-83eb-8ce5b717d99c\") " pod="openstack/keystone-db-sync-pdj57"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.815408 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-combined-ca-bundle\") pod \"keystone-db-sync-pdj57\" (UID: \"9c3356b2-a1e4-444c-83eb-8ce5b717d99c\") " pod="openstack/keystone-db-sync-pdj57"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.816909 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-config-data\") pod \"keystone-db-sync-pdj57\" (UID: \"9c3356b2-a1e4-444c-83eb-8ce5b717d99c\") " pod="openstack/keystone-db-sync-pdj57"
Jan 09 00:49:26 crc kubenswrapper[4945]: I0109 00:49:26.825223 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr7pd\" (UniqueName: \"kubernetes.io/projected/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-kube-api-access-gr7pd\") pod \"keystone-db-sync-pdj57\" (UID: \"9c3356b2-a1e4-444c-83eb-8ce5b717d99c\") " pod="openstack/keystone-db-sync-pdj57"
Jan 09 00:49:27 crc kubenswrapper[4945]: I0109 00:49:27.013210 4945 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-db-sync-pdj57" Jan 09 00:49:27 crc kubenswrapper[4945]: I0109 00:49:27.170701 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w8wzf" Jan 09 00:49:27 crc kubenswrapper[4945]: I0109 00:49:27.171144 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w8wzf" Jan 09 00:49:27 crc kubenswrapper[4945]: I0109 00:49:27.240811 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w8wzf" Jan 09 00:49:27 crc kubenswrapper[4945]: I0109 00:49:27.284684 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pdj57"] Jan 09 00:49:27 crc kubenswrapper[4945]: I0109 00:49:27.322035 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w8wzf" Jan 09 00:49:27 crc kubenswrapper[4945]: I0109 00:49:27.477199 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w8wzf"] Jan 09 00:49:28 crc kubenswrapper[4945]: I0109 00:49:28.281507 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pdj57" event={"ID":"9c3356b2-a1e4-444c-83eb-8ce5b717d99c","Type":"ContainerStarted","Data":"ef925930dc2ed1bd687004c42f36bf352c11f26edd4b91c8ea76ecab6370c4c4"} Jan 09 00:49:28 crc kubenswrapper[4945]: I0109 00:49:28.281564 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pdj57" event={"ID":"9c3356b2-a1e4-444c-83eb-8ce5b717d99c","Type":"ContainerStarted","Data":"2e5429b417ce1c245f835cea5d50df62ef2f189398026f6bc4ee8659a9952fdf"} Jan 09 00:49:28 crc kubenswrapper[4945]: I0109 00:49:28.308856 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-pdj57" podStartSLOduration=2.308827964 podStartE2EDuration="2.308827964s" podCreationTimestamp="2026-01-09 00:49:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:49:28.308123216 +0000 UTC m=+5638.619282162" watchObservedRunningTime="2026-01-09 00:49:28.308827964 +0000 UTC m=+5638.619986930" Jan 09 00:49:29 crc kubenswrapper[4945]: I0109 00:49:29.294020 4945 generic.go:334] "Generic (PLEG): container finished" podID="9c3356b2-a1e4-444c-83eb-8ce5b717d99c" containerID="ef925930dc2ed1bd687004c42f36bf352c11f26edd4b91c8ea76ecab6370c4c4" exitCode=0 Jan 09 00:49:29 crc kubenswrapper[4945]: I0109 00:49:29.294115 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pdj57" event={"ID":"9c3356b2-a1e4-444c-83eb-8ce5b717d99c","Type":"ContainerDied","Data":"ef925930dc2ed1bd687004c42f36bf352c11f26edd4b91c8ea76ecab6370c4c4"} Jan 09 00:49:29 crc kubenswrapper[4945]: I0109 00:49:29.294564 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w8wzf" podUID="7cf11cc1-f91b-45eb-b457-b3fbb4b6f072" containerName="registry-server" containerID="cri-o://0b1b71993af0a0c8ecd5bd94aabd0e8badac29e6c3fa6b81470240791841604a" gracePeriod=2 Jan 09 00:49:29 crc kubenswrapper[4945]: I0109 00:49:29.741222 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w8wzf" Jan 09 00:49:29 crc kubenswrapper[4945]: I0109 00:49:29.859513 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t6xh\" (UniqueName: \"kubernetes.io/projected/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-kube-api-access-6t6xh\") pod \"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072\" (UID: \"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072\") " Jan 09 00:49:29 crc kubenswrapper[4945]: I0109 00:49:29.860026 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-utilities\") pod \"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072\" (UID: \"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072\") " Jan 09 00:49:29 crc kubenswrapper[4945]: I0109 00:49:29.860136 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-catalog-content\") pod \"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072\" (UID: \"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072\") " Jan 09 00:49:29 crc kubenswrapper[4945]: I0109 00:49:29.861132 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-utilities" (OuterVolumeSpecName: "utilities") pod "7cf11cc1-f91b-45eb-b457-b3fbb4b6f072" (UID: "7cf11cc1-f91b-45eb-b457-b3fbb4b6f072"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:49:29 crc kubenswrapper[4945]: I0109 00:49:29.867420 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-kube-api-access-6t6xh" (OuterVolumeSpecName: "kube-api-access-6t6xh") pod "7cf11cc1-f91b-45eb-b457-b3fbb4b6f072" (UID: "7cf11cc1-f91b-45eb-b457-b3fbb4b6f072"). InnerVolumeSpecName "kube-api-access-6t6xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:49:29 crc kubenswrapper[4945]: I0109 00:49:29.908953 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7cf11cc1-f91b-45eb-b457-b3fbb4b6f072" (UID: "7cf11cc1-f91b-45eb-b457-b3fbb4b6f072"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:49:29 crc kubenswrapper[4945]: I0109 00:49:29.962239 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 00:49:29 crc kubenswrapper[4945]: I0109 00:49:29.962797 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 00:49:29 crc kubenswrapper[4945]: I0109 00:49:29.962839 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t6xh\" (UniqueName: \"kubernetes.io/projected/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072-kube-api-access-6t6xh\") on node \"crc\" DevicePath \"\"" Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.303677 4945 generic.go:334] "Generic (PLEG): container finished" podID="7cf11cc1-f91b-45eb-b457-b3fbb4b6f072" containerID="0b1b71993af0a0c8ecd5bd94aabd0e8badac29e6c3fa6b81470240791841604a" exitCode=0 Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.303762 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8wzf" event={"ID":"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072","Type":"ContainerDied","Data":"0b1b71993af0a0c8ecd5bd94aabd0e8badac29e6c3fa6b81470240791841604a"} Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.303842 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8wzf" event={"ID":"7cf11cc1-f91b-45eb-b457-b3fbb4b6f072","Type":"ContainerDied","Data":"5cbc581cd14681e8f5e72a8f2c61300ab9d4b02bd6466011541efc02072581cc"} Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.303865 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w8wzf" Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.303881 4945 scope.go:117] "RemoveContainer" containerID="0b1b71993af0a0c8ecd5bd94aabd0e8badac29e6c3fa6b81470240791841604a" Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.330208 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w8wzf"] Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.334200 4945 scope.go:117] "RemoveContainer" containerID="dd242eb88611edde5b82f4a9696d7fadf35a3fb5c07ff9c655581c8f46581db6" Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.337152 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w8wzf"] Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.357881 4945 scope.go:117] "RemoveContainer" containerID="757fe540379ddc56d20b7f8796a559ecaf1050f85825a8265b5abebcd2df9623" Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.412780 4945 scope.go:117] "RemoveContainer" containerID="0b1b71993af0a0c8ecd5bd94aabd0e8badac29e6c3fa6b81470240791841604a" Jan 09 00:49:30 crc kubenswrapper[4945]: E0109 00:49:30.413344 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b1b71993af0a0c8ecd5bd94aabd0e8badac29e6c3fa6b81470240791841604a\": container with ID starting with 0b1b71993af0a0c8ecd5bd94aabd0e8badac29e6c3fa6b81470240791841604a not found: ID does not exist" containerID="0b1b71993af0a0c8ecd5bd94aabd0e8badac29e6c3fa6b81470240791841604a" Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.413442 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b1b71993af0a0c8ecd5bd94aabd0e8badac29e6c3fa6b81470240791841604a"} err="failed to get container status \"0b1b71993af0a0c8ecd5bd94aabd0e8badac29e6c3fa6b81470240791841604a\": rpc error: code = NotFound desc = could not find container \"0b1b71993af0a0c8ecd5bd94aabd0e8badac29e6c3fa6b81470240791841604a\": container with ID starting with 0b1b71993af0a0c8ecd5bd94aabd0e8badac29e6c3fa6b81470240791841604a not found: ID does not exist" Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.413494 4945 scope.go:117] "RemoveContainer" containerID="dd242eb88611edde5b82f4a9696d7fadf35a3fb5c07ff9c655581c8f46581db6" Jan 09 00:49:30 crc kubenswrapper[4945]: E0109 00:49:30.413858 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd242eb88611edde5b82f4a9696d7fadf35a3fb5c07ff9c655581c8f46581db6\": container with ID starting with dd242eb88611edde5b82f4a9696d7fadf35a3fb5c07ff9c655581c8f46581db6 not found: ID does not exist" containerID="dd242eb88611edde5b82f4a9696d7fadf35a3fb5c07ff9c655581c8f46581db6" Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.413886 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd242eb88611edde5b82f4a9696d7fadf35a3fb5c07ff9c655581c8f46581db6"} err="failed to get container status \"dd242eb88611edde5b82f4a9696d7fadf35a3fb5c07ff9c655581c8f46581db6\": rpc error: code = NotFound desc = could not find container \"dd242eb88611edde5b82f4a9696d7fadf35a3fb5c07ff9c655581c8f46581db6\": container with ID starting with dd242eb88611edde5b82f4a9696d7fadf35a3fb5c07ff9c655581c8f46581db6 not found: ID does not exist" Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.413909 4945 scope.go:117] "RemoveContainer" 
containerID="757fe540379ddc56d20b7f8796a559ecaf1050f85825a8265b5abebcd2df9623" Jan 09 00:49:30 crc kubenswrapper[4945]: E0109 00:49:30.414600 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"757fe540379ddc56d20b7f8796a559ecaf1050f85825a8265b5abebcd2df9623\": container with ID starting with 757fe540379ddc56d20b7f8796a559ecaf1050f85825a8265b5abebcd2df9623 not found: ID does not exist" containerID="757fe540379ddc56d20b7f8796a559ecaf1050f85825a8265b5abebcd2df9623" Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.414686 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"757fe540379ddc56d20b7f8796a559ecaf1050f85825a8265b5abebcd2df9623"} err="failed to get container status \"757fe540379ddc56d20b7f8796a559ecaf1050f85825a8265b5abebcd2df9623\": rpc error: code = NotFound desc = could not find container \"757fe540379ddc56d20b7f8796a559ecaf1050f85825a8265b5abebcd2df9623\": container with ID starting with 757fe540379ddc56d20b7f8796a559ecaf1050f85825a8265b5abebcd2df9623 not found: ID does not exist" Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.606557 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pdj57" Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.778458 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-config-data\") pod \"9c3356b2-a1e4-444c-83eb-8ce5b717d99c\" (UID: \"9c3356b2-a1e4-444c-83eb-8ce5b717d99c\") " Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.778541 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr7pd\" (UniqueName: \"kubernetes.io/projected/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-kube-api-access-gr7pd\") pod \"9c3356b2-a1e4-444c-83eb-8ce5b717d99c\" (UID: \"9c3356b2-a1e4-444c-83eb-8ce5b717d99c\") " Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.778802 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-combined-ca-bundle\") pod \"9c3356b2-a1e4-444c-83eb-8ce5b717d99c\" (UID: \"9c3356b2-a1e4-444c-83eb-8ce5b717d99c\") " Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.784766 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-kube-api-access-gr7pd" (OuterVolumeSpecName: "kube-api-access-gr7pd") pod "9c3356b2-a1e4-444c-83eb-8ce5b717d99c" (UID: "9c3356b2-a1e4-444c-83eb-8ce5b717d99c"). InnerVolumeSpecName "kube-api-access-gr7pd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.818461 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c3356b2-a1e4-444c-83eb-8ce5b717d99c" (UID: "9c3356b2-a1e4-444c-83eb-8ce5b717d99c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.826978 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-config-data" (OuterVolumeSpecName: "config-data") pod "9c3356b2-a1e4-444c-83eb-8ce5b717d99c" (UID: "9c3356b2-a1e4-444c-83eb-8ce5b717d99c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.881102 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.881136 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:49:30 crc kubenswrapper[4945]: I0109 00:49:30.881147 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr7pd\" (UniqueName: \"kubernetes.io/projected/9c3356b2-a1e4-444c-83eb-8ce5b717d99c-kube-api-access-gr7pd\") on node \"crc\" DevicePath \"\"" Jan 09 00:49:30 crc kubenswrapper[4945]: E0109 00:49:30.883819 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cf11cc1_f91b_45eb_b457_b3fbb4b6f072.slice/crio-5cbc581cd14681e8f5e72a8f2c61300ab9d4b02bd6466011541efc02072581cc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cf11cc1_f91b_45eb_b457_b3fbb4b6f072.slice\": RecentStats: unable to find data in memory cache]" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.321813 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pdj57" event={"ID":"9c3356b2-a1e4-444c-83eb-8ce5b717d99c","Type":"ContainerDied","Data":"2e5429b417ce1c245f835cea5d50df62ef2f189398026f6bc4ee8659a9952fdf"} Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.321864 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e5429b417ce1c245f835cea5d50df62ef2f189398026f6bc4ee8659a9952fdf" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.322028 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-pdj57" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.563294 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-745c4fff85-p6z29"] Jan 09 00:49:31 crc kubenswrapper[4945]: E0109 00:49:31.564029 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf11cc1-f91b-45eb-b457-b3fbb4b6f072" containerName="registry-server" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.564046 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf11cc1-f91b-45eb-b457-b3fbb4b6f072" containerName="registry-server" Jan 09 00:49:31 crc kubenswrapper[4945]: E0109 00:49:31.564055 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf11cc1-f91b-45eb-b457-b3fbb4b6f072" containerName="extract-content" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.564061 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf11cc1-f91b-45eb-b457-b3fbb4b6f072" containerName="extract-content" Jan 09 00:49:31 crc kubenswrapper[4945]: E0109 00:49:31.564093 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf11cc1-f91b-45eb-b457-b3fbb4b6f072" containerName="extract-utilities" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.564100 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf11cc1-f91b-45eb-b457-b3fbb4b6f072" containerName="extract-utilities" Jan 09 00:49:31 crc kubenswrapper[4945]: E0109 00:49:31.564109 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c3356b2-a1e4-444c-83eb-8ce5b717d99c" containerName="keystone-db-sync" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.564116 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c3356b2-a1e4-444c-83eb-8ce5b717d99c" containerName="keystone-db-sync" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.564274 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf11cc1-f91b-45eb-b457-b3fbb4b6f072" containerName="registry-server" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.564292 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c3356b2-a1e4-444c-83eb-8ce5b717d99c" containerName="keystone-db-sync" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.565461 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.574748 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745c4fff85-p6z29"] Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.593726 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-48vcn"] Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.608818 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.617859 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.618035 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.618121 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.618273 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.618381 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zj4bd" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.622224 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-48vcn"] Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.691384 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8cfw\" (UniqueName: \"kubernetes.io/projected/1cb1705d-9097-4779-8ee3-87f5924ab655-kube-api-access-g8cfw\") pod \"dnsmasq-dns-745c4fff85-p6z29\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.691449 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-dns-svc\") pod \"dnsmasq-dns-745c4fff85-p6z29\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.691482 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-ovsdbserver-sb\") pod \"dnsmasq-dns-745c4fff85-p6z29\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.691509 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-ovsdbserver-nb\") pod \"dnsmasq-dns-745c4fff85-p6z29\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.691540 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-config\") pod \"dnsmasq-dns-745c4fff85-p6z29\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.705109 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.793248 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-ovsdbserver-nb\") pod \"dnsmasq-dns-745c4fff85-p6z29\" (UID: 
\"1cb1705d-9097-4779-8ee3-87f5924ab655\") " pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.793577 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlkk4\" (UniqueName: \"kubernetes.io/projected/45e3be85-a173-4a88-a24b-bf711f72530d-kube-api-access-wlkk4\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.793655 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-scripts\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.793734 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-credential-keys\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.793832 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-config\") pod \"dnsmasq-dns-745c4fff85-p6z29\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.793930 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-config-data\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.794029 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-fernet-keys\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.794304 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8cfw\" (UniqueName: \"kubernetes.io/projected/1cb1705d-9097-4779-8ee3-87f5924ab655-kube-api-access-g8cfw\") pod \"dnsmasq-dns-745c4fff85-p6z29\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.794336 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-ovsdbserver-nb\") pod \"dnsmasq-dns-745c4fff85-p6z29\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.794451 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-dns-svc\") pod \"dnsmasq-dns-745c4fff85-p6z29\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " 
pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.794503 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-ovsdbserver-sb\") pod \"dnsmasq-dns-745c4fff85-p6z29\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.794535 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-combined-ca-bundle\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.794607 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-config\") pod \"dnsmasq-dns-745c4fff85-p6z29\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.795197 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-ovsdbserver-sb\") pod \"dnsmasq-dns-745c4fff85-p6z29\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.795334 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-dns-svc\") pod \"dnsmasq-dns-745c4fff85-p6z29\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.828066 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8cfw\" (UniqueName: \"kubernetes.io/projected/1cb1705d-9097-4779-8ee3-87f5924ab655-kube-api-access-g8cfw\") pod \"dnsmasq-dns-745c4fff85-p6z29\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.887960 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.895930 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlkk4\" (UniqueName: \"kubernetes.io/projected/45e3be85-a173-4a88-a24b-bf711f72530d-kube-api-access-wlkk4\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.895983 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-scripts\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.896024 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-credential-keys\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.896080 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-config-data\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.896128 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-fernet-keys\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.896242 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-combined-ca-bundle\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.900519 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-credential-keys\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.900541 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-fernet-keys\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.900599 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-combined-ca-bundle\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.901327 4945 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-config-data\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.901532 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-scripts\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.922234 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlkk4\" (UniqueName: \"kubernetes.io/projected/45e3be85-a173-4a88-a24b-bf711f72530d-kube-api-access-wlkk4\") pod \"keystone-bootstrap-48vcn\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:31 crc kubenswrapper[4945]: I0109 00:49:31.938503 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:32 crc kubenswrapper[4945]: I0109 00:49:32.017255 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cf11cc1-f91b-45eb-b457-b3fbb4b6f072" path="/var/lib/kubelet/pods/7cf11cc1-f91b-45eb-b457-b3fbb4b6f072/volumes" Jan 09 00:49:32 crc kubenswrapper[4945]: I0109 00:49:32.348709 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745c4fff85-p6z29"] Jan 09 00:49:32 crc kubenswrapper[4945]: I0109 00:49:32.448812 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-48vcn"] Jan 09 00:49:33 crc kubenswrapper[4945]: I0109 00:49:33.338125 4945 generic.go:334] "Generic (PLEG): container finished" podID="1cb1705d-9097-4779-8ee3-87f5924ab655" containerID="d40319eeade6620393952b7d683aaa279c7a0008a2a7457264c145f1f7f74a66" exitCode=0 Jan 09 00:49:33 crc kubenswrapper[4945]: I0109 00:49:33.338262 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745c4fff85-p6z29" event={"ID":"1cb1705d-9097-4779-8ee3-87f5924ab655","Type":"ContainerDied","Data":"d40319eeade6620393952b7d683aaa279c7a0008a2a7457264c145f1f7f74a66"} Jan 09 00:49:33 crc kubenswrapper[4945]: I0109 00:49:33.338481 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745c4fff85-p6z29" event={"ID":"1cb1705d-9097-4779-8ee3-87f5924ab655","Type":"ContainerStarted","Data":"42ce9bff01f95aa2fb35ba8d3844105ca2d0a511ac711415e5496f8a4960b9d3"} Jan 09 00:49:33 crc kubenswrapper[4945]: I0109 00:49:33.340127 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-48vcn" event={"ID":"45e3be85-a173-4a88-a24b-bf711f72530d","Type":"ContainerStarted","Data":"585fab122d0997953f1abe75abdb9eb9b7c1c7a67c35d44a8a9573c564e87e3e"} Jan 09 00:49:33 crc kubenswrapper[4945]: I0109 00:49:33.340149 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-48vcn" event={"ID":"45e3be85-a173-4a88-a24b-bf711f72530d","Type":"ContainerStarted","Data":"46e9fa5daec9448622acb355af571927261a68b0b8ca317c6b4eb344fa957e26"} Jan 09 00:49:33 crc kubenswrapper[4945]: I0109 00:49:33.384270 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-48vcn" podStartSLOduration=2.384251026 podStartE2EDuration="2.384251026s" podCreationTimestamp="2026-01-09 
00:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:49:33.37626654 +0000 UTC m=+5643.687425486" watchObservedRunningTime="2026-01-09 00:49:33.384251026 +0000 UTC m=+5643.695409972" Jan 09 00:49:34 crc kubenswrapper[4945]: I0109 00:49:34.351834 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745c4fff85-p6z29" event={"ID":"1cb1705d-9097-4779-8ee3-87f5924ab655","Type":"ContainerStarted","Data":"554a004fdcbb18dc5410f7a55e5af7d3b0ffbcfa1db9c2e054a1a5bba9d2f74d"} Jan 09 00:49:34 crc kubenswrapper[4945]: I0109 00:49:34.352255 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:49:34 crc kubenswrapper[4945]: I0109 00:49:34.387189 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-745c4fff85-p6z29" podStartSLOduration=3.387160588 podStartE2EDuration="3.387160588s" podCreationTimestamp="2026-01-09 00:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:49:34.378729031 +0000 UTC m=+5644.689887977" watchObservedRunningTime="2026-01-09 00:49:34.387160588 +0000 UTC m=+5644.698319534" Jan 09 00:49:36 crc kubenswrapper[4945]: I0109 00:49:36.113547 4945 scope.go:117] "RemoveContainer" containerID="cc80cb24ea02ac0ef5c53bbf92f131bf83f2b7c00ede2dce0af809083b1614bd" Jan 09 00:49:36 crc kubenswrapper[4945]: I0109 00:49:36.133420 4945 scope.go:117] "RemoveContainer" containerID="3769760ab2d4020d86abe427bf66a6213bf5c96c602a0384791cd76180d8a031" Jan 09 00:49:36 crc kubenswrapper[4945]: I0109 00:49:36.168432 4945 scope.go:117] "RemoveContainer" containerID="aee23294eadbf328ca138ea4ebbdbdeae9ec0e729f66ef3ee6dc19974784ad63" Jan 09 00:49:36 crc kubenswrapper[4945]: I0109 00:49:36.201531 4945 scope.go:117] "RemoveContainer" containerID="cf108471d9c21386ad45b7dbc3ddc8c7548e85efdd0395b169e572ac96f9be29" Jan 09 00:49:36 crc kubenswrapper[4945]: I0109 00:49:36.236394 4945 scope.go:117] "RemoveContainer" containerID="c57ab41e0d40f2c0f6d1853c2c75003ad2526af66665563936280c1a9112c362" Jan 09 00:49:36 crc kubenswrapper[4945]: I0109 00:49:36.269632 4945 scope.go:117] "RemoveContainer" containerID="cc5d3bd38a3ae510c5ea9981d4ee2a4a3359e7a7bd3f2d0db1434ccccfabe669" Jan 09 00:49:36 crc kubenswrapper[4945]: I0109 00:49:36.302884 4945 scope.go:117] "RemoveContainer" containerID="65888d6dad32b95212993485934cdfe9b5de233ff8b1ac28b6b939849b026055" Jan 09 00:49:36 crc kubenswrapper[4945]: I0109 00:49:36.367304 4945 generic.go:334] "Generic (PLEG): container finished" podID="45e3be85-a173-4a88-a24b-bf711f72530d" containerID="585fab122d0997953f1abe75abdb9eb9b7c1c7a67c35d44a8a9573c564e87e3e" exitCode=0 Jan 09 00:49:36 crc kubenswrapper[4945]: I0109 00:49:36.367384 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-48vcn" event={"ID":"45e3be85-a173-4a88-a24b-bf711f72530d","Type":"ContainerDied","Data":"585fab122d0997953f1abe75abdb9eb9b7c1c7a67c35d44a8a9573c564e87e3e"} Jan 09 00:49:37 crc kubenswrapper[4945]: I0109 00:49:37.886363 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.002957 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-scripts\") pod \"45e3be85-a173-4a88-a24b-bf711f72530d\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.003080 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-credential-keys\") pod \"45e3be85-a173-4a88-a24b-bf711f72530d\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.003218 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-fernet-keys\") pod \"45e3be85-a173-4a88-a24b-bf711f72530d\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.003252 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlkk4\" (UniqueName: \"kubernetes.io/projected/45e3be85-a173-4a88-a24b-bf711f72530d-kube-api-access-wlkk4\") pod \"45e3be85-a173-4a88-a24b-bf711f72530d\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.003302 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-combined-ca-bundle\") pod \"45e3be85-a173-4a88-a24b-bf711f72530d\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.003326 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-config-data\") pod \"45e3be85-a173-4a88-a24b-bf711f72530d\" (UID: \"45e3be85-a173-4a88-a24b-bf711f72530d\") " Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.010633 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "45e3be85-a173-4a88-a24b-bf711f72530d" (UID: "45e3be85-a173-4a88-a24b-bf711f72530d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.012144 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e3be85-a173-4a88-a24b-bf711f72530d-kube-api-access-wlkk4" (OuterVolumeSpecName: "kube-api-access-wlkk4") pod "45e3be85-a173-4a88-a24b-bf711f72530d" (UID: "45e3be85-a173-4a88-a24b-bf711f72530d"). InnerVolumeSpecName "kube-api-access-wlkk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.014086 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-scripts" (OuterVolumeSpecName: "scripts") pod "45e3be85-a173-4a88-a24b-bf711f72530d" (UID: "45e3be85-a173-4a88-a24b-bf711f72530d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.014144 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "45e3be85-a173-4a88-a24b-bf711f72530d" (UID: "45e3be85-a173-4a88-a24b-bf711f72530d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.031059 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-config-data" (OuterVolumeSpecName: "config-data") pod "45e3be85-a173-4a88-a24b-bf711f72530d" (UID: "45e3be85-a173-4a88-a24b-bf711f72530d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.033649 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45e3be85-a173-4a88-a24b-bf711f72530d" (UID: "45e3be85-a173-4a88-a24b-bf711f72530d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.106309 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.106379 4945 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.106400 4945 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.106419 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlkk4\" (UniqueName: \"kubernetes.io/projected/45e3be85-a173-4a88-a24b-bf711f72530d-kube-api-access-wlkk4\") on node \"crc\" DevicePath \"\"" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.106438 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.106457 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e3be85-a173-4a88-a24b-bf711f72530d-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.396030 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-48vcn" event={"ID":"45e3be85-a173-4a88-a24b-bf711f72530d","Type":"ContainerDied","Data":"46e9fa5daec9448622acb355af571927261a68b0b8ca317c6b4eb344fa957e26"} Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.396391 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46e9fa5daec9448622acb355af571927261a68b0b8ca317c6b4eb344fa957e26" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.396098 4945 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-48vcn" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.485086 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-48vcn"] Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.493933 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-48vcn"] Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.575636 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-65469"] Jan 09 00:49:38 crc kubenswrapper[4945]: E0109 00:49:38.576172 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e3be85-a173-4a88-a24b-bf711f72530d" containerName="keystone-bootstrap" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.576199 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e3be85-a173-4a88-a24b-bf711f72530d" containerName="keystone-bootstrap" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.576428 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e3be85-a173-4a88-a24b-bf711f72530d" containerName="keystone-bootstrap" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.577242 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-65469" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.579623 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.579753 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zj4bd" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.580057 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.580097 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.580842 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.585556 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-65469"] Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.716774 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-fernet-keys\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.716839 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m78bg\" (UniqueName: \"kubernetes.io/projected/d18c95da-e9f3-4f04-9d79-a6760f6faa97-kube-api-access-m78bg\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469" Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.716897 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-config-data\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469" 
Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.716934 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-scripts\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.716958 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-combined-ca-bundle\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.717068 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-credential-keys\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.819180 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-fernet-keys\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.819343 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m78bg\" (UniqueName: \"kubernetes.io/projected/d18c95da-e9f3-4f04-9d79-a6760f6faa97-kube-api-access-m78bg\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.819422 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-config-data\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.819509 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-scripts\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.819577 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-combined-ca-bundle\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.819644 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-credential-keys\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.823898 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-fernet-keys\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.823906 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-scripts\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.824294 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-combined-ca-bundle\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.824824 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-config-data\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.837985 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-credential-keys\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.844859 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m78bg\" (UniqueName: \"kubernetes.io/projected/d18c95da-e9f3-4f04-9d79-a6760f6faa97-kube-api-access-m78bg\") pod \"keystone-bootstrap-65469\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") " pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:38 crc kubenswrapper[4945]: I0109 00:49:38.893935 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:39 crc kubenswrapper[4945]: I0109 00:49:39.370148 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-65469"]
Jan 09 00:49:39 crc kubenswrapper[4945]: I0109 00:49:39.405913 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-65469" event={"ID":"d18c95da-e9f3-4f04-9d79-a6760f6faa97","Type":"ContainerStarted","Data":"71f6f067ed944f8a5360a378e7954c720f3698459aa7bc3b223eefc4e5753d7f"}
Jan 09 00:49:40 crc kubenswrapper[4945]: I0109 00:49:40.010111 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45e3be85-a173-4a88-a24b-bf711f72530d" path="/var/lib/kubelet/pods/45e3be85-a173-4a88-a24b-bf711f72530d/volumes"
Jan 09 00:49:40 crc kubenswrapper[4945]: I0109 00:49:40.414725 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-65469" event={"ID":"d18c95da-e9f3-4f04-9d79-a6760f6faa97","Type":"ContainerStarted","Data":"6fc9752a0f70bd62b30e47f44faed8bb7c3ef2a5778362db9059b8ec8ff808a2"}
Jan 09 00:49:40 crc kubenswrapper[4945]: I0109 00:49:40.438832 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-65469" podStartSLOduration=2.438810686 podStartE2EDuration="2.438810686s" podCreationTimestamp="2026-01-09 00:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:49:40.436647803 +0000 UTC m=+5650.747806759" watchObservedRunningTime="2026-01-09 00:49:40.438810686 +0000 UTC m=+5650.749969632"
Jan 09 00:49:41 crc kubenswrapper[4945]: E0109 00:49:41.076421 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cf11cc1_f91b_45eb_b457_b3fbb4b6f072.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cf11cc1_f91b_45eb_b457_b3fbb4b6f072.slice/crio-5cbc581cd14681e8f5e72a8f2c61300ab9d4b02bd6466011541efc02072581cc\": RecentStats: unable to find data in memory cache]"
Jan 09 00:49:41 crc kubenswrapper[4945]: I0109 00:49:41.890349 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-745c4fff85-p6z29"
Jan 09 00:49:41 crc kubenswrapper[4945]: I0109 00:49:41.997401 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c4774c875-vx44d"]
Jan 09 00:49:41 crc kubenswrapper[4945]: I0109 00:49:41.997751 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c4774c875-vx44d" podUID="7ce3dc1d-72b9-4512-8567-f7514125a3cd" containerName="dnsmasq-dns" containerID="cri-o://ac6f1aff184ffab5cd7df992f3dc52b68b0a5661067b5b77f453f9e42f699d38" gracePeriod=10
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.491719 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c4774c875-vx44d"
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.582881 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nrgp\" (UniqueName: \"kubernetes.io/projected/7ce3dc1d-72b9-4512-8567-f7514125a3cd-kube-api-access-6nrgp\") pod \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") "
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.583034 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-ovsdbserver-nb\") pod \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") "
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.583133 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-ovsdbserver-sb\") pod \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") "
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.583177 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-config\") pod \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") "
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.583192 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-dns-svc\") pod \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\" (UID: \"7ce3dc1d-72b9-4512-8567-f7514125a3cd\") "
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.596861 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce3dc1d-72b9-4512-8567-f7514125a3cd-kube-api-access-6nrgp" (OuterVolumeSpecName: "kube-api-access-6nrgp") pod "7ce3dc1d-72b9-4512-8567-f7514125a3cd" (UID: "7ce3dc1d-72b9-4512-8567-f7514125a3cd"). InnerVolumeSpecName "kube-api-access-6nrgp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.627941 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7ce3dc1d-72b9-4512-8567-f7514125a3cd" (UID: "7ce3dc1d-72b9-4512-8567-f7514125a3cd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.634405 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-config" (OuterVolumeSpecName: "config") pod "7ce3dc1d-72b9-4512-8567-f7514125a3cd" (UID: "7ce3dc1d-72b9-4512-8567-f7514125a3cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.646184 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7ce3dc1d-72b9-4512-8567-f7514125a3cd" (UID: "7ce3dc1d-72b9-4512-8567-f7514125a3cd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.650549 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ce3dc1d-72b9-4512-8567-f7514125a3cd" (UID: "7ce3dc1d-72b9-4512-8567-f7514125a3cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.686119 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.686166 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-config\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.686178 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.686187 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nrgp\" (UniqueName: \"kubernetes.io/projected/7ce3dc1d-72b9-4512-8567-f7514125a3cd-kube-api-access-6nrgp\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.686199 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ce3dc1d-72b9-4512-8567-f7514125a3cd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.923939 4945 generic.go:334] "Generic (PLEG): container finished" podID="7ce3dc1d-72b9-4512-8567-f7514125a3cd" containerID="ac6f1aff184ffab5cd7df992f3dc52b68b0a5661067b5b77f453f9e42f699d38" exitCode=0
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.924030 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c4774c875-vx44d" event={"ID":"7ce3dc1d-72b9-4512-8567-f7514125a3cd","Type":"ContainerDied","Data":"ac6f1aff184ffab5cd7df992f3dc52b68b0a5661067b5b77f453f9e42f699d38"}
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.924096 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c4774c875-vx44d" event={"ID":"7ce3dc1d-72b9-4512-8567-f7514125a3cd","Type":"ContainerDied","Data":"8165b53f1c111ce0a182b1dc0490d5eb4df5e8607fb290373d7e8b6636d523f7"}
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.924124 4945 scope.go:117] "RemoveContainer" containerID="ac6f1aff184ffab5cd7df992f3dc52b68b0a5661067b5b77f453f9e42f699d38"
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.924377 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c4774c875-vx44d"
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.957001 4945 scope.go:117] "RemoveContainer" containerID="5162b330ccdcd6a507fd80a1489b804662157ace850fc0c418a7b0ea09a44000"
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.964251 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c4774c875-vx44d"]
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.966097 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c4774c875-vx44d"]
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.989455 4945 scope.go:117] "RemoveContainer" containerID="ac6f1aff184ffab5cd7df992f3dc52b68b0a5661067b5b77f453f9e42f699d38"
Jan 09 00:49:42 crc kubenswrapper[4945]: E0109 00:49:42.990121 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac6f1aff184ffab5cd7df992f3dc52b68b0a5661067b5b77f453f9e42f699d38\": container with ID starting with ac6f1aff184ffab5cd7df992f3dc52b68b0a5661067b5b77f453f9e42f699d38 not found: ID does not exist" containerID="ac6f1aff184ffab5cd7df992f3dc52b68b0a5661067b5b77f453f9e42f699d38"
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.990192 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac6f1aff184ffab5cd7df992f3dc52b68b0a5661067b5b77f453f9e42f699d38"} err="failed to get container status \"ac6f1aff184ffab5cd7df992f3dc52b68b0a5661067b5b77f453f9e42f699d38\": rpc error: code = NotFound desc = could not find container \"ac6f1aff184ffab5cd7df992f3dc52b68b0a5661067b5b77f453f9e42f699d38\": container with ID starting with ac6f1aff184ffab5cd7df992f3dc52b68b0a5661067b5b77f453f9e42f699d38 not found: ID does not exist"
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.990227 4945 scope.go:117] "RemoveContainer" containerID="5162b330ccdcd6a507fd80a1489b804662157ace850fc0c418a7b0ea09a44000"
Jan 09 00:49:42 crc kubenswrapper[4945]: E0109 00:49:42.990801 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5162b330ccdcd6a507fd80a1489b804662157ace850fc0c418a7b0ea09a44000\": container with ID starting with 5162b330ccdcd6a507fd80a1489b804662157ace850fc0c418a7b0ea09a44000 not found: ID does not exist" containerID="5162b330ccdcd6a507fd80a1489b804662157ace850fc0c418a7b0ea09a44000"
Jan 09 00:49:42 crc kubenswrapper[4945]: I0109 00:49:42.990863 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5162b330ccdcd6a507fd80a1489b804662157ace850fc0c418a7b0ea09a44000"} err="failed to get container status \"5162b330ccdcd6a507fd80a1489b804662157ace850fc0c418a7b0ea09a44000\": rpc error: code = NotFound desc = could not find container \"5162b330ccdcd6a507fd80a1489b804662157ace850fc0c418a7b0ea09a44000\": container with ID starting with 5162b330ccdcd6a507fd80a1489b804662157ace850fc0c418a7b0ea09a44000 not found: ID does not exist"
Jan 09 00:49:43 crc kubenswrapper[4945]: I0109 00:49:43.578418 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 00:49:43 crc kubenswrapper[4945]: I0109 00:49:43.578836 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 00:49:43 crc kubenswrapper[4945]: I0109 00:49:43.934860 4945 generic.go:334] "Generic (PLEG): container finished" podID="d18c95da-e9f3-4f04-9d79-a6760f6faa97" containerID="6fc9752a0f70bd62b30e47f44faed8bb7c3ef2a5778362db9059b8ec8ff808a2" exitCode=0
Jan 09 00:49:43 crc kubenswrapper[4945]: I0109 00:49:43.934961 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-65469" event={"ID":"d18c95da-e9f3-4f04-9d79-a6760f6faa97","Type":"ContainerDied","Data":"6fc9752a0f70bd62b30e47f44faed8bb7c3ef2a5778362db9059b8ec8ff808a2"}
Jan 09 00:49:44 crc kubenswrapper[4945]: I0109 00:49:44.011688 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce3dc1d-72b9-4512-8567-f7514125a3cd" path="/var/lib/kubelet/pods/7ce3dc1d-72b9-4512-8567-f7514125a3cd/volumes"
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.325079 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.468508 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-fernet-keys\") pod \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") "
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.468561 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-credential-keys\") pod \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") "
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.468595 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-combined-ca-bundle\") pod \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") "
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.468684 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m78bg\" (UniqueName: \"kubernetes.io/projected/d18c95da-e9f3-4f04-9d79-a6760f6faa97-kube-api-access-m78bg\") pod \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") "
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.468735 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-config-data\") pod \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") "
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.468800 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-scripts\") pod \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\" (UID: \"d18c95da-e9f3-4f04-9d79-a6760f6faa97\") "
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.475751 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d18c95da-e9f3-4f04-9d79-a6760f6faa97" (UID: "d18c95da-e9f3-4f04-9d79-a6760f6faa97"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.475784 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18c95da-e9f3-4f04-9d79-a6760f6faa97-kube-api-access-m78bg" (OuterVolumeSpecName: "kube-api-access-m78bg") pod "d18c95da-e9f3-4f04-9d79-a6760f6faa97" (UID: "d18c95da-e9f3-4f04-9d79-a6760f6faa97"). InnerVolumeSpecName "kube-api-access-m78bg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.477282 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d18c95da-e9f3-4f04-9d79-a6760f6faa97" (UID: "d18c95da-e9f3-4f04-9d79-a6760f6faa97"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.478213 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-scripts" (OuterVolumeSpecName: "scripts") pod "d18c95da-e9f3-4f04-9d79-a6760f6faa97" (UID: "d18c95da-e9f3-4f04-9d79-a6760f6faa97"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.494305 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-config-data" (OuterVolumeSpecName: "config-data") pod "d18c95da-e9f3-4f04-9d79-a6760f6faa97" (UID: "d18c95da-e9f3-4f04-9d79-a6760f6faa97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.515569 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d18c95da-e9f3-4f04-9d79-a6760f6faa97" (UID: "d18c95da-e9f3-4f04-9d79-a6760f6faa97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.570556 4945 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.570596 4945 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.570611 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.570621 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m78bg\" (UniqueName: \"kubernetes.io/projected/d18c95da-e9f3-4f04-9d79-a6760f6faa97-kube-api-access-m78bg\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.570632 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-config-data\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.570640 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d18c95da-e9f3-4f04-9d79-a6760f6faa97-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.954726 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-65469" event={"ID":"d18c95da-e9f3-4f04-9d79-a6760f6faa97","Type":"ContainerDied","Data":"71f6f067ed944f8a5360a378e7954c720f3698459aa7bc3b223eefc4e5753d7f"}
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.955335 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71f6f067ed944f8a5360a378e7954c720f3698459aa7bc3b223eefc4e5753d7f"
Jan 09 00:49:45 crc kubenswrapper[4945]: I0109 00:49:45.955040 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-65469"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.050235 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-74478c8c7f-gzflr"]
Jan 09 00:49:46 crc kubenswrapper[4945]: E0109 00:49:46.050919 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce3dc1d-72b9-4512-8567-f7514125a3cd" containerName="init"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.051087 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce3dc1d-72b9-4512-8567-f7514125a3cd" containerName="init"
Jan 09 00:49:46 crc kubenswrapper[4945]: E0109 00:49:46.051183 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce3dc1d-72b9-4512-8567-f7514125a3cd" containerName="dnsmasq-dns"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.051526 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce3dc1d-72b9-4512-8567-f7514125a3cd" containerName="dnsmasq-dns"
Jan 09 00:49:46 crc kubenswrapper[4945]: E0109 00:49:46.051610 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18c95da-e9f3-4f04-9d79-a6760f6faa97" containerName="keystone-bootstrap"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.051683 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18c95da-e9f3-4f04-9d79-a6760f6faa97" containerName="keystone-bootstrap"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.051950 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18c95da-e9f3-4f04-9d79-a6760f6faa97" containerName="keystone-bootstrap"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.052050 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce3dc1d-72b9-4512-8567-f7514125a3cd" containerName="dnsmasq-dns"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.052886 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.054735 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.055118 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.055646 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zj4bd"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.055727 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.067957 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74478c8c7f-gzflr"]
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.182370 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79837a34-fd63-45a4-9521-387e72c26b24-combined-ca-bundle\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.182685 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/79837a34-fd63-45a4-9521-387e72c26b24-fernet-keys\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.182894 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg482\" (UniqueName: \"kubernetes.io/projected/79837a34-fd63-45a4-9521-387e72c26b24-kube-api-access-qg482\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.183072 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79837a34-fd63-45a4-9521-387e72c26b24-config-data\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.183117 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79837a34-fd63-45a4-9521-387e72c26b24-scripts\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.183137 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/79837a34-fd63-45a4-9521-387e72c26b24-credential-keys\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.284766 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79837a34-fd63-45a4-9521-387e72c26b24-scripts\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.284833 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/79837a34-fd63-45a4-9521-387e72c26b24-credential-keys\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.284958 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79837a34-fd63-45a4-9521-387e72c26b24-combined-ca-bundle\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.285009 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/79837a34-fd63-45a4-9521-387e72c26b24-fernet-keys\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.285076 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg482\" (UniqueName: \"kubernetes.io/projected/79837a34-fd63-45a4-9521-387e72c26b24-kube-api-access-qg482\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.285107 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79837a34-fd63-45a4-9521-387e72c26b24-config-data\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.288954 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79837a34-fd63-45a4-9521-387e72c26b24-scripts\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.289378 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/79837a34-fd63-45a4-9521-387e72c26b24-credential-keys\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.289504 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/79837a34-fd63-45a4-9521-387e72c26b24-fernet-keys\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.292555 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79837a34-fd63-45a4-9521-387e72c26b24-combined-ca-bundle\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.297865 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79837a34-fd63-45a4-9521-387e72c26b24-config-data\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.305545 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg482\" (UniqueName: \"kubernetes.io/projected/79837a34-fd63-45a4-9521-387e72c26b24-kube-api-access-qg482\") pod \"keystone-74478c8c7f-gzflr\" (UID: \"79837a34-fd63-45a4-9521-387e72c26b24\") " pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.369292 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.777132 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-74478c8c7f-gzflr"]
Jan 09 00:49:46 crc kubenswrapper[4945]: W0109 00:49:46.789812 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79837a34_fd63_45a4_9521_387e72c26b24.slice/crio-fa56f9361098844303ae13e5b087805f291c5d6988b2c43419cc64d83e16f697 WatchSource:0}: Error finding container fa56f9361098844303ae13e5b087805f291c5d6988b2c43419cc64d83e16f697: Status 404 returned error can't find the container with id fa56f9361098844303ae13e5b087805f291c5d6988b2c43419cc64d83e16f697
Jan 09 00:49:46 crc kubenswrapper[4945]: I0109 00:49:46.963925 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74478c8c7f-gzflr" event={"ID":"79837a34-fd63-45a4-9521-387e72c26b24","Type":"ContainerStarted","Data":"fa56f9361098844303ae13e5b087805f291c5d6988b2c43419cc64d83e16f697"}
Jan 09 00:49:47 crc kubenswrapper[4945]: I0109 00:49:47.989601 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-74478c8c7f-gzflr" event={"ID":"79837a34-fd63-45a4-9521-387e72c26b24","Type":"ContainerStarted","Data":"0a87992c7f1ad1c1cdbf2ffcb054ab2088f490ac4a1b3567d75b3e7905df9182"}
Jan 09 00:49:47 crc kubenswrapper[4945]: I0109 00:49:47.989900 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:49:48 crc kubenswrapper[4945]: I0109 00:49:48.019724 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-74478c8c7f-gzflr" podStartSLOduration=2.019698604 podStartE2EDuration="2.019698604s" podCreationTimestamp="2026-01-09 00:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:49:48.010089188 +0000 UTC m=+5658.321248174" watchObservedRunningTime="2026-01-09 00:49:48.019698604 +0000 UTC m=+5658.330857550"
Jan 09 00:49:51 crc kubenswrapper[4945]: E0109 00:49:51.271593 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cf11cc1_f91b_45eb_b457_b3fbb4b6f072.slice/crio-5cbc581cd14681e8f5e72a8f2c61300ab9d4b02bd6466011541efc02072581cc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cf11cc1_f91b_45eb_b457_b3fbb4b6f072.slice\": RecentStats: unable to find data in memory cache]"
Jan 09 00:50:01 crc kubenswrapper[4945]: E0109 00:50:01.539781 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cf11cc1_f91b_45eb_b457_b3fbb4b6f072.slice/crio-5cbc581cd14681e8f5e72a8f2c61300ab9d4b02bd6466011541efc02072581cc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cf11cc1_f91b_45eb_b457_b3fbb4b6f072.slice\": RecentStats: unable to find data in memory cache]"
Jan 09 00:50:11 crc kubenswrapper[4945]: E0109 00:50:11.791908 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cf11cc1_f91b_45eb_b457_b3fbb4b6f072.slice/crio-5cbc581cd14681e8f5e72a8f2c61300ab9d4b02bd6466011541efc02072581cc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cf11cc1_f91b_45eb_b457_b3fbb4b6f072.slice\": RecentStats: unable to find data in memory cache]"
Jan 09 00:50:13 crc kubenswrapper[4945]: I0109 00:50:13.578788 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 00:50:13 crc kubenswrapper[4945]: I0109 00:50:13.579273 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 00:50:17 crc kubenswrapper[4945]: I0109 00:50:17.877140 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-74478c8c7f-gzflr"
Jan 09 00:50:21 crc kubenswrapper[4945]: I0109 00:50:21.064246 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 09 00:50:21 crc kubenswrapper[4945]: I0109 00:50:21.065750 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 09 00:50:21 crc kubenswrapper[4945]: I0109 00:50:21.068431 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 09 00:50:21 crc kubenswrapper[4945]: I0109 00:50:21.068690 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 09 00:50:21 crc kubenswrapper[4945]: I0109 00:50:21.068944 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-pp27c"
Jan 09 00:50:21 crc kubenswrapper[4945]: I0109 00:50:21.074955 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 09 00:50:21 crc kubenswrapper[4945]: I0109 00:50:21.226334 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90ccb708-384f-414c-b03b-676f19656e35-openstack-config-secret\") pod \"openstackclient\" (UID: \"90ccb708-384f-414c-b03b-676f19656e35\") " pod="openstack/openstackclient"
Jan 09 00:50:21 crc kubenswrapper[4945]: I0109 00:50:21.226435 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m52k9\" (UniqueName: \"kubernetes.io/projected/90ccb708-384f-414c-b03b-676f19656e35-kube-api-access-m52k9\") pod \"openstackclient\" (UID: \"90ccb708-384f-414c-b03b-676f19656e35\") " pod="openstack/openstackclient"
Jan 09 00:50:21 crc kubenswrapper[4945]: I0109 00:50:21.226482 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90ccb708-384f-414c-b03b-676f19656e35-openstack-config\") pod \"openstackclient\" (UID: \"90ccb708-384f-414c-b03b-676f19656e35\") " pod="openstack/openstackclient"
Jan 09 00:50:21 crc kubenswrapper[4945]: I0109 00:50:21.328318 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90ccb708-384f-414c-b03b-676f19656e35-openstack-config-secret\") pod \"openstackclient\" (UID: \"90ccb708-384f-414c-b03b-676f19656e35\") " pod="openstack/openstackclient"
Jan 09 00:50:21 crc kubenswrapper[4945]: I0109 00:50:21.328474 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m52k9\" (UniqueName: \"kubernetes.io/projected/90ccb708-384f-414c-b03b-676f19656e35-kube-api-access-m52k9\") pod \"openstackclient\" (UID: \"90ccb708-384f-414c-b03b-676f19656e35\") " pod="openstack/openstackclient"
Jan 09 00:50:21 crc kubenswrapper[4945]: I0109 00:50:21.328523 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90ccb708-384f-414c-b03b-676f19656e35-openstack-config\") pod \"openstackclient\" (UID: \"90ccb708-384f-414c-b03b-676f19656e35\") " pod="openstack/openstackclient"
Jan 09 00:50:21 crc kubenswrapper[4945]: I0109 00:50:21.329476 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90ccb708-384f-414c-b03b-676f19656e35-openstack-config\") pod \"openstackclient\" (UID: \"90ccb708-384f-414c-b03b-676f19656e35\") " pod="openstack/openstackclient"
Jan 09 00:50:21 crc kubenswrapper[4945]: I0109 00:50:21.334666 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90ccb708-384f-414c-b03b-676f19656e35-openstack-config-secret\") pod \"openstackclient\" (UID: \"90ccb708-384f-414c-b03b-676f19656e35\") " pod="openstack/openstackclient"
Jan 09 00:50:21 crc kubenswrapper[4945]: I0109 00:50:21.346924 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m52k9\" (UniqueName: \"kubernetes.io/projected/90ccb708-384f-414c-b03b-676f19656e35-kube-api-access-m52k9\") pod \"openstackclient\" (UID: \"90ccb708-384f-414c-b03b-676f19656e35\") " pod="openstack/openstackclient"
Jan 09 00:50:21 crc kubenswrapper[4945]: I0109 00:50:21.388085 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 09 00:50:21 crc kubenswrapper[4945]: I0109 00:50:21.843315 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 09 00:50:22 crc kubenswrapper[4945]: E0109 00:50:22.009855 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cf11cc1_f91b_45eb_b457_b3fbb4b6f072.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cf11cc1_f91b_45eb_b457_b3fbb4b6f072.slice/crio-5cbc581cd14681e8f5e72a8f2c61300ab9d4b02bd6466011541efc02072581cc\": RecentStats: unable to find data in memory cache]"
Jan 09 00:50:22 crc kubenswrapper[4945]: I0109 00:50:22.313847 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"90ccb708-384f-414c-b03b-676f19656e35","Type":"ContainerStarted","Data":"09c862520ef1555654e18fac009e4f5512b9a7c1d5dd268896f167c4e7e2e45e"}
Jan 09 00:50:22 crc kubenswrapper[4945]: I0109 00:50:22.313906 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"90ccb708-384f-414c-b03b-676f19656e35","Type":"ContainerStarted","Data":"9d32caa304c860b21c6a75a860fa96972bcb50ec5ad4431119c41581ea148079"}
Jan 09 00:50:22 crc kubenswrapper[4945]: I0109 00:50:22.329735 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.329713062 podStartE2EDuration="1.329713062s" podCreationTimestamp="2026-01-09 00:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:50:22.326909924 +0000 UTC m=+5692.638068880" watchObservedRunningTime="2026-01-09 00:50:22.329713062 +0000 UTC m=+5692.640872018"
Jan 09 00:50:30 crc kubenswrapper[4945]: E0109 00:50:30.049879 4945 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1cbf09c23bbb3f433fecdb95486521bb3e1ae71a9a1ff09d9cd884a133356207/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1cbf09c23bbb3f433fecdb95486521bb3e1ae71a9a1ff09d9cd884a133356207/diff: no such file or directory, extraDiskErr:
Jan 09 00:50:43 crc kubenswrapper[4945]: I0109 00:50:43.578566 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 00:50:43 crc kubenswrapper[4945]: I0109 00:50:43.579098 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 00:50:43 crc kubenswrapper[4945]: I0109 00:50:43.579149 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95"
Jan 09 00:50:43 crc kubenswrapper[4945]: I0109 00:50:43.579877 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 00:50:43 crc kubenswrapper[4945]: I0109 00:50:43.579934 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb" gracePeriod=600
Jan 09 00:50:43 crc kubenswrapper[4945]: E0109 00:50:43.700198 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:50:44 crc kubenswrapper[4945]: I0109 00:50:44.504216 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb" exitCode=0
Jan 09 00:50:44 crc kubenswrapper[4945]: I0109 00:50:44.504287 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb"}
Jan 09 00:50:44 crc kubenswrapper[4945]: I0109 00:50:44.504607 4945 scope.go:117] "RemoveContainer" containerID="f956e706e3b54964ca3dda380ed40fcf584aa07099065dbe710da5a117358406"
Jan 09 00:50:44 crc kubenswrapper[4945]: I0109 00:50:44.505560 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb"
Jan 09 00:50:44 crc kubenswrapper[4945]: E0109 00:50:44.508693 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:50:58 crc kubenswrapper[4945]: I0109 00:50:58.000720 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb"
Jan 09 00:50:58 crc kubenswrapper[4945]: E0109 00:50:58.002652 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:51:02 crc kubenswrapper[4945]: E0109 00:51:02.581829 4945 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.74:54032->38.102.83.74:45665: write tcp 38.102.83.74:54032->38.102.83.74:45665: write: broken pipe
Jan 09 00:51:03 crc kubenswrapper[4945]: I0109 00:51:03.052542 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-24dgv"]
Jan 09 00:51:03 crc kubenswrapper[4945]: I0109 00:51:03.062050 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-24dgv"]
Jan 09 00:51:04 crc kubenswrapper[4945]: I0109 00:51:04.009047 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ce68989-d35b-4013-a5f3-dd43fdfa650c" path="/var/lib/kubelet/pods/6ce68989-d35b-4013-a5f3-dd43fdfa650c/volumes"
Jan 09 00:51:13 crc kubenswrapper[4945]: I0109 00:51:12.999901 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb"
Jan 09 00:51:13 crc kubenswrapper[4945]: E0109 00:51:13.000778 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:51:24 crc kubenswrapper[4945]: I0109 00:51:24.000946 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb"
Jan 09 00:51:24 crc kubenswrapper[4945]: E0109 00:51:24.002762 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:51:32 crc kubenswrapper[4945]: E0109 00:51:32.841372 4945 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.74:37812->38.102.83.74:45665: write tcp 38.102.83.74:37812->38.102.83.74:45665: write: connection reset by peer
Jan 09 00:51:36 crc kubenswrapper[4945]: I0109 00:51:36.002445 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb"
Jan 09 00:51:36 crc kubenswrapper[4945]: E0109 00:51:36.004251 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:51:36 crc kubenswrapper[4945]: I0109 00:51:36.486191 4945 scope.go:117] "RemoveContainer" containerID="6858dbe110c02a5f6d6cee4a5264f29480832a4e465548abe029dddc8ffd4722"
Jan 09 00:51:47 crc kubenswrapper[4945]: I0109 00:51:47.000457 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb"
Jan 09 00:51:47 crc kubenswrapper[4945]: E0109 00:51:47.001263 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:51:55 crc kubenswrapper[4945]: I0109 00:51:55.467641 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4mkmq"]
Jan 09 00:51:55 crc kubenswrapper[4945]: I0109 00:51:55.470397 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4mkmq"
Jan 09 00:51:55 crc kubenswrapper[4945]: I0109 00:51:55.484415 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4mkmq"]
Jan 09 00:51:55 crc kubenswrapper[4945]: I0109 00:51:55.567121 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c0316e8-a59f-4811-8d61-141ff3aacaa9-catalog-content\") pod \"redhat-marketplace-4mkmq\" (UID: \"6c0316e8-a59f-4811-8d61-141ff3aacaa9\") " pod="openshift-marketplace/redhat-marketplace-4mkmq"
Jan 09 00:51:55 crc kubenswrapper[4945]: I0109 00:51:55.567174 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdt95\" (UniqueName: \"kubernetes.io/projected/6c0316e8-a59f-4811-8d61-141ff3aacaa9-kube-api-access-cdt95\") pod \"redhat-marketplace-4mkmq\" (UID: \"6c0316e8-a59f-4811-8d61-141ff3aacaa9\") " pod="openshift-marketplace/redhat-marketplace-4mkmq"
Jan 09 00:51:55 crc kubenswrapper[4945]: I0109 00:51:55.567202 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c0316e8-a59f-4811-8d61-141ff3aacaa9-utilities\") pod \"redhat-marketplace-4mkmq\" (UID: \"6c0316e8-a59f-4811-8d61-141ff3aacaa9\") " pod="openshift-marketplace/redhat-marketplace-4mkmq"
Jan 09 00:51:55 crc kubenswrapper[4945]: I0109 00:51:55.669236 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c0316e8-a59f-4811-8d61-141ff3aacaa9-catalog-content\") pod \"redhat-marketplace-4mkmq\" (UID: \"6c0316e8-a59f-4811-8d61-141ff3aacaa9\") " pod="openshift-marketplace/redhat-marketplace-4mkmq"
Jan 09 00:51:55 crc kubenswrapper[4945]: I0109 00:51:55.669295 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdt95\" (UniqueName: \"kubernetes.io/projected/6c0316e8-a59f-4811-8d61-141ff3aacaa9-kube-api-access-cdt95\") pod \"redhat-marketplace-4mkmq\" (UID: \"6c0316e8-a59f-4811-8d61-141ff3aacaa9\") " pod="openshift-marketplace/redhat-marketplace-4mkmq"
Jan 09 00:51:55 crc kubenswrapper[4945]: I0109 00:51:55.669320 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c0316e8-a59f-4811-8d61-141ff3aacaa9-utilities\") pod \"redhat-marketplace-4mkmq\" (UID: \"6c0316e8-a59f-4811-8d61-141ff3aacaa9\") " pod="openshift-marketplace/redhat-marketplace-4mkmq"
Jan 09 00:51:55 crc kubenswrapper[4945]: I0109 00:51:55.669766 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c0316e8-a59f-4811-8d61-141ff3aacaa9-catalog-content\") pod \"redhat-marketplace-4mkmq\" (UID: \"6c0316e8-a59f-4811-8d61-141ff3aacaa9\") " pod="openshift-marketplace/redhat-marketplace-4mkmq"
Jan 09 00:51:55 crc kubenswrapper[4945]: I0109 00:51:55.669836 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c0316e8-a59f-4811-8d61-141ff3aacaa9-utilities\") pod \"redhat-marketplace-4mkmq\" (UID: \"6c0316e8-a59f-4811-8d61-141ff3aacaa9\") " pod="openshift-marketplace/redhat-marketplace-4mkmq"
Jan 09 00:51:55 crc kubenswrapper[4945]: I0109 00:51:55.692054 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdt95\" (UniqueName: \"kubernetes.io/projected/6c0316e8-a59f-4811-8d61-141ff3aacaa9-kube-api-access-cdt95\") pod \"redhat-marketplace-4mkmq\" (UID: \"6c0316e8-a59f-4811-8d61-141ff3aacaa9\") " pod="openshift-marketplace/redhat-marketplace-4mkmq"
Jan 09 00:51:55 crc kubenswrapper[4945]: I0109 00:51:55.792506 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4mkmq"
Jan 09 00:51:56 crc kubenswrapper[4945]: I0109 00:51:56.250361 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4mkmq"]
Jan 09 00:51:57 crc kubenswrapper[4945]: I0109 00:51:57.108515 4945 generic.go:334] "Generic (PLEG): container finished" podID="6c0316e8-a59f-4811-8d61-141ff3aacaa9" containerID="2a6d0e3bd3d80a921b755dd7e21d8b8cf8dff04c3cf6054b31f5eca2adadd538" exitCode=0
Jan 09 00:51:57 crc kubenswrapper[4945]: I0109 00:51:57.108595 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mkmq" event={"ID":"6c0316e8-a59f-4811-8d61-141ff3aacaa9","Type":"ContainerDied","Data":"2a6d0e3bd3d80a921b755dd7e21d8b8cf8dff04c3cf6054b31f5eca2adadd538"}
Jan 09 00:51:57 crc kubenswrapper[4945]: I0109 00:51:57.108902 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mkmq" event={"ID":"6c0316e8-a59f-4811-8d61-141ff3aacaa9","Type":"ContainerStarted","Data":"0718878b0b23bac5e48fe01cf605716a50c42e0ccc95e7dfe1198eeba50d0727"}
Jan 09 00:51:58 crc kubenswrapper[4945]: I0109 00:51:58.117668 4945 generic.go:334] "Generic (PLEG): container finished" podID="6c0316e8-a59f-4811-8d61-141ff3aacaa9" containerID="4dc2881a53dece5fba9796f88dbd55b03c191151d6cabc46ccd4465799048e14" exitCode=0
Jan 09 00:51:58 crc kubenswrapper[4945]: I0109 00:51:58.117732 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mkmq" event={"ID":"6c0316e8-a59f-4811-8d61-141ff3aacaa9","Type":"ContainerDied","Data":"4dc2881a53dece5fba9796f88dbd55b03c191151d6cabc46ccd4465799048e14"}
Jan 09 00:51:58 crc kubenswrapper[4945]: I0109 00:51:58.928559 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-xndkb"]
Jan 09 00:51:58 crc kubenswrapper[4945]: I0109 00:51:58.930245 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xndkb"
Jan 09 00:51:58 crc kubenswrapper[4945]: I0109 00:51:58.936920 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xndkb"]
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.022743 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b819698e-6f53-4cab-94ed-b8cf4ab3602c-operator-scripts\") pod \"barbican-db-create-xndkb\" (UID: \"b819698e-6f53-4cab-94ed-b8cf4ab3602c\") " pod="openstack/barbican-db-create-xndkb"
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.022823 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2zc2\" (UniqueName: \"kubernetes.io/projected/b819698e-6f53-4cab-94ed-b8cf4ab3602c-kube-api-access-w2zc2\") pod \"barbican-db-create-xndkb\" (UID: \"b819698e-6f53-4cab-94ed-b8cf4ab3602c\") " pod="openstack/barbican-db-create-xndkb"
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.032614 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3f2a-account-create-update-xtdp9"]
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.033984 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3f2a-account-create-update-xtdp9"
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.035891 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.049947 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3f2a-account-create-update-xtdp9"]
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.124883 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b819698e-6f53-4cab-94ed-b8cf4ab3602c-operator-scripts\") pod \"barbican-db-create-xndkb\" (UID: \"b819698e-6f53-4cab-94ed-b8cf4ab3602c\") " pod="openstack/barbican-db-create-xndkb"
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.124953 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c23d7277-7424-4f34-a337-23ed0b080c65-operator-scripts\") pod \"barbican-3f2a-account-create-update-xtdp9\" (UID: \"c23d7277-7424-4f34-a337-23ed0b080c65\") " pod="openstack/barbican-3f2a-account-create-update-xtdp9"
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.125020 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2zc2\" (UniqueName: \"kubernetes.io/projected/b819698e-6f53-4cab-94ed-b8cf4ab3602c-kube-api-access-w2zc2\") pod \"barbican-db-create-xndkb\" (UID: \"b819698e-6f53-4cab-94ed-b8cf4ab3602c\") " pod="openstack/barbican-db-create-xndkb"
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.125150 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6v9g\" (UniqueName: \"kubernetes.io/projected/c23d7277-7424-4f34-a337-23ed0b080c65-kube-api-access-r6v9g\") pod \"barbican-3f2a-account-create-update-xtdp9\" (UID: \"c23d7277-7424-4f34-a337-23ed0b080c65\") " pod="openstack/barbican-3f2a-account-create-update-xtdp9"
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.127023 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b819698e-6f53-4cab-94ed-b8cf4ab3602c-operator-scripts\") pod \"barbican-db-create-xndkb\" (UID: \"b819698e-6f53-4cab-94ed-b8cf4ab3602c\") " pod="openstack/barbican-db-create-xndkb"
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.127283 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mkmq" event={"ID":"6c0316e8-a59f-4811-8d61-141ff3aacaa9","Type":"ContainerStarted","Data":"921ae386c1d1bc54894c60700ec88988bea71a98b322eeef3c4b56e1290375b8"}
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.145483 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2zc2\" (UniqueName: \"kubernetes.io/projected/b819698e-6f53-4cab-94ed-b8cf4ab3602c-kube-api-access-w2zc2\") pod \"barbican-db-create-xndkb\" (UID: \"b819698e-6f53-4cab-94ed-b8cf4ab3602c\") " pod="openstack/barbican-db-create-xndkb"
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.146625 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4mkmq" podStartSLOduration=2.712833191 podStartE2EDuration="4.146602585s" podCreationTimestamp="2026-01-09 00:51:55 +0000 UTC" firstStartedPulling="2026-01-09 00:51:57.112152268 +0000 UTC m=+5787.423311214" lastFinishedPulling="2026-01-09 00:51:58.545921662 +0000 UTC m=+5788.857080608" observedRunningTime="2026-01-09 00:51:59.143580491 +0000 UTC m=+5789.454739457" watchObservedRunningTime="2026-01-09 00:51:59.146602585 +0000 UTC m=+5789.457761531"
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.226531 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c23d7277-7424-4f34-a337-23ed0b080c65-operator-scripts\") pod \"barbican-3f2a-account-create-update-xtdp9\" (UID: \"c23d7277-7424-4f34-a337-23ed0b080c65\") " pod="openstack/barbican-3f2a-account-create-update-xtdp9"
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.226903 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6v9g\" (UniqueName: \"kubernetes.io/projected/c23d7277-7424-4f34-a337-23ed0b080c65-kube-api-access-r6v9g\") pod \"barbican-3f2a-account-create-update-xtdp9\" (UID: \"c23d7277-7424-4f34-a337-23ed0b080c65\") " pod="openstack/barbican-3f2a-account-create-update-xtdp9"
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.227311 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c23d7277-7424-4f34-a337-23ed0b080c65-operator-scripts\") pod \"barbican-3f2a-account-create-update-xtdp9\" (UID: \"c23d7277-7424-4f34-a337-23ed0b080c65\") " pod="openstack/barbican-3f2a-account-create-update-xtdp9"
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.246567 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6v9g\" (UniqueName: \"kubernetes.io/projected/c23d7277-7424-4f34-a337-23ed0b080c65-kube-api-access-r6v9g\") pod \"barbican-3f2a-account-create-update-xtdp9\" (UID: \"c23d7277-7424-4f34-a337-23ed0b080c65\") " pod="openstack/barbican-3f2a-account-create-update-xtdp9"
Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.297196 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xndkb" Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.350303 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3f2a-account-create-update-xtdp9" Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.768101 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xndkb"] Jan 09 00:51:59 crc kubenswrapper[4945]: W0109 00:51:59.773206 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb819698e_6f53_4cab_94ed_b8cf4ab3602c.slice/crio-17fab173c45c292abd641083e05412d72a3bd62addc8dccaa390ef0011373181 WatchSource:0}: Error finding container 17fab173c45c292abd641083e05412d72a3bd62addc8dccaa390ef0011373181: Status 404 returned error can't find the container with id 17fab173c45c292abd641083e05412d72a3bd62addc8dccaa390ef0011373181 Jan 09 00:51:59 crc kubenswrapper[4945]: I0109 00:51:59.868489 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3f2a-account-create-update-xtdp9"] Jan 09 00:52:00 crc kubenswrapper[4945]: I0109 00:52:00.138541 4945 generic.go:334] "Generic (PLEG): container finished" podID="b819698e-6f53-4cab-94ed-b8cf4ab3602c" containerID="84871ac405ac6a591adb1255fc22f754470245858c2f906762544b04094c0c4f" exitCode=0 Jan 09 00:52:00 crc kubenswrapper[4945]: I0109 00:52:00.138615 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xndkb" event={"ID":"b819698e-6f53-4cab-94ed-b8cf4ab3602c","Type":"ContainerDied","Data":"84871ac405ac6a591adb1255fc22f754470245858c2f906762544b04094c0c4f"} Jan 09 00:52:00 crc kubenswrapper[4945]: I0109 00:52:00.138646 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xndkb" event={"ID":"b819698e-6f53-4cab-94ed-b8cf4ab3602c","Type":"ContainerStarted","Data":"17fab173c45c292abd641083e05412d72a3bd62addc8dccaa390ef0011373181"} Jan 09 00:52:00 crc kubenswrapper[4945]: I0109 00:52:00.142261 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3f2a-account-create-update-xtdp9" event={"ID":"c23d7277-7424-4f34-a337-23ed0b080c65","Type":"ContainerStarted","Data":"d31ac7b9c59e8b7a00d48f0efe199a07b4eea9fd429dc2b7910c8e968d807c9d"} Jan 09 00:52:00 crc kubenswrapper[4945]: I0109 00:52:00.142330 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3f2a-account-create-update-xtdp9" event={"ID":"c23d7277-7424-4f34-a337-23ed0b080c65","Type":"ContainerStarted","Data":"ed6bb12c769defe99bab65fc88032291ce4df49666e610427a98504378c0724b"} Jan 09 00:52:00 crc kubenswrapper[4945]: I0109 00:52:00.188276 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-3f2a-account-create-update-xtdp9" podStartSLOduration=1.188245867 podStartE2EDuration="1.188245867s" podCreationTimestamp="2026-01-09 00:51:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:52:00.184657859 +0000 UTC m=+5790.495816815" watchObservedRunningTime="2026-01-09 00:52:00.188245867 +0000 UTC m=+5790.499404813" Jan 09 00:52:01 crc kubenswrapper[4945]: I0109 00:52:01.000661 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb" Jan 09 00:52:01 crc kubenswrapper[4945]: E0109 00:52:01.001296 4945 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:52:01 crc kubenswrapper[4945]: I0109 00:52:01.153004 4945 generic.go:334] "Generic (PLEG): container finished" podID="c23d7277-7424-4f34-a337-23ed0b080c65" containerID="d31ac7b9c59e8b7a00d48f0efe199a07b4eea9fd429dc2b7910c8e968d807c9d" exitCode=0 Jan 09 00:52:01 crc kubenswrapper[4945]: I0109 00:52:01.153105 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3f2a-account-create-update-xtdp9" event={"ID":"c23d7277-7424-4f34-a337-23ed0b080c65","Type":"ContainerDied","Data":"d31ac7b9c59e8b7a00d48f0efe199a07b4eea9fd429dc2b7910c8e968d807c9d"} Jan 09 00:52:01 crc kubenswrapper[4945]: I0109 00:52:01.496922 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xndkb" Jan 09 00:52:01 crc kubenswrapper[4945]: I0109 00:52:01.563701 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2zc2\" (UniqueName: \"kubernetes.io/projected/b819698e-6f53-4cab-94ed-b8cf4ab3602c-kube-api-access-w2zc2\") pod \"b819698e-6f53-4cab-94ed-b8cf4ab3602c\" (UID: \"b819698e-6f53-4cab-94ed-b8cf4ab3602c\") " Jan 09 00:52:01 crc kubenswrapper[4945]: I0109 00:52:01.563830 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b819698e-6f53-4cab-94ed-b8cf4ab3602c-operator-scripts\") pod \"b819698e-6f53-4cab-94ed-b8cf4ab3602c\" (UID: \"b819698e-6f53-4cab-94ed-b8cf4ab3602c\") " Jan 09 00:52:01 crc kubenswrapper[4945]: I0109 00:52:01.564783 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b819698e-6f53-4cab-94ed-b8cf4ab3602c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b819698e-6f53-4cab-94ed-b8cf4ab3602c" (UID: "b819698e-6f53-4cab-94ed-b8cf4ab3602c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:52:01 crc kubenswrapper[4945]: I0109 00:52:01.569687 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b819698e-6f53-4cab-94ed-b8cf4ab3602c-kube-api-access-w2zc2" (OuterVolumeSpecName: "kube-api-access-w2zc2") pod "b819698e-6f53-4cab-94ed-b8cf4ab3602c" (UID: "b819698e-6f53-4cab-94ed-b8cf4ab3602c"). InnerVolumeSpecName "kube-api-access-w2zc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:52:01 crc kubenswrapper[4945]: I0109 00:52:01.665224 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2zc2\" (UniqueName: \"kubernetes.io/projected/b819698e-6f53-4cab-94ed-b8cf4ab3602c-kube-api-access-w2zc2\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:01 crc kubenswrapper[4945]: I0109 00:52:01.665256 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b819698e-6f53-4cab-94ed-b8cf4ab3602c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:02 crc kubenswrapper[4945]: I0109 00:52:02.163363 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xndkb" Jan 09 00:52:02 crc kubenswrapper[4945]: I0109 00:52:02.163484 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xndkb" event={"ID":"b819698e-6f53-4cab-94ed-b8cf4ab3602c","Type":"ContainerDied","Data":"17fab173c45c292abd641083e05412d72a3bd62addc8dccaa390ef0011373181"} Jan 09 00:52:02 crc kubenswrapper[4945]: I0109 00:52:02.164188 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17fab173c45c292abd641083e05412d72a3bd62addc8dccaa390ef0011373181" Jan 09 00:52:02 crc kubenswrapper[4945]: I0109 00:52:02.556280 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3f2a-account-create-update-xtdp9" Jan 09 00:52:02 crc kubenswrapper[4945]: I0109 00:52:02.600091 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6v9g\" (UniqueName: \"kubernetes.io/projected/c23d7277-7424-4f34-a337-23ed0b080c65-kube-api-access-r6v9g\") pod \"c23d7277-7424-4f34-a337-23ed0b080c65\" (UID: \"c23d7277-7424-4f34-a337-23ed0b080c65\") " Jan 09 00:52:02 crc kubenswrapper[4945]: I0109 00:52:02.600281 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c23d7277-7424-4f34-a337-23ed0b080c65-operator-scripts\") pod \"c23d7277-7424-4f34-a337-23ed0b080c65\" (UID: \"c23d7277-7424-4f34-a337-23ed0b080c65\") " Jan 09 00:52:02 crc kubenswrapper[4945]: I0109 00:52:02.601138 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c23d7277-7424-4f34-a337-23ed0b080c65-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c23d7277-7424-4f34-a337-23ed0b080c65" (UID: "c23d7277-7424-4f34-a337-23ed0b080c65"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:52:02 crc kubenswrapper[4945]: I0109 00:52:02.605482 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c23d7277-7424-4f34-a337-23ed0b080c65-kube-api-access-r6v9g" (OuterVolumeSpecName: "kube-api-access-r6v9g") pod "c23d7277-7424-4f34-a337-23ed0b080c65" (UID: "c23d7277-7424-4f34-a337-23ed0b080c65"). InnerVolumeSpecName "kube-api-access-r6v9g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:52:02 crc kubenswrapper[4945]: I0109 00:52:02.701855 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c23d7277-7424-4f34-a337-23ed0b080c65-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:02 crc kubenswrapper[4945]: I0109 00:52:02.701890 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6v9g\" (UniqueName: \"kubernetes.io/projected/c23d7277-7424-4f34-a337-23ed0b080c65-kube-api-access-r6v9g\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:03 crc kubenswrapper[4945]: I0109 00:52:03.175456 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3f2a-account-create-update-xtdp9" event={"ID":"c23d7277-7424-4f34-a337-23ed0b080c65","Type":"ContainerDied","Data":"ed6bb12c769defe99bab65fc88032291ce4df49666e610427a98504378c0724b"} Jan 09 00:52:03 crc kubenswrapper[4945]: I0109 00:52:03.175834 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed6bb12c769defe99bab65fc88032291ce4df49666e610427a98504378c0724b" Jan 09 00:52:03 crc kubenswrapper[4945]: I0109 00:52:03.175517 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3f2a-account-create-update-xtdp9" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.403693 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-kldbw"] Jan 09 00:52:04 crc kubenswrapper[4945]: E0109 00:52:04.404078 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b819698e-6f53-4cab-94ed-b8cf4ab3602c" containerName="mariadb-database-create" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.404091 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b819698e-6f53-4cab-94ed-b8cf4ab3602c" containerName="mariadb-database-create" Jan 09 00:52:04 crc kubenswrapper[4945]: E0109 00:52:04.404122 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c23d7277-7424-4f34-a337-23ed0b080c65" containerName="mariadb-account-create-update" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.404128 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c23d7277-7424-4f34-a337-23ed0b080c65" containerName="mariadb-account-create-update" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.404289 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="c23d7277-7424-4f34-a337-23ed0b080c65" containerName="mariadb-account-create-update" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.404308 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="b819698e-6f53-4cab-94ed-b8cf4ab3602c" containerName="mariadb-database-create" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.404824 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-kldbw" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.408052 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.408065 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-v95dt" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.414623 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kldbw"] Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.537150 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-combined-ca-bundle\") pod \"barbican-db-sync-kldbw\" (UID: \"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d\") " pod="openstack/barbican-db-sync-kldbw" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.537209 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6jzs\" (UniqueName: \"kubernetes.io/projected/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-kube-api-access-r6jzs\") pod \"barbican-db-sync-kldbw\" (UID: \"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d\") " pod="openstack/barbican-db-sync-kldbw" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.537542 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-db-sync-config-data\") pod \"barbican-db-sync-kldbw\" (UID: \"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d\") " pod="openstack/barbican-db-sync-kldbw" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.639672 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-db-sync-config-data\") pod \"barbican-db-sync-kldbw\" (UID: \"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d\") " pod="openstack/barbican-db-sync-kldbw" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.639746 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-combined-ca-bundle\") pod \"barbican-db-sync-kldbw\" (UID: \"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d\") " pod="openstack/barbican-db-sync-kldbw" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.639793 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6jzs\" (UniqueName: \"kubernetes.io/projected/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-kube-api-access-r6jzs\") pod \"barbican-db-sync-kldbw\" (UID: \"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d\") " pod="openstack/barbican-db-sync-kldbw" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.644469 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-db-sync-config-data\") pod \"barbican-db-sync-kldbw\" (UID: \"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d\") " pod="openstack/barbican-db-sync-kldbw" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.644850 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-combined-ca-bundle\") pod \"barbican-db-sync-kldbw\" (UID: \"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d\") " pod="openstack/barbican-db-sync-kldbw" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.655599 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6jzs\" (UniqueName: \"kubernetes.io/projected/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-kube-api-access-r6jzs\") pod \"barbican-db-sync-kldbw\" (UID: \"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d\") " pod="openstack/barbican-db-sync-kldbw" Jan 09 00:52:04 crc kubenswrapper[4945]: I0109 00:52:04.722760 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kldbw" Jan 09 00:52:05 crc kubenswrapper[4945]: I0109 00:52:05.157477 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kldbw"] Jan 09 00:52:05 crc kubenswrapper[4945]: I0109 00:52:05.191314 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kldbw" event={"ID":"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d","Type":"ContainerStarted","Data":"63eeb6f22be238181050ebf70964acfd6dbf16263f0440ef41001d99903ff1aa"} Jan 09 00:52:05 crc kubenswrapper[4945]: I0109 00:52:05.792944 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4mkmq" Jan 09 00:52:05 crc kubenswrapper[4945]: I0109 00:52:05.793351 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4mkmq" Jan 09 00:52:05 crc kubenswrapper[4945]: I0109 00:52:05.863449 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4mkmq" Jan 09 00:52:06 crc kubenswrapper[4945]: I0109 00:52:06.200027 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kldbw" event={"ID":"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d","Type":"ContainerStarted","Data":"f1428fe84dec5d10e47cc7b38cf2fb23c5278741b4aafbfb4ccbf4035a9ea4ed"} Jan 09 00:52:06 crc kubenswrapper[4945]: I0109 00:52:06.248609 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4mkmq" Jan 09 00:52:06 crc kubenswrapper[4945]: I0109 00:52:06.271345 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-kldbw" podStartSLOduration=2.271323829 podStartE2EDuration="2.271323829s" podCreationTimestamp="2026-01-09 00:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:52:06.217030585 +0000 UTC m=+5796.528189531" watchObservedRunningTime="2026-01-09 00:52:06.271323829 +0000 UTC m=+5796.582482775" Jan 09 00:52:06 crc kubenswrapper[4945]: I0109 00:52:06.290485 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4mkmq"] Jan 09 00:52:07 crc kubenswrapper[4945]: I0109 00:52:07.209967 4945 generic.go:334] "Generic (PLEG): container finished" podID="e54eb183-3ffc-403d-a2bc-dc3e59b5da2d" containerID="f1428fe84dec5d10e47cc7b38cf2fb23c5278741b4aafbfb4ccbf4035a9ea4ed" exitCode=0 Jan 09 00:52:07 crc kubenswrapper[4945]: I0109 00:52:07.210037 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kldbw" 
event={"ID":"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d","Type":"ContainerDied","Data":"f1428fe84dec5d10e47cc7b38cf2fb23c5278741b4aafbfb4ccbf4035a9ea4ed"} Jan 09 00:52:08 crc kubenswrapper[4945]: I0109 00:52:08.224923 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4mkmq" podUID="6c0316e8-a59f-4811-8d61-141ff3aacaa9" containerName="registry-server" containerID="cri-o://921ae386c1d1bc54894c60700ec88988bea71a98b322eeef3c4b56e1290375b8" gracePeriod=2 Jan 09 00:52:08 crc kubenswrapper[4945]: I0109 00:52:08.583198 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kldbw" Jan 09 00:52:08 crc kubenswrapper[4945]: I0109 00:52:08.718169 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-db-sync-config-data\") pod \"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d\" (UID: \"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d\") " Jan 09 00:52:08 crc kubenswrapper[4945]: I0109 00:52:08.718363 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-combined-ca-bundle\") pod \"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d\" (UID: \"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d\") " Jan 09 00:52:08 crc kubenswrapper[4945]: I0109 00:52:08.718424 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6jzs\" (UniqueName: \"kubernetes.io/projected/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-kube-api-access-r6jzs\") pod \"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d\" (UID: \"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d\") " Jan 09 00:52:08 crc kubenswrapper[4945]: I0109 00:52:08.723817 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-kube-api-access-r6jzs" (OuterVolumeSpecName: "kube-api-access-r6jzs") pod "e54eb183-3ffc-403d-a2bc-dc3e59b5da2d" (UID: "e54eb183-3ffc-403d-a2bc-dc3e59b5da2d"). InnerVolumeSpecName "kube-api-access-r6jzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:52:08 crc kubenswrapper[4945]: I0109 00:52:08.724533 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e54eb183-3ffc-403d-a2bc-dc3e59b5da2d" (UID: "e54eb183-3ffc-403d-a2bc-dc3e59b5da2d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:52:08 crc kubenswrapper[4945]: I0109 00:52:08.758684 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e54eb183-3ffc-403d-a2bc-dc3e59b5da2d" (UID: "e54eb183-3ffc-403d-a2bc-dc3e59b5da2d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:52:08 crc kubenswrapper[4945]: I0109 00:52:08.821098 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:08 crc kubenswrapper[4945]: I0109 00:52:08.821152 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6jzs\" (UniqueName: \"kubernetes.io/projected/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-kube-api-access-r6jzs\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:08 crc kubenswrapper[4945]: I0109 00:52:08.821170 4945 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.114052 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4mkmq" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.227798 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdt95\" (UniqueName: \"kubernetes.io/projected/6c0316e8-a59f-4811-8d61-141ff3aacaa9-kube-api-access-cdt95\") pod \"6c0316e8-a59f-4811-8d61-141ff3aacaa9\" (UID: \"6c0316e8-a59f-4811-8d61-141ff3aacaa9\") " Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.228071 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c0316e8-a59f-4811-8d61-141ff3aacaa9-catalog-content\") pod \"6c0316e8-a59f-4811-8d61-141ff3aacaa9\" (UID: \"6c0316e8-a59f-4811-8d61-141ff3aacaa9\") " Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.228164 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c0316e8-a59f-4811-8d61-141ff3aacaa9-utilities\") pod \"6c0316e8-a59f-4811-8d61-141ff3aacaa9\" (UID: \"6c0316e8-a59f-4811-8d61-141ff3aacaa9\") " Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.229079 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c0316e8-a59f-4811-8d61-141ff3aacaa9-utilities" (OuterVolumeSpecName: "utilities") pod "6c0316e8-a59f-4811-8d61-141ff3aacaa9" (UID: "6c0316e8-a59f-4811-8d61-141ff3aacaa9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.233293 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c0316e8-a59f-4811-8d61-141ff3aacaa9-kube-api-access-cdt95" (OuterVolumeSpecName: "kube-api-access-cdt95") pod "6c0316e8-a59f-4811-8d61-141ff3aacaa9" (UID: "6c0316e8-a59f-4811-8d61-141ff3aacaa9"). InnerVolumeSpecName "kube-api-access-cdt95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.237989 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-kldbw" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.237984 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kldbw" event={"ID":"e54eb183-3ffc-403d-a2bc-dc3e59b5da2d","Type":"ContainerDied","Data":"63eeb6f22be238181050ebf70964acfd6dbf16263f0440ef41001d99903ff1aa"} Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.238233 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63eeb6f22be238181050ebf70964acfd6dbf16263f0440ef41001d99903ff1aa" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.241500 4945 generic.go:334] "Generic (PLEG): container finished" podID="6c0316e8-a59f-4811-8d61-141ff3aacaa9" containerID="921ae386c1d1bc54894c60700ec88988bea71a98b322eeef3c4b56e1290375b8" exitCode=0 Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.241554 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mkmq" event={"ID":"6c0316e8-a59f-4811-8d61-141ff3aacaa9","Type":"ContainerDied","Data":"921ae386c1d1bc54894c60700ec88988bea71a98b322eeef3c4b56e1290375b8"} Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.241575 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4mkmq" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.241586 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mkmq" event={"ID":"6c0316e8-a59f-4811-8d61-141ff3aacaa9","Type":"ContainerDied","Data":"0718878b0b23bac5e48fe01cf605716a50c42e0ccc95e7dfe1198eeba50d0727"} Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.241609 4945 scope.go:117] "RemoveContainer" containerID="921ae386c1d1bc54894c60700ec88988bea71a98b322eeef3c4b56e1290375b8" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.272531 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c0316e8-a59f-4811-8d61-141ff3aacaa9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c0316e8-a59f-4811-8d61-141ff3aacaa9" (UID: "6c0316e8-a59f-4811-8d61-141ff3aacaa9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.276448 4945 scope.go:117] "RemoveContainer" containerID="4dc2881a53dece5fba9796f88dbd55b03c191151d6cabc46ccd4465799048e14" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.293509 4945 scope.go:117] "RemoveContainer" containerID="2a6d0e3bd3d80a921b755dd7e21d8b8cf8dff04c3cf6054b31f5eca2adadd538" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.310004 4945 scope.go:117] "RemoveContainer" containerID="921ae386c1d1bc54894c60700ec88988bea71a98b322eeef3c4b56e1290375b8" Jan 09 00:52:09 crc kubenswrapper[4945]: E0109 00:52:09.310581 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"921ae386c1d1bc54894c60700ec88988bea71a98b322eeef3c4b56e1290375b8\": container with ID starting with 921ae386c1d1bc54894c60700ec88988bea71a98b322eeef3c4b56e1290375b8 not found: ID does not exist" containerID="921ae386c1d1bc54894c60700ec88988bea71a98b322eeef3c4b56e1290375b8" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.310622 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"921ae386c1d1bc54894c60700ec88988bea71a98b322eeef3c4b56e1290375b8"} err="failed to get container status \"921ae386c1d1bc54894c60700ec88988bea71a98b322eeef3c4b56e1290375b8\": rpc error: code = NotFound desc = could not find container \"921ae386c1d1bc54894c60700ec88988bea71a98b322eeef3c4b56e1290375b8\": container with ID starting with 921ae386c1d1bc54894c60700ec88988bea71a98b322eeef3c4b56e1290375b8 not found: ID does not exist" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.310648 4945 scope.go:117] "RemoveContainer" containerID="4dc2881a53dece5fba9796f88dbd55b03c191151d6cabc46ccd4465799048e14" Jan 09 00:52:09 crc kubenswrapper[4945]: E0109 00:52:09.311131 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dc2881a53dece5fba9796f88dbd55b03c191151d6cabc46ccd4465799048e14\": container with ID starting with 4dc2881a53dece5fba9796f88dbd55b03c191151d6cabc46ccd4465799048e14 not found: ID does not exist" containerID="4dc2881a53dece5fba9796f88dbd55b03c191151d6cabc46ccd4465799048e14" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.311292 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dc2881a53dece5fba9796f88dbd55b03c191151d6cabc46ccd4465799048e14"} err="failed to get container status \"4dc2881a53dece5fba9796f88dbd55b03c191151d6cabc46ccd4465799048e14\": rpc error: code = NotFound desc = could not find container \"4dc2881a53dece5fba9796f88dbd55b03c191151d6cabc46ccd4465799048e14\": container with ID starting with 4dc2881a53dece5fba9796f88dbd55b03c191151d6cabc46ccd4465799048e14 not found: ID does not exist" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.311459 4945 scope.go:117] "RemoveContainer" containerID="2a6d0e3bd3d80a921b755dd7e21d8b8cf8dff04c3cf6054b31f5eca2adadd538" Jan 09 00:52:09 crc kubenswrapper[4945]: E0109 00:52:09.312209 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a6d0e3bd3d80a921b755dd7e21d8b8cf8dff04c3cf6054b31f5eca2adadd538\": container with ID starting with 2a6d0e3bd3d80a921b755dd7e21d8b8cf8dff04c3cf6054b31f5eca2adadd538 not found: ID does not exist" containerID="2a6d0e3bd3d80a921b755dd7e21d8b8cf8dff04c3cf6054b31f5eca2adadd538" Jan 09 00:52:09 crc 
kubenswrapper[4945]: I0109 00:52:09.312243 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a6d0e3bd3d80a921b755dd7e21d8b8cf8dff04c3cf6054b31f5eca2adadd538"} err="failed to get container status \"2a6d0e3bd3d80a921b755dd7e21d8b8cf8dff04c3cf6054b31f5eca2adadd538\": rpc error: code = NotFound desc = could not find container \"2a6d0e3bd3d80a921b755dd7e21d8b8cf8dff04c3cf6054b31f5eca2adadd538\": container with ID starting with 2a6d0e3bd3d80a921b755dd7e21d8b8cf8dff04c3cf6054b31f5eca2adadd538 not found: ID does not exist" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.331195 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c0316e8-a59f-4811-8d61-141ff3aacaa9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.331572 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c0316e8-a59f-4811-8d61-141ff3aacaa9-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.331696 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdt95\" (UniqueName: \"kubernetes.io/projected/6c0316e8-a59f-4811-8d61-141ff3aacaa9-kube-api-access-cdt95\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.469232 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-f5db76bd7-2q6ts"] Jan 09 00:52:09 crc kubenswrapper[4945]: E0109 00:52:09.469680 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c0316e8-a59f-4811-8d61-141ff3aacaa9" containerName="extract-utilities" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.469706 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c0316e8-a59f-4811-8d61-141ff3aacaa9" containerName="extract-utilities" Jan 09 00:52:09 crc kubenswrapper[4945]: E0109 00:52:09.469732 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54eb183-3ffc-403d-a2bc-dc3e59b5da2d" containerName="barbican-db-sync" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.469742 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54eb183-3ffc-403d-a2bc-dc3e59b5da2d" containerName="barbican-db-sync" Jan 09 00:52:09 crc kubenswrapper[4945]: E0109 00:52:09.469766 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c0316e8-a59f-4811-8d61-141ff3aacaa9" containerName="extract-content" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.469775 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c0316e8-a59f-4811-8d61-141ff3aacaa9" containerName="extract-content" Jan 09 00:52:09 crc kubenswrapper[4945]: E0109 00:52:09.469800 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c0316e8-a59f-4811-8d61-141ff3aacaa9" containerName="registry-server" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.469810 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c0316e8-a59f-4811-8d61-141ff3aacaa9" containerName="registry-server" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.470038 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c0316e8-a59f-4811-8d61-141ff3aacaa9" containerName="registry-server" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.470069 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e54eb183-3ffc-403d-a2bc-dc3e59b5da2d" containerName="barbican-db-sync" Jan 09 
00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.471228 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f5db76bd7-2q6ts" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.475105 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.475639 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-v95dt" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.476083 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.500098 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-77648949c6-std4c"] Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.502078 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77648949c6-std4c" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.505579 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.511642 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f5db76bd7-2q6ts"] Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.530743 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-77648949c6-std4c"] Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.536088 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5288278-7bca-479f-8518-6bc622b31f66-config-data\") pod \"barbican-worker-f5db76bd7-2q6ts\" (UID: \"d5288278-7bca-479f-8518-6bc622b31f66\") " pod="openstack/barbican-worker-f5db76bd7-2q6ts" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.536188 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5288278-7bca-479f-8518-6bc622b31f66-logs\") pod \"barbican-worker-f5db76bd7-2q6ts\" (UID: \"d5288278-7bca-479f-8518-6bc622b31f66\") " pod="openstack/barbican-worker-f5db76bd7-2q6ts" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.536244 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbr5w\" (UniqueName: \"kubernetes.io/projected/d5288278-7bca-479f-8518-6bc622b31f66-kube-api-access-hbr5w\") pod \"barbican-worker-f5db76bd7-2q6ts\" (UID: \"d5288278-7bca-479f-8518-6bc622b31f66\") " pod="openstack/barbican-worker-f5db76bd7-2q6ts" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.536273 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5288278-7bca-479f-8518-6bc622b31f66-combined-ca-bundle\") pod \"barbican-worker-f5db76bd7-2q6ts\" (UID: \"d5288278-7bca-479f-8518-6bc622b31f66\") " pod="openstack/barbican-worker-f5db76bd7-2q6ts" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.536294 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5288278-7bca-479f-8518-6bc622b31f66-config-data-custom\") pod 
\"barbican-worker-f5db76bd7-2q6ts\" (UID: \"d5288278-7bca-479f-8518-6bc622b31f66\") " pod="openstack/barbican-worker-f5db76bd7-2q6ts" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.639190 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rthcj\" (UniqueName: \"kubernetes.io/projected/56ac70af-0c59-4ccc-9ca3-732b5bde275a-kube-api-access-rthcj\") pod \"barbican-keystone-listener-77648949c6-std4c\" (UID: \"56ac70af-0c59-4ccc-9ca3-732b5bde275a\") " pod="openstack/barbican-keystone-listener-77648949c6-std4c" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.639248 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbr5w\" (UniqueName: \"kubernetes.io/projected/d5288278-7bca-479f-8518-6bc622b31f66-kube-api-access-hbr5w\") pod \"barbican-worker-f5db76bd7-2q6ts\" (UID: \"d5288278-7bca-479f-8518-6bc622b31f66\") " pod="openstack/barbican-worker-f5db76bd7-2q6ts" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.639320 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5288278-7bca-479f-8518-6bc622b31f66-combined-ca-bundle\") pod \"barbican-worker-f5db76bd7-2q6ts\" (UID: \"d5288278-7bca-479f-8518-6bc622b31f66\") " pod="openstack/barbican-worker-f5db76bd7-2q6ts" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.639353 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5288278-7bca-479f-8518-6bc622b31f66-config-data-custom\") pod \"barbican-worker-f5db76bd7-2q6ts\" (UID: \"d5288278-7bca-479f-8518-6bc622b31f66\") " pod="openstack/barbican-worker-f5db76bd7-2q6ts" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.639386 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ac70af-0c59-4ccc-9ca3-732b5bde275a-logs\") pod \"barbican-keystone-listener-77648949c6-std4c\" (UID: \"56ac70af-0c59-4ccc-9ca3-732b5bde275a\") " pod="openstack/barbican-keystone-listener-77648949c6-std4c" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.639415 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5288278-7bca-479f-8518-6bc622b31f66-config-data\") pod \"barbican-worker-f5db76bd7-2q6ts\" (UID: \"d5288278-7bca-479f-8518-6bc622b31f66\") " pod="openstack/barbican-worker-f5db76bd7-2q6ts" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.639459 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ac70af-0c59-4ccc-9ca3-732b5bde275a-combined-ca-bundle\") pod \"barbican-keystone-listener-77648949c6-std4c\" (UID: \"56ac70af-0c59-4ccc-9ca3-732b5bde275a\") " pod="openstack/barbican-keystone-listener-77648949c6-std4c" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.639489 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ac70af-0c59-4ccc-9ca3-732b5bde275a-config-data\") pod \"barbican-keystone-listener-77648949c6-std4c\" (UID: \"56ac70af-0c59-4ccc-9ca3-732b5bde275a\") " pod="openstack/barbican-keystone-listener-77648949c6-std4c" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.639506 4945 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56ac70af-0c59-4ccc-9ca3-732b5bde275a-config-data-custom\") pod \"barbican-keystone-listener-77648949c6-std4c\" (UID: \"56ac70af-0c59-4ccc-9ca3-732b5bde275a\") " pod="openstack/barbican-keystone-listener-77648949c6-std4c" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.639532 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5288278-7bca-479f-8518-6bc622b31f66-logs\") pod \"barbican-worker-f5db76bd7-2q6ts\" (UID: \"d5288278-7bca-479f-8518-6bc622b31f66\") " pod="openstack/barbican-worker-f5db76bd7-2q6ts" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.639898 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5288278-7bca-479f-8518-6bc622b31f66-logs\") pod \"barbican-worker-f5db76bd7-2q6ts\" (UID: \"d5288278-7bca-479f-8518-6bc622b31f66\") " pod="openstack/barbican-worker-f5db76bd7-2q6ts" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.642742 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66bc9f9f69-ncgb6"] Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.646689 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5288278-7bca-479f-8518-6bc622b31f66-combined-ca-bundle\") pod \"barbican-worker-f5db76bd7-2q6ts\" (UID: \"d5288278-7bca-479f-8518-6bc622b31f66\") " pod="openstack/barbican-worker-f5db76bd7-2q6ts" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.646967 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.650612 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5288278-7bca-479f-8518-6bc622b31f66-config-data\") pod \"barbican-worker-f5db76bd7-2q6ts\" (UID: \"d5288278-7bca-479f-8518-6bc622b31f66\") " pod="openstack/barbican-worker-f5db76bd7-2q6ts" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.660587 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66bc9f9f69-ncgb6"] Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.665065 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d5288278-7bca-479f-8518-6bc622b31f66-config-data-custom\") pod \"barbican-worker-f5db76bd7-2q6ts\" (UID: \"d5288278-7bca-479f-8518-6bc622b31f66\") " pod="openstack/barbican-worker-f5db76bd7-2q6ts" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.670280 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbr5w\" (UniqueName: \"kubernetes.io/projected/d5288278-7bca-479f-8518-6bc622b31f66-kube-api-access-hbr5w\") pod \"barbican-worker-f5db76bd7-2q6ts\" (UID: \"d5288278-7bca-479f-8518-6bc622b31f66\") " pod="openstack/barbican-worker-f5db76bd7-2q6ts" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.674098 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4mkmq"] Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.687638 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4mkmq"] Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.709844 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-799f9b6998-2jhb7"] Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.711474 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.714408 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.718430 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-799f9b6998-2jhb7"] Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.740564 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rthcj\" (UniqueName: \"kubernetes.io/projected/56ac70af-0c59-4ccc-9ca3-732b5bde275a-kube-api-access-rthcj\") pod \"barbican-keystone-listener-77648949c6-std4c\" (UID: \"56ac70af-0c59-4ccc-9ca3-732b5bde275a\") " pod="openstack/barbican-keystone-listener-77648949c6-std4c" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.740609 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82vsx\" (UniqueName: \"kubernetes.io/projected/f94d82a2-e911-43e8-af40-5154ede205cd-kube-api-access-82vsx\") pod \"dnsmasq-dns-66bc9f9f69-ncgb6\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") " pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.740635 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-config\") pod \"dnsmasq-dns-66bc9f9f69-ncgb6\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") " pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.740656 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-ovsdbserver-sb\") pod \"dnsmasq-dns-66bc9f9f69-ncgb6\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") " pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.741078 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-dns-svc\") pod \"dnsmasq-dns-66bc9f9f69-ncgb6\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") " pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.741119 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ac70af-0c59-4ccc-9ca3-732b5bde275a-logs\") pod \"barbican-keystone-listener-77648949c6-std4c\" (UID: \"56ac70af-0c59-4ccc-9ca3-732b5bde275a\") " pod="openstack/barbican-keystone-listener-77648949c6-std4c" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.741213 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ac70af-0c59-4ccc-9ca3-732b5bde275a-combined-ca-bundle\") pod \"barbican-keystone-listener-77648949c6-std4c\" (UID: \"56ac70af-0c59-4ccc-9ca3-732b5bde275a\") " pod="openstack/barbican-keystone-listener-77648949c6-std4c" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.741254 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56ac70af-0c59-4ccc-9ca3-732b5bde275a-config-data-custom\") pod 
\"barbican-keystone-listener-77648949c6-std4c\" (UID: \"56ac70af-0c59-4ccc-9ca3-732b5bde275a\") " pod="openstack/barbican-keystone-listener-77648949c6-std4c" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.741273 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ac70af-0c59-4ccc-9ca3-732b5bde275a-config-data\") pod \"barbican-keystone-listener-77648949c6-std4c\" (UID: \"56ac70af-0c59-4ccc-9ca3-732b5bde275a\") " pod="openstack/barbican-keystone-listener-77648949c6-std4c" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.741365 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-ovsdbserver-nb\") pod \"dnsmasq-dns-66bc9f9f69-ncgb6\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") " pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.742922 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ac70af-0c59-4ccc-9ca3-732b5bde275a-logs\") pod \"barbican-keystone-listener-77648949c6-std4c\" (UID: \"56ac70af-0c59-4ccc-9ca3-732b5bde275a\") " pod="openstack/barbican-keystone-listener-77648949c6-std4c" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.764854 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56ac70af-0c59-4ccc-9ca3-732b5bde275a-config-data-custom\") pod \"barbican-keystone-listener-77648949c6-std4c\" (UID: \"56ac70af-0c59-4ccc-9ca3-732b5bde275a\") " pod="openstack/barbican-keystone-listener-77648949c6-std4c" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.765362 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ac70af-0c59-4ccc-9ca3-732b5bde275a-combined-ca-bundle\") pod \"barbican-keystone-listener-77648949c6-std4c\" (UID: \"56ac70af-0c59-4ccc-9ca3-732b5bde275a\") " pod="openstack/barbican-keystone-listener-77648949c6-std4c" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.765567 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ac70af-0c59-4ccc-9ca3-732b5bde275a-config-data\") pod \"barbican-keystone-listener-77648949c6-std4c\" (UID: \"56ac70af-0c59-4ccc-9ca3-732b5bde275a\") " pod="openstack/barbican-keystone-listener-77648949c6-std4c" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.769503 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rthcj\" (UniqueName: \"kubernetes.io/projected/56ac70af-0c59-4ccc-9ca3-732b5bde275a-kube-api-access-rthcj\") pod \"barbican-keystone-listener-77648949c6-std4c\" (UID: \"56ac70af-0c59-4ccc-9ca3-732b5bde275a\") " pod="openstack/barbican-keystone-listener-77648949c6-std4c" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.792324 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f5db76bd7-2q6ts" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.838763 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-77648949c6-std4c" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.844519 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6142446-21fa-43af-b0c7-46ed8f5111c0-logs\") pod \"barbican-api-799f9b6998-2jhb7\" (UID: \"d6142446-21fa-43af-b0c7-46ed8f5111c0\") " pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.844614 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dp58\" (UniqueName: \"kubernetes.io/projected/d6142446-21fa-43af-b0c7-46ed8f5111c0-kube-api-access-4dp58\") pod \"barbican-api-799f9b6998-2jhb7\" (UID: \"d6142446-21fa-43af-b0c7-46ed8f5111c0\") " pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.844669 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-ovsdbserver-nb\") pod \"dnsmasq-dns-66bc9f9f69-ncgb6\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") " pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.844715 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82vsx\" (UniqueName: \"kubernetes.io/projected/f94d82a2-e911-43e8-af40-5154ede205cd-kube-api-access-82vsx\") pod \"dnsmasq-dns-66bc9f9f69-ncgb6\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") " pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.844745 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-config\") pod \"dnsmasq-dns-66bc9f9f69-ncgb6\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") " pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.844770 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-ovsdbserver-sb\") pod \"dnsmasq-dns-66bc9f9f69-ncgb6\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") " pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.844793 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6142446-21fa-43af-b0c7-46ed8f5111c0-combined-ca-bundle\") pod \"barbican-api-799f9b6998-2jhb7\" (UID: \"d6142446-21fa-43af-b0c7-46ed8f5111c0\") " pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.844824 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6142446-21fa-43af-b0c7-46ed8f5111c0-config-data\") pod \"barbican-api-799f9b6998-2jhb7\" (UID: \"d6142446-21fa-43af-b0c7-46ed8f5111c0\") " pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.844858 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-dns-svc\") pod \"dnsmasq-dns-66bc9f9f69-ncgb6\" (UID: 
\"f94d82a2-e911-43e8-af40-5154ede205cd\") " pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.844918 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6142446-21fa-43af-b0c7-46ed8f5111c0-config-data-custom\") pod \"barbican-api-799f9b6998-2jhb7\" (UID: \"d6142446-21fa-43af-b0c7-46ed8f5111c0\") " pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.846201 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-dns-svc\") pod \"dnsmasq-dns-66bc9f9f69-ncgb6\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") " pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.846242 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-ovsdbserver-sb\") pod \"dnsmasq-dns-66bc9f9f69-ncgb6\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") " pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.846497 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-ovsdbserver-nb\") pod \"dnsmasq-dns-66bc9f9f69-ncgb6\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") " pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.847698 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-config\") pod \"dnsmasq-dns-66bc9f9f69-ncgb6\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") " pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.863678 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82vsx\" (UniqueName: \"kubernetes.io/projected/f94d82a2-e911-43e8-af40-5154ede205cd-kube-api-access-82vsx\") pod \"dnsmasq-dns-66bc9f9f69-ncgb6\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") " pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.948966 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dp58\" (UniqueName: \"kubernetes.io/projected/d6142446-21fa-43af-b0c7-46ed8f5111c0-kube-api-access-4dp58\") pod \"barbican-api-799f9b6998-2jhb7\" (UID: \"d6142446-21fa-43af-b0c7-46ed8f5111c0\") " pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.949256 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6142446-21fa-43af-b0c7-46ed8f5111c0-combined-ca-bundle\") pod \"barbican-api-799f9b6998-2jhb7\" (UID: \"d6142446-21fa-43af-b0c7-46ed8f5111c0\") " pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.949280 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6142446-21fa-43af-b0c7-46ed8f5111c0-config-data\") pod \"barbican-api-799f9b6998-2jhb7\" (UID: \"d6142446-21fa-43af-b0c7-46ed8f5111c0\") " 
pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.949325 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6142446-21fa-43af-b0c7-46ed8f5111c0-config-data-custom\") pod \"barbican-api-799f9b6998-2jhb7\" (UID: \"d6142446-21fa-43af-b0c7-46ed8f5111c0\") " pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.949376 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6142446-21fa-43af-b0c7-46ed8f5111c0-logs\") pod \"barbican-api-799f9b6998-2jhb7\" (UID: \"d6142446-21fa-43af-b0c7-46ed8f5111c0\") " pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.949988 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6142446-21fa-43af-b0c7-46ed8f5111c0-logs\") pod \"barbican-api-799f9b6998-2jhb7\" (UID: \"d6142446-21fa-43af-b0c7-46ed8f5111c0\") " pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.958840 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6142446-21fa-43af-b0c7-46ed8f5111c0-combined-ca-bundle\") pod \"barbican-api-799f9b6998-2jhb7\" (UID: \"d6142446-21fa-43af-b0c7-46ed8f5111c0\") " pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.975687 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6142446-21fa-43af-b0c7-46ed8f5111c0-config-data\") pod \"barbican-api-799f9b6998-2jhb7\" (UID: \"d6142446-21fa-43af-b0c7-46ed8f5111c0\") " pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.976700 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6142446-21fa-43af-b0c7-46ed8f5111c0-config-data-custom\") pod \"barbican-api-799f9b6998-2jhb7\" (UID: \"d6142446-21fa-43af-b0c7-46ed8f5111c0\") " pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:09 crc kubenswrapper[4945]: I0109 00:52:09.997770 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dp58\" (UniqueName: \"kubernetes.io/projected/d6142446-21fa-43af-b0c7-46ed8f5111c0-kube-api-access-4dp58\") pod \"barbican-api-799f9b6998-2jhb7\" (UID: \"d6142446-21fa-43af-b0c7-46ed8f5111c0\") " pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:10 crc kubenswrapper[4945]: I0109 00:52:10.026273 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c0316e8-a59f-4811-8d61-141ff3aacaa9" path="/var/lib/kubelet/pods/6c0316e8-a59f-4811-8d61-141ff3aacaa9/volumes" Jan 09 00:52:10 crc kubenswrapper[4945]: I0109 00:52:10.037803 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:10 crc kubenswrapper[4945]: I0109 00:52:10.125456 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:10 crc kubenswrapper[4945]: I0109 00:52:10.400623 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-77648949c6-std4c"] Jan 09 00:52:10 crc kubenswrapper[4945]: I0109 00:52:10.558454 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f5db76bd7-2q6ts"] Jan 09 00:52:10 crc kubenswrapper[4945]: I0109 00:52:10.676054 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66bc9f9f69-ncgb6"] Jan 09 00:52:10 crc kubenswrapper[4945]: I0109 00:52:10.741124 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-799f9b6998-2jhb7"] Jan 09 00:52:10 crc kubenswrapper[4945]: W0109 00:52:10.748626 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6142446_21fa_43af_b0c7_46ed8f5111c0.slice/crio-6cfc3c8402cebf77bbb6ef73b9025267c157bb0550b4a5b60cb56c99e7c2fffd WatchSource:0}: Error finding container 6cfc3c8402cebf77bbb6ef73b9025267c157bb0550b4a5b60cb56c99e7c2fffd: Status 404 returned error can't find the container with id 6cfc3c8402cebf77bbb6ef73b9025267c157bb0550b4a5b60cb56c99e7c2fffd Jan 09 00:52:11 crc kubenswrapper[4945]: I0109 00:52:11.271019 4945 generic.go:334] "Generic (PLEG): container finished" podID="f94d82a2-e911-43e8-af40-5154ede205cd" containerID="68821b2587507884a1d274f75d926b1ec3c93bf115479e9663c88cfcd00e3830" exitCode=0 Jan 09 00:52:11 crc kubenswrapper[4945]: I0109 00:52:11.271086 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" event={"ID":"f94d82a2-e911-43e8-af40-5154ede205cd","Type":"ContainerDied","Data":"68821b2587507884a1d274f75d926b1ec3c93bf115479e9663c88cfcd00e3830"} Jan 09 00:52:11 crc kubenswrapper[4945]: I0109 00:52:11.271117 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" event={"ID":"f94d82a2-e911-43e8-af40-5154ede205cd","Type":"ContainerStarted","Data":"26f64dd4cee1c0675301804bfc2e2fc51ae80a03a2ff725c13e3540fedd714a4"} Jan 09 00:52:11 crc kubenswrapper[4945]: I0109 00:52:11.275697 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77648949c6-std4c" event={"ID":"56ac70af-0c59-4ccc-9ca3-732b5bde275a","Type":"ContainerStarted","Data":"959c3207e07268e32dd0bbfe7fec73ba4f7b7b8306fe98417066d73c56d2f442"} Jan 09 00:52:11 crc kubenswrapper[4945]: I0109 00:52:11.275891 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77648949c6-std4c" event={"ID":"56ac70af-0c59-4ccc-9ca3-732b5bde275a","Type":"ContainerStarted","Data":"4408cb571ffa2d77a8069453b518679a849d8e469c946e6360da2226e62a9e2b"} Jan 09 00:52:11 crc kubenswrapper[4945]: I0109 00:52:11.275998 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77648949c6-std4c" event={"ID":"56ac70af-0c59-4ccc-9ca3-732b5bde275a","Type":"ContainerStarted","Data":"b14af3c95a2b69e7f34523df9f3246cdbf13688648d13bfc1987ccfd5c40ec36"} Jan 09 00:52:11 crc kubenswrapper[4945]: I0109 00:52:11.289499 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f5db76bd7-2q6ts" event={"ID":"d5288278-7bca-479f-8518-6bc622b31f66","Type":"ContainerStarted","Data":"90afbea2d01d67bc554c4b199f4a47d77b303d002b67db4de3721eff6b49581a"} Jan 09 00:52:11 crc kubenswrapper[4945]: I0109 00:52:11.289549 4945 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f5db76bd7-2q6ts" event={"ID":"d5288278-7bca-479f-8518-6bc622b31f66","Type":"ContainerStarted","Data":"fc322cd259269b28f4d7dbc14b2b0fea8738d8ee916149ce98d6db4ff1514f6a"} Jan 09 00:52:11 crc kubenswrapper[4945]: I0109 00:52:11.289567 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f5db76bd7-2q6ts" event={"ID":"d5288278-7bca-479f-8518-6bc622b31f66","Type":"ContainerStarted","Data":"3b57da9c86df668cb68e10bda8c662e92b1283bc361310558f039d376d14bb43"} Jan 09 00:52:11 crc kubenswrapper[4945]: I0109 00:52:11.295057 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-799f9b6998-2jhb7" event={"ID":"d6142446-21fa-43af-b0c7-46ed8f5111c0","Type":"ContainerStarted","Data":"0229a971c9a4f46bc2a42d4efdfcced7e6ed0a1b623f60be8b282d55b85aab09"} Jan 09 00:52:11 crc kubenswrapper[4945]: I0109 00:52:11.295106 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-799f9b6998-2jhb7" event={"ID":"d6142446-21fa-43af-b0c7-46ed8f5111c0","Type":"ContainerStarted","Data":"0bff72f9dc80fc9540b3da48f2fe8adcb2594e8d85263343414a8b1687ff0914"} Jan 09 00:52:11 crc kubenswrapper[4945]: I0109 00:52:11.295117 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-799f9b6998-2jhb7" event={"ID":"d6142446-21fa-43af-b0c7-46ed8f5111c0","Type":"ContainerStarted","Data":"6cfc3c8402cebf77bbb6ef73b9025267c157bb0550b4a5b60cb56c99e7c2fffd"} Jan 09 00:52:11 crc kubenswrapper[4945]: I0109 00:52:11.295649 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:11 crc kubenswrapper[4945]: I0109 00:52:11.295676 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:11 crc kubenswrapper[4945]: I0109 00:52:11.319728 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-f5db76bd7-2q6ts" podStartSLOduration=2.319709347 podStartE2EDuration="2.319709347s" podCreationTimestamp="2026-01-09 00:52:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:52:11.316486068 +0000 UTC m=+5801.627645014" watchObservedRunningTime="2026-01-09 00:52:11.319709347 +0000 UTC m=+5801.630868293" Jan 09 00:52:11 crc kubenswrapper[4945]: I0109 00:52:11.341167 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-77648949c6-std4c" podStartSLOduration=2.341149983 podStartE2EDuration="2.341149983s" podCreationTimestamp="2026-01-09 00:52:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:52:11.329418425 +0000 UTC m=+5801.640577371" watchObservedRunningTime="2026-01-09 00:52:11.341149983 +0000 UTC m=+5801.652308929" Jan 09 00:52:11 crc kubenswrapper[4945]: I0109 00:52:11.364763 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-799f9b6998-2jhb7" podStartSLOduration=2.364717992 podStartE2EDuration="2.364717992s" podCreationTimestamp="2026-01-09 00:52:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:52:11.360314194 +0000 UTC m=+5801.671473140" watchObservedRunningTime="2026-01-09 
00:52:11.364717992 +0000 UTC m=+5801.675876948" Jan 09 00:52:12 crc kubenswrapper[4945]: I0109 00:52:12.303580 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" event={"ID":"f94d82a2-e911-43e8-af40-5154ede205cd","Type":"ContainerStarted","Data":"6ed8d4e33cf4e7819c04ba5b363e15e9cf65e879d117c6d92d5bfe9add59e3a3"} Jan 09 00:52:12 crc kubenswrapper[4945]: I0109 00:52:12.325789 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" podStartSLOduration=3.3257699560000002 podStartE2EDuration="3.325769956s" podCreationTimestamp="2026-01-09 00:52:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:52:12.322225859 +0000 UTC m=+5802.633384815" watchObservedRunningTime="2026-01-09 00:52:12.325769956 +0000 UTC m=+5802.636928902" Jan 09 00:52:13 crc kubenswrapper[4945]: I0109 00:52:13.311173 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:15 crc kubenswrapper[4945]: I0109 00:52:15.001228 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb" Jan 09 00:52:15 crc kubenswrapper[4945]: E0109 00:52:15.002321 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:52:16 crc kubenswrapper[4945]: I0109 00:52:16.607336 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:18 crc kubenswrapper[4945]: I0109 00:52:18.051099 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-799f9b6998-2jhb7" Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.040154 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.108289 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745c4fff85-p6z29"] Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.108612 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-745c4fff85-p6z29" podUID="1cb1705d-9097-4779-8ee3-87f5924ab655" containerName="dnsmasq-dns" containerID="cri-o://554a004fdcbb18dc5410f7a55e5af7d3b0ffbcfa1db9c2e054a1a5bba9d2f74d" gracePeriod=10 Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.376609 4945 generic.go:334] "Generic (PLEG): container finished" podID="1cb1705d-9097-4779-8ee3-87f5924ab655" containerID="554a004fdcbb18dc5410f7a55e5af7d3b0ffbcfa1db9c2e054a1a5bba9d2f74d" exitCode=0 Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.376710 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745c4fff85-p6z29" event={"ID":"1cb1705d-9097-4779-8ee3-87f5924ab655","Type":"ContainerDied","Data":"554a004fdcbb18dc5410f7a55e5af7d3b0ffbcfa1db9c2e054a1a5bba9d2f74d"} Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.616717 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.682474 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-ovsdbserver-sb\") pod \"1cb1705d-9097-4779-8ee3-87f5924ab655\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.682550 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-config\") pod \"1cb1705d-9097-4779-8ee3-87f5924ab655\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.682742 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-dns-svc\") pod \"1cb1705d-9097-4779-8ee3-87f5924ab655\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.682769 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8cfw\" (UniqueName: \"kubernetes.io/projected/1cb1705d-9097-4779-8ee3-87f5924ab655-kube-api-access-g8cfw\") pod \"1cb1705d-9097-4779-8ee3-87f5924ab655\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.682801 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-ovsdbserver-nb\") pod \"1cb1705d-9097-4779-8ee3-87f5924ab655\" (UID: \"1cb1705d-9097-4779-8ee3-87f5924ab655\") " Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.709204 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb1705d-9097-4779-8ee3-87f5924ab655-kube-api-access-g8cfw" (OuterVolumeSpecName: "kube-api-access-g8cfw") pod "1cb1705d-9097-4779-8ee3-87f5924ab655" (UID: "1cb1705d-9097-4779-8ee3-87f5924ab655"). InnerVolumeSpecName "kube-api-access-g8cfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.743510 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1cb1705d-9097-4779-8ee3-87f5924ab655" (UID: "1cb1705d-9097-4779-8ee3-87f5924ab655"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.743554 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-config" (OuterVolumeSpecName: "config") pod "1cb1705d-9097-4779-8ee3-87f5924ab655" (UID: "1cb1705d-9097-4779-8ee3-87f5924ab655"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.762383 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1cb1705d-9097-4779-8ee3-87f5924ab655" (UID: "1cb1705d-9097-4779-8ee3-87f5924ab655"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.787353 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1cb1705d-9097-4779-8ee3-87f5924ab655" (UID: "1cb1705d-9097-4779-8ee3-87f5924ab655"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.793338 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.793375 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8cfw\" (UniqueName: \"kubernetes.io/projected/1cb1705d-9097-4779-8ee3-87f5924ab655-kube-api-access-g8cfw\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.793388 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.793398 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:20 crc kubenswrapper[4945]: I0109 00:52:20.793406 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb1705d-9097-4779-8ee3-87f5924ab655-config\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:21 crc kubenswrapper[4945]: I0109 00:52:21.390805 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745c4fff85-p6z29" event={"ID":"1cb1705d-9097-4779-8ee3-87f5924ab655","Type":"ContainerDied","Data":"42ce9bff01f95aa2fb35ba8d3844105ca2d0a511ac711415e5496f8a4960b9d3"} Jan 09 00:52:21 crc kubenswrapper[4945]: I0109 00:52:21.390887 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745c4fff85-p6z29" Jan 09 00:52:21 crc kubenswrapper[4945]: I0109 00:52:21.390904 4945 scope.go:117] "RemoveContainer" containerID="554a004fdcbb18dc5410f7a55e5af7d3b0ffbcfa1db9c2e054a1a5bba9d2f74d" Jan 09 00:52:21 crc kubenswrapper[4945]: I0109 00:52:21.419526 4945 scope.go:117] "RemoveContainer" containerID="d40319eeade6620393952b7d683aaa279c7a0008a2a7457264c145f1f7f74a66" Jan 09 00:52:21 crc kubenswrapper[4945]: I0109 00:52:21.440328 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745c4fff85-p6z29"] Jan 09 00:52:21 crc kubenswrapper[4945]: I0109 00:52:21.446153 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-745c4fff85-p6z29"] Jan 09 00:52:22 crc kubenswrapper[4945]: I0109 00:52:22.009738 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb1705d-9097-4779-8ee3-87f5924ab655" path="/var/lib/kubelet/pods/1cb1705d-9097-4779-8ee3-87f5924ab655/volumes" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.022931 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb" Jan 09 00:52:30 crc kubenswrapper[4945]: E0109 00:52:30.026332 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.206404 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-728qx"] Jan 09 00:52:30 crc kubenswrapper[4945]: E0109 00:52:30.206772 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb1705d-9097-4779-8ee3-87f5924ab655" containerName="dnsmasq-dns" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.206789 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb1705d-9097-4779-8ee3-87f5924ab655" containerName="dnsmasq-dns" Jan 09 00:52:30 crc kubenswrapper[4945]: E0109 00:52:30.206816 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb1705d-9097-4779-8ee3-87f5924ab655" containerName="init" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.206822 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb1705d-9097-4779-8ee3-87f5924ab655" containerName="init" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.206974 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb1705d-9097-4779-8ee3-87f5924ab655" containerName="dnsmasq-dns" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.207645 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-728qx" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.216054 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-728qx"] Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.303308 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-4821-account-create-update-jrfjd"] Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.304139 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6-operator-scripts\") pod \"neutron-db-create-728qx\" (UID: \"ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6\") " pod="openstack/neutron-db-create-728qx" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.304198 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6gft\" (UniqueName: \"kubernetes.io/projected/ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6-kube-api-access-r6gft\") pod \"neutron-db-create-728qx\" (UID: \"ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6\") " pod="openstack/neutron-db-create-728qx" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.304830 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4821-account-create-update-jrfjd" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.307134 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.319586 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4821-account-create-update-jrfjd"] Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.406235 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r86fq\" (UniqueName: \"kubernetes.io/projected/33b819af-d0d3-40b2-9775-f88a358e9083-kube-api-access-r86fq\") pod \"neutron-4821-account-create-update-jrfjd\" (UID: \"33b819af-d0d3-40b2-9775-f88a358e9083\") " pod="openstack/neutron-4821-account-create-update-jrfjd" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.406478 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6-operator-scripts\") pod \"neutron-db-create-728qx\" (UID: \"ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6\") " pod="openstack/neutron-db-create-728qx" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.406551 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b819af-d0d3-40b2-9775-f88a358e9083-operator-scripts\") pod \"neutron-4821-account-create-update-jrfjd\" (UID: \"33b819af-d0d3-40b2-9775-f88a358e9083\") " pod="openstack/neutron-4821-account-create-update-jrfjd" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.406606 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6gft\" (UniqueName: \"kubernetes.io/projected/ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6-kube-api-access-r6gft\") pod \"neutron-db-create-728qx\" (UID: \"ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6\") " pod="openstack/neutron-db-create-728qx" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.407491 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6-operator-scripts\") pod \"neutron-db-create-728qx\" (UID: \"ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6\") " pod="openstack/neutron-db-create-728qx" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.424425 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6gft\" (UniqueName: \"kubernetes.io/projected/ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6-kube-api-access-r6gft\") pod \"neutron-db-create-728qx\" (UID: \"ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6\") " pod="openstack/neutron-db-create-728qx" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.508640 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r86fq\" (UniqueName: \"kubernetes.io/projected/33b819af-d0d3-40b2-9775-f88a358e9083-kube-api-access-r86fq\") pod \"neutron-4821-account-create-update-jrfjd\" (UID: \"33b819af-d0d3-40b2-9775-f88a358e9083\") " pod="openstack/neutron-4821-account-create-update-jrfjd" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.508774 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b819af-d0d3-40b2-9775-f88a358e9083-operator-scripts\") pod \"neutron-4821-account-create-update-jrfjd\" (UID: \"33b819af-d0d3-40b2-9775-f88a358e9083\") " pod="openstack/neutron-4821-account-create-update-jrfjd" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.509486 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b819af-d0d3-40b2-9775-f88a358e9083-operator-scripts\") pod \"neutron-4821-account-create-update-jrfjd\" (UID: \"33b819af-d0d3-40b2-9775-f88a358e9083\") " pod="openstack/neutron-4821-account-create-update-jrfjd" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.527547 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r86fq\" (UniqueName: \"kubernetes.io/projected/33b819af-d0d3-40b2-9775-f88a358e9083-kube-api-access-r86fq\") pod \"neutron-4821-account-create-update-jrfjd\" (UID: \"33b819af-d0d3-40b2-9775-f88a358e9083\") " pod="openstack/neutron-4821-account-create-update-jrfjd" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.527953 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-728qx" Jan 09 00:52:30 crc kubenswrapper[4945]: I0109 00:52:30.619771 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4821-account-create-update-jrfjd" Jan 09 00:52:31 crc kubenswrapper[4945]: I0109 00:52:31.011350 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-728qx"] Jan 09 00:52:31 crc kubenswrapper[4945]: W0109 00:52:31.013161 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea62a4e1_81d5_4e9c_8d1c_e9df2f6bbac6.slice/crio-03c70e4a87e6b711c4bf39d52f2d838aa02e7c673614c7ec7292a56876828209 WatchSource:0}: Error finding container 03c70e4a87e6b711c4bf39d52f2d838aa02e7c673614c7ec7292a56876828209: Status 404 returned error can't find the container with id 03c70e4a87e6b711c4bf39d52f2d838aa02e7c673614c7ec7292a56876828209 Jan 09 00:52:31 crc kubenswrapper[4945]: I0109 00:52:31.077352 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4821-account-create-update-jrfjd"] Jan 09 00:52:31 crc kubenswrapper[4945]: W0109 00:52:31.084160 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33b819af_d0d3_40b2_9775_f88a358e9083.slice/crio-1c447230262edbda5a83513fcd656688676fff1eac66cfc38ebf6f75bc8516e7 WatchSource:0}: Error finding container 1c447230262edbda5a83513fcd656688676fff1eac66cfc38ebf6f75bc8516e7: Status 404 returned error can't find the container with id 1c447230262edbda5a83513fcd656688676fff1eac66cfc38ebf6f75bc8516e7 Jan 09 00:52:31 crc kubenswrapper[4945]: I0109 00:52:31.470703 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-728qx" event={"ID":"ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6","Type":"ContainerStarted","Data":"2330f57cc7c5aa4caedcbe55a521d5163e8bea7bd1b9d5e58221cd4fa8911ae6"} Jan 09 00:52:31 crc kubenswrapper[4945]: I0109 00:52:31.471165 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-728qx" event={"ID":"ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6","Type":"ContainerStarted","Data":"03c70e4a87e6b711c4bf39d52f2d838aa02e7c673614c7ec7292a56876828209"} Jan 09 00:52:31 crc kubenswrapper[4945]: I0109 00:52:31.472296 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4821-account-create-update-jrfjd" event={"ID":"33b819af-d0d3-40b2-9775-f88a358e9083","Type":"ContainerStarted","Data":"990ec3b04177bedcf2203a933b166932eef7f545c056d09edafea421e62bd709"} Jan 09 00:52:31 crc kubenswrapper[4945]: I0109 00:52:31.472354 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4821-account-create-update-jrfjd" event={"ID":"33b819af-d0d3-40b2-9775-f88a358e9083","Type":"ContainerStarted","Data":"1c447230262edbda5a83513fcd656688676fff1eac66cfc38ebf6f75bc8516e7"} Jan 09 00:52:31 crc kubenswrapper[4945]: I0109 00:52:31.487085 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-728qx" podStartSLOduration=1.4870635700000001 podStartE2EDuration="1.48706357s" podCreationTimestamp="2026-01-09 00:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:52:31.484031745 +0000 UTC m=+5821.795190691" watchObservedRunningTime="2026-01-09 00:52:31.48706357 +0000 UTC m=+5821.798222516" Jan 09 00:52:31 crc kubenswrapper[4945]: I0109 00:52:31.509964 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-4821-account-create-update-jrfjd" 
podStartSLOduration=1.509943732 podStartE2EDuration="1.509943732s" podCreationTimestamp="2026-01-09 00:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:52:31.508382173 +0000 UTC m=+5821.819541129" watchObservedRunningTime="2026-01-09 00:52:31.509943732 +0000 UTC m=+5821.821102668" Jan 09 00:52:32 crc kubenswrapper[4945]: I0109 00:52:32.481930 4945 generic.go:334] "Generic (PLEG): container finished" podID="33b819af-d0d3-40b2-9775-f88a358e9083" containerID="990ec3b04177bedcf2203a933b166932eef7f545c056d09edafea421e62bd709" exitCode=0 Jan 09 00:52:32 crc kubenswrapper[4945]: I0109 00:52:32.482063 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4821-account-create-update-jrfjd" event={"ID":"33b819af-d0d3-40b2-9775-f88a358e9083","Type":"ContainerDied","Data":"990ec3b04177bedcf2203a933b166932eef7f545c056d09edafea421e62bd709"} Jan 09 00:52:32 crc kubenswrapper[4945]: I0109 00:52:32.483732 4945 generic.go:334] "Generic (PLEG): container finished" podID="ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6" containerID="2330f57cc7c5aa4caedcbe55a521d5163e8bea7bd1b9d5e58221cd4fa8911ae6" exitCode=0 Jan 09 00:52:32 crc kubenswrapper[4945]: I0109 00:52:32.483788 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-728qx" event={"ID":"ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6","Type":"ContainerDied","Data":"2330f57cc7c5aa4caedcbe55a521d5163e8bea7bd1b9d5e58221cd4fa8911ae6"} Jan 09 00:52:33 crc kubenswrapper[4945]: I0109 00:52:33.864162 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-728qx" Jan 09 00:52:33 crc kubenswrapper[4945]: I0109 00:52:33.869807 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4821-account-create-update-jrfjd" Jan 09 00:52:33 crc kubenswrapper[4945]: I0109 00:52:33.966208 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r86fq\" (UniqueName: \"kubernetes.io/projected/33b819af-d0d3-40b2-9775-f88a358e9083-kube-api-access-r86fq\") pod \"33b819af-d0d3-40b2-9775-f88a358e9083\" (UID: \"33b819af-d0d3-40b2-9775-f88a358e9083\") " Jan 09 00:52:33 crc kubenswrapper[4945]: I0109 00:52:33.966527 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6gft\" (UniqueName: \"kubernetes.io/projected/ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6-kube-api-access-r6gft\") pod \"ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6\" (UID: \"ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6\") " Jan 09 00:52:33 crc kubenswrapper[4945]: I0109 00:52:33.966642 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b819af-d0d3-40b2-9775-f88a358e9083-operator-scripts\") pod \"33b819af-d0d3-40b2-9775-f88a358e9083\" (UID: \"33b819af-d0d3-40b2-9775-f88a358e9083\") " Jan 09 00:52:33 crc kubenswrapper[4945]: I0109 00:52:33.966735 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6-operator-scripts\") pod \"ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6\" (UID: \"ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6\") " Jan 09 00:52:33 crc kubenswrapper[4945]: I0109 00:52:33.967484 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6" (UID: "ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:52:33 crc kubenswrapper[4945]: I0109 00:52:33.967744 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33b819af-d0d3-40b2-9775-f88a358e9083-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33b819af-d0d3-40b2-9775-f88a358e9083" (UID: "33b819af-d0d3-40b2-9775-f88a358e9083"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:52:33 crc kubenswrapper[4945]: I0109 00:52:33.973167 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6-kube-api-access-r6gft" (OuterVolumeSpecName: "kube-api-access-r6gft") pod "ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6" (UID: "ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6"). InnerVolumeSpecName "kube-api-access-r6gft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:52:33 crc kubenswrapper[4945]: I0109 00:52:33.973242 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33b819af-d0d3-40b2-9775-f88a358e9083-kube-api-access-r86fq" (OuterVolumeSpecName: "kube-api-access-r86fq") pod "33b819af-d0d3-40b2-9775-f88a358e9083" (UID: "33b819af-d0d3-40b2-9775-f88a358e9083"). InnerVolumeSpecName "kube-api-access-r86fq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:52:34 crc kubenswrapper[4945]: I0109 00:52:34.069029 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33b819af-d0d3-40b2-9775-f88a358e9083-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:34 crc kubenswrapper[4945]: I0109 00:52:34.069078 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:34 crc kubenswrapper[4945]: I0109 00:52:34.069090 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r86fq\" (UniqueName: \"kubernetes.io/projected/33b819af-d0d3-40b2-9775-f88a358e9083-kube-api-access-r86fq\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:34 crc kubenswrapper[4945]: I0109 00:52:34.069112 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6gft\" (UniqueName: \"kubernetes.io/projected/ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6-kube-api-access-r6gft\") on node \"crc\" DevicePath \"\"" Jan 09 00:52:34 crc kubenswrapper[4945]: I0109 00:52:34.506399 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4821-account-create-update-jrfjd" event={"ID":"33b819af-d0d3-40b2-9775-f88a358e9083","Type":"ContainerDied","Data":"1c447230262edbda5a83513fcd656688676fff1eac66cfc38ebf6f75bc8516e7"} Jan 09 00:52:34 crc kubenswrapper[4945]: I0109 00:52:34.506439 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c447230262edbda5a83513fcd656688676fff1eac66cfc38ebf6f75bc8516e7" Jan 09 00:52:34 crc kubenswrapper[4945]: I0109 00:52:34.506503 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4821-account-create-update-jrfjd" Jan 09 00:52:34 crc kubenswrapper[4945]: I0109 00:52:34.508494 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-728qx" event={"ID":"ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6","Type":"ContainerDied","Data":"03c70e4a87e6b711c4bf39d52f2d838aa02e7c673614c7ec7292a56876828209"} Jan 09 00:52:34 crc kubenswrapper[4945]: I0109 00:52:34.508527 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03c70e4a87e6b711c4bf39d52f2d838aa02e7c673614c7ec7292a56876828209" Jan 09 00:52:34 crc kubenswrapper[4945]: I0109 00:52:34.508570 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-728qx" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.506131 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vrmxl"] Jan 09 00:52:35 crc kubenswrapper[4945]: E0109 00:52:35.506920 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6" containerName="mariadb-database-create" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.506944 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6" containerName="mariadb-database-create" Jan 09 00:52:35 crc kubenswrapper[4945]: E0109 00:52:35.506972 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b819af-d0d3-40b2-9775-f88a358e9083" containerName="mariadb-account-create-update" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.506980 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b819af-d0d3-40b2-9775-f88a358e9083" containerName="mariadb-account-create-update" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.507210 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="33b819af-d0d3-40b2-9775-f88a358e9083" containerName="mariadb-account-create-update" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.507227 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6" containerName="mariadb-database-create" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.507961 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vrmxl" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.510593 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.511976 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.512328 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nf479" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.515490 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vrmxl"] Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.597252 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cljpx\" (UniqueName: \"kubernetes.io/projected/a8601523-2719-4c56-b8aa-8c61609e91f0-kube-api-access-cljpx\") pod \"neutron-db-sync-vrmxl\" (UID: \"a8601523-2719-4c56-b8aa-8c61609e91f0\") " pod="openstack/neutron-db-sync-vrmxl" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.597343 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8601523-2719-4c56-b8aa-8c61609e91f0-config\") pod \"neutron-db-sync-vrmxl\" (UID: \"a8601523-2719-4c56-b8aa-8c61609e91f0\") " pod="openstack/neutron-db-sync-vrmxl" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.597416 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8601523-2719-4c56-b8aa-8c61609e91f0-combined-ca-bundle\") pod \"neutron-db-sync-vrmxl\" (UID: \"a8601523-2719-4c56-b8aa-8c61609e91f0\") " pod="openstack/neutron-db-sync-vrmxl" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 
00:52:35.700791 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cljpx\" (UniqueName: \"kubernetes.io/projected/a8601523-2719-4c56-b8aa-8c61609e91f0-kube-api-access-cljpx\") pod \"neutron-db-sync-vrmxl\" (UID: \"a8601523-2719-4c56-b8aa-8c61609e91f0\") " pod="openstack/neutron-db-sync-vrmxl" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.700885 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8601523-2719-4c56-b8aa-8c61609e91f0-config\") pod \"neutron-db-sync-vrmxl\" (UID: \"a8601523-2719-4c56-b8aa-8c61609e91f0\") " pod="openstack/neutron-db-sync-vrmxl" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.700969 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8601523-2719-4c56-b8aa-8c61609e91f0-combined-ca-bundle\") pod \"neutron-db-sync-vrmxl\" (UID: \"a8601523-2719-4c56-b8aa-8c61609e91f0\") " pod="openstack/neutron-db-sync-vrmxl" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.708565 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8601523-2719-4c56-b8aa-8c61609e91f0-combined-ca-bundle\") pod \"neutron-db-sync-vrmxl\" (UID: \"a8601523-2719-4c56-b8aa-8c61609e91f0\") " pod="openstack/neutron-db-sync-vrmxl" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.709181 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8601523-2719-4c56-b8aa-8c61609e91f0-config\") pod \"neutron-db-sync-vrmxl\" (UID: \"a8601523-2719-4c56-b8aa-8c61609e91f0\") " pod="openstack/neutron-db-sync-vrmxl" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.721948 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cljpx\" (UniqueName: \"kubernetes.io/projected/a8601523-2719-4c56-b8aa-8c61609e91f0-kube-api-access-cljpx\") pod \"neutron-db-sync-vrmxl\" (UID: \"a8601523-2719-4c56-b8aa-8c61609e91f0\") " pod="openstack/neutron-db-sync-vrmxl" Jan 09 00:52:35 crc kubenswrapper[4945]: I0109 00:52:35.887156 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vrmxl"
Jan 09 00:52:36 crc kubenswrapper[4945]: I0109 00:52:36.515457 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vrmxl"]
Jan 09 00:52:36 crc kubenswrapper[4945]: I0109 00:52:36.527354 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vrmxl" event={"ID":"a8601523-2719-4c56-b8aa-8c61609e91f0","Type":"ContainerStarted","Data":"bc5a13e8a093f99e9c2064eda257d00420c6d2e73bc0e3ae5d1b6a60d62f5787"}
Jan 09 00:52:37 crc kubenswrapper[4945]: I0109 00:52:37.540114 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vrmxl" event={"ID":"a8601523-2719-4c56-b8aa-8c61609e91f0","Type":"ContainerStarted","Data":"9d92e41a0deac043add83594d3f3c29fcde9aac4f2081b0baa89f19a44518003"}
Jan 09 00:52:37 crc kubenswrapper[4945]: I0109 00:52:37.561323 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vrmxl" podStartSLOduration=2.5613019230000003 podStartE2EDuration="2.561301923s" podCreationTimestamp="2026-01-09 00:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:52:37.559881688 +0000 UTC m=+5827.871040674" watchObservedRunningTime="2026-01-09 00:52:37.561301923 +0000 UTC m=+5827.872460869"
Jan 09 00:52:40 crc kubenswrapper[4945]: I0109 00:52:40.566432 4945 generic.go:334] "Generic (PLEG): container finished" podID="a8601523-2719-4c56-b8aa-8c61609e91f0" containerID="9d92e41a0deac043add83594d3f3c29fcde9aac4f2081b0baa89f19a44518003" exitCode=0
Jan 09 00:52:40 crc kubenswrapper[4945]: I0109 00:52:40.566532 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vrmxl" event={"ID":"a8601523-2719-4c56-b8aa-8c61609e91f0","Type":"ContainerDied","Data":"9d92e41a0deac043add83594d3f3c29fcde9aac4f2081b0baa89f19a44518003"}
Jan 09 00:52:41 crc kubenswrapper[4945]: I0109 00:52:41.903004 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vrmxl"
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.015669 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8601523-2719-4c56-b8aa-8c61609e91f0-combined-ca-bundle\") pod \"a8601523-2719-4c56-b8aa-8c61609e91f0\" (UID: \"a8601523-2719-4c56-b8aa-8c61609e91f0\") "
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.016174 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cljpx\" (UniqueName: \"kubernetes.io/projected/a8601523-2719-4c56-b8aa-8c61609e91f0-kube-api-access-cljpx\") pod \"a8601523-2719-4c56-b8aa-8c61609e91f0\" (UID: \"a8601523-2719-4c56-b8aa-8c61609e91f0\") "
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.016283 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8601523-2719-4c56-b8aa-8c61609e91f0-config\") pod \"a8601523-2719-4c56-b8aa-8c61609e91f0\" (UID: \"a8601523-2719-4c56-b8aa-8c61609e91f0\") "
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.021219 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8601523-2719-4c56-b8aa-8c61609e91f0-kube-api-access-cljpx" (OuterVolumeSpecName: "kube-api-access-cljpx") pod "a8601523-2719-4c56-b8aa-8c61609e91f0" (UID: "a8601523-2719-4c56-b8aa-8c61609e91f0"). InnerVolumeSpecName "kube-api-access-cljpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.038848 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8601523-2719-4c56-b8aa-8c61609e91f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8601523-2719-4c56-b8aa-8c61609e91f0" (UID: "a8601523-2719-4c56-b8aa-8c61609e91f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.039981 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8601523-2719-4c56-b8aa-8c61609e91f0-config" (OuterVolumeSpecName: "config") pod "a8601523-2719-4c56-b8aa-8c61609e91f0" (UID: "a8601523-2719-4c56-b8aa-8c61609e91f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.118553 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8601523-2719-4c56-b8aa-8c61609e91f0-config\") on node \"crc\" DevicePath \"\""
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.118581 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8601523-2719-4c56-b8aa-8c61609e91f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.118590 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cljpx\" (UniqueName: \"kubernetes.io/projected/a8601523-2719-4c56-b8aa-8c61609e91f0-kube-api-access-cljpx\") on node \"crc\" DevicePath \"\""
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.589323 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vrmxl" event={"ID":"a8601523-2719-4c56-b8aa-8c61609e91f0","Type":"ContainerDied","Data":"bc5a13e8a093f99e9c2064eda257d00420c6d2e73bc0e3ae5d1b6a60d62f5787"}
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.589374 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc5a13e8a093f99e9c2064eda257d00420c6d2e73bc0e3ae5d1b6a60d62f5787"
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.589438 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vrmxl"
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.869658 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb884bf57-m68xt"]
Jan 09 00:52:42 crc kubenswrapper[4945]: E0109 00:52:42.870105 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8601523-2719-4c56-b8aa-8c61609e91f0" containerName="neutron-db-sync"
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.870124 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8601523-2719-4c56-b8aa-8c61609e91f0" containerName="neutron-db-sync"
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.870318 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8601523-2719-4c56-b8aa-8c61609e91f0" containerName="neutron-db-sync"
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.871305 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.887637 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb884bf57-m68xt"]
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.940549 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9b48855d5-7fhb5"]
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.942678 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hddk8\" (UniqueName: \"kubernetes.io/projected/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-kube-api-access-hddk8\") pod \"dnsmasq-dns-6cb884bf57-m68xt\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.942753 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb884bf57-m68xt\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.942782 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-dns-svc\") pod \"dnsmasq-dns-6cb884bf57-m68xt\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.942845 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-config\") pod \"dnsmasq-dns-6cb884bf57-m68xt\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.942881 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb884bf57-m68xt\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.945769 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9b48855d5-7fhb5"
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.950224 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.952182 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.955755 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nf479"
Jan 09 00:52:42 crc kubenswrapper[4945]: I0109 00:52:42.976805 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9b48855d5-7fhb5"]
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.043887 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0ba7a8cb-fec8-4b0d-baab-594bc5d674dd-httpd-config\") pod \"neutron-9b48855d5-7fhb5\" (UID: \"0ba7a8cb-fec8-4b0d-baab-594bc5d674dd\") " pod="openstack/neutron-9b48855d5-7fhb5"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.043946 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hddk8\" (UniqueName: \"kubernetes.io/projected/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-kube-api-access-hddk8\") pod \"dnsmasq-dns-6cb884bf57-m68xt\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.044004 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtzrl\" (UniqueName: \"kubernetes.io/projected/0ba7a8cb-fec8-4b0d-baab-594bc5d674dd-kube-api-access-qtzrl\") pod \"neutron-9b48855d5-7fhb5\" (UID: \"0ba7a8cb-fec8-4b0d-baab-594bc5d674dd\") " pod="openstack/neutron-9b48855d5-7fhb5"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.044026 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba7a8cb-fec8-4b0d-baab-594bc5d674dd-combined-ca-bundle\") pod \"neutron-9b48855d5-7fhb5\" (UID: \"0ba7a8cb-fec8-4b0d-baab-594bc5d674dd\") " pod="openstack/neutron-9b48855d5-7fhb5"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.044050 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb884bf57-m68xt\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.044070 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-dns-svc\") pod \"dnsmasq-dns-6cb884bf57-m68xt\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.044101 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ba7a8cb-fec8-4b0d-baab-594bc5d674dd-config\") pod \"neutron-9b48855d5-7fhb5\" (UID: \"0ba7a8cb-fec8-4b0d-baab-594bc5d674dd\") " pod="openstack/neutron-9b48855d5-7fhb5"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.044138 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-config\") pod \"dnsmasq-dns-6cb884bf57-m68xt\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.044169 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb884bf57-m68xt\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.045046 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb884bf57-m68xt\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.045850 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb884bf57-m68xt\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.046385 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-config\") pod \"dnsmasq-dns-6cb884bf57-m68xt\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.046725 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-dns-svc\") pod \"dnsmasq-dns-6cb884bf57-m68xt\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.065753 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hddk8\" (UniqueName: \"kubernetes.io/projected/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-kube-api-access-hddk8\") pod \"dnsmasq-dns-6cb884bf57-m68xt\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.145575 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtzrl\" (UniqueName: \"kubernetes.io/projected/0ba7a8cb-fec8-4b0d-baab-594bc5d674dd-kube-api-access-qtzrl\") pod \"neutron-9b48855d5-7fhb5\" (UID: \"0ba7a8cb-fec8-4b0d-baab-594bc5d674dd\") " pod="openstack/neutron-9b48855d5-7fhb5"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.145637 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba7a8cb-fec8-4b0d-baab-594bc5d674dd-combined-ca-bundle\") pod \"neutron-9b48855d5-7fhb5\" (UID: \"0ba7a8cb-fec8-4b0d-baab-594bc5d674dd\") " pod="openstack/neutron-9b48855d5-7fhb5"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.145717 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ba7a8cb-fec8-4b0d-baab-594bc5d674dd-config\") pod \"neutron-9b48855d5-7fhb5\" (UID: \"0ba7a8cb-fec8-4b0d-baab-594bc5d674dd\") " pod="openstack/neutron-9b48855d5-7fhb5"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.146438 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0ba7a8cb-fec8-4b0d-baab-594bc5d674dd-httpd-config\") pod \"neutron-9b48855d5-7fhb5\" (UID: \"0ba7a8cb-fec8-4b0d-baab-594bc5d674dd\") " pod="openstack/neutron-9b48855d5-7fhb5"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.152674 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba7a8cb-fec8-4b0d-baab-594bc5d674dd-combined-ca-bundle\") pod \"neutron-9b48855d5-7fhb5\" (UID: \"0ba7a8cb-fec8-4b0d-baab-594bc5d674dd\") " pod="openstack/neutron-9b48855d5-7fhb5"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.155769 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0ba7a8cb-fec8-4b0d-baab-594bc5d674dd-httpd-config\") pod \"neutron-9b48855d5-7fhb5\" (UID: \"0ba7a8cb-fec8-4b0d-baab-594bc5d674dd\") " pod="openstack/neutron-9b48855d5-7fhb5"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.156126 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0ba7a8cb-fec8-4b0d-baab-594bc5d674dd-config\") pod \"neutron-9b48855d5-7fhb5\" (UID: \"0ba7a8cb-fec8-4b0d-baab-594bc5d674dd\") " pod="openstack/neutron-9b48855d5-7fhb5"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.169936 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtzrl\" (UniqueName: \"kubernetes.io/projected/0ba7a8cb-fec8-4b0d-baab-594bc5d674dd-kube-api-access-qtzrl\") pod \"neutron-9b48855d5-7fhb5\" (UID: \"0ba7a8cb-fec8-4b0d-baab-594bc5d674dd\") " pod="openstack/neutron-9b48855d5-7fhb5"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.203703 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.263977 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9b48855d5-7fhb5"
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.883776 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb884bf57-m68xt"]
Jan 09 00:52:43 crc kubenswrapper[4945]: I0109 00:52:43.945089 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9b48855d5-7fhb5"]
Jan 09 00:52:44 crc kubenswrapper[4945]: I0109 00:52:44.000312 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb"
Jan 09 00:52:44 crc kubenswrapper[4945]: E0109 00:52:44.000544 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:52:44 crc kubenswrapper[4945]: I0109 00:52:44.611714 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b48855d5-7fhb5" event={"ID":"0ba7a8cb-fec8-4b0d-baab-594bc5d674dd","Type":"ContainerStarted","Data":"2a263acde0629ca6811e61d41f07324e9eea61609db4b8caefc2c0c336be100b"}
Jan 09 00:52:44 crc kubenswrapper[4945]: I0109 00:52:44.612188 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b48855d5-7fhb5" event={"ID":"0ba7a8cb-fec8-4b0d-baab-594bc5d674dd","Type":"ContainerStarted","Data":"5e84a1c6e8fb1960bb8ffebae3b37bd488556762f70058823d75020e41735de0"}
Jan 09 00:52:44 crc kubenswrapper[4945]: I0109 00:52:44.612212 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9b48855d5-7fhb5" event={"ID":"0ba7a8cb-fec8-4b0d-baab-594bc5d674dd","Type":"ContainerStarted","Data":"9251b0980f725421ef9b8550c37d49f2bc9051b39d6af52a1158839e8040908f"}
Jan 09 00:52:44 crc kubenswrapper[4945]: I0109 00:52:44.612247 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-9b48855d5-7fhb5"
Jan 09 00:52:44 crc kubenswrapper[4945]: I0109 00:52:44.614812 4945 generic.go:334] "Generic (PLEG): container finished" podID="8f32c1a7-b9bf-44ad-8a4a-a882f105b638" containerID="e744d3468035c5aea92db4594223484f3a895027641640c5464ae2348d457def" exitCode=0
Jan 09 00:52:44 crc kubenswrapper[4945]: I0109 00:52:44.614867 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb884bf57-m68xt" event={"ID":"8f32c1a7-b9bf-44ad-8a4a-a882f105b638","Type":"ContainerDied","Data":"e744d3468035c5aea92db4594223484f3a895027641640c5464ae2348d457def"}
Jan 09 00:52:44 crc kubenswrapper[4945]: I0109 00:52:44.614921 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb884bf57-m68xt" event={"ID":"8f32c1a7-b9bf-44ad-8a4a-a882f105b638","Type":"ContainerStarted","Data":"c82d227f33e03ffd02f6b8a85c3859ac0b72894660cc0b06dbabd84c73bf63e4"}
Jan 09 00:52:44 crc kubenswrapper[4945]: I0109 00:52:44.643106 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-9b48855d5-7fhb5" podStartSLOduration=2.643087902 podStartE2EDuration="2.643087902s" podCreationTimestamp="2026-01-09 00:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:52:44.641801691 +0000 UTC m=+5834.952960647" watchObservedRunningTime="2026-01-09 00:52:44.643087902 +0000 UTC m=+5834.954246848"
Jan 09 00:52:45 crc kubenswrapper[4945]: I0109 00:52:45.626105 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb884bf57-m68xt" event={"ID":"8f32c1a7-b9bf-44ad-8a4a-a882f105b638","Type":"ContainerStarted","Data":"f6122dc6d483ef326d4eda4b50dcff4e35b61065e44057638acce42df270bd3c"}
Jan 09 00:52:45 crc kubenswrapper[4945]: I0109 00:52:45.662793 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb884bf57-m68xt" podStartSLOduration=3.662766796 podStartE2EDuration="3.662766796s" podCreationTimestamp="2026-01-09 00:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:52:45.652013022 +0000 UTC m=+5835.963171968" watchObservedRunningTime="2026-01-09 00:52:45.662766796 +0000 UTC m=+5835.973925742"
Jan 09 00:52:46 crc kubenswrapper[4945]: I0109 00:52:46.640092 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:53 crc kubenswrapper[4945]: I0109 00:52:53.205188 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb884bf57-m68xt"
Jan 09 00:52:53 crc kubenswrapper[4945]: I0109 00:52:53.286858 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66bc9f9f69-ncgb6"]
Jan 09 00:52:53 crc kubenswrapper[4945]: I0109 00:52:53.287622 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" podUID="f94d82a2-e911-43e8-af40-5154ede205cd" containerName="dnsmasq-dns" containerID="cri-o://6ed8d4e33cf4e7819c04ba5b363e15e9cf65e879d117c6d92d5bfe9add59e3a3" gracePeriod=10
Jan 09 00:52:53 crc kubenswrapper[4945]: I0109 00:52:53.735091 4945 generic.go:334] "Generic (PLEG): container finished" podID="f94d82a2-e911-43e8-af40-5154ede205cd" containerID="6ed8d4e33cf4e7819c04ba5b363e15e9cf65e879d117c6d92d5bfe9add59e3a3" exitCode=0
Jan 09 00:52:53 crc kubenswrapper[4945]: I0109 00:52:53.735420 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" event={"ID":"f94d82a2-e911-43e8-af40-5154ede205cd","Type":"ContainerDied","Data":"6ed8d4e33cf4e7819c04ba5b363e15e9cf65e879d117c6d92d5bfe9add59e3a3"}
Jan 09 00:52:53 crc kubenswrapper[4945]: I0109 00:52:53.901455 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6"
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.039574 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-dns-svc\") pod \"f94d82a2-e911-43e8-af40-5154ede205cd\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") "
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.039867 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-ovsdbserver-nb\") pod \"f94d82a2-e911-43e8-af40-5154ede205cd\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") "
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.039930 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-ovsdbserver-sb\") pod \"f94d82a2-e911-43e8-af40-5154ede205cd\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") "
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.039967 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-config\") pod \"f94d82a2-e911-43e8-af40-5154ede205cd\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") "
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.040051 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82vsx\" (UniqueName: \"kubernetes.io/projected/f94d82a2-e911-43e8-af40-5154ede205cd-kube-api-access-82vsx\") pod \"f94d82a2-e911-43e8-af40-5154ede205cd\" (UID: \"f94d82a2-e911-43e8-af40-5154ede205cd\") "
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.044977 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f94d82a2-e911-43e8-af40-5154ede205cd-kube-api-access-82vsx" (OuterVolumeSpecName: "kube-api-access-82vsx") pod "f94d82a2-e911-43e8-af40-5154ede205cd" (UID: "f94d82a2-e911-43e8-af40-5154ede205cd"). InnerVolumeSpecName "kube-api-access-82vsx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.083067 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f94d82a2-e911-43e8-af40-5154ede205cd" (UID: "f94d82a2-e911-43e8-af40-5154ede205cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.094569 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-config" (OuterVolumeSpecName: "config") pod "f94d82a2-e911-43e8-af40-5154ede205cd" (UID: "f94d82a2-e911-43e8-af40-5154ede205cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.095523 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f94d82a2-e911-43e8-af40-5154ede205cd" (UID: "f94d82a2-e911-43e8-af40-5154ede205cd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.096204 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f94d82a2-e911-43e8-af40-5154ede205cd" (UID: "f94d82a2-e911-43e8-af40-5154ede205cd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.142063 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.142099 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.142111 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-config\") on node \"crc\" DevicePath \"\""
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.142122 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82vsx\" (UniqueName: \"kubernetes.io/projected/f94d82a2-e911-43e8-af40-5154ede205cd-kube-api-access-82vsx\") on node \"crc\" DevicePath \"\""
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.142130 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f94d82a2-e911-43e8-af40-5154ede205cd-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.755088 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6" event={"ID":"f94d82a2-e911-43e8-af40-5154ede205cd","Type":"ContainerDied","Data":"26f64dd4cee1c0675301804bfc2e2fc51ae80a03a2ff725c13e3540fedd714a4"}
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.755145 4945 scope.go:117] "RemoveContainer" containerID="6ed8d4e33cf4e7819c04ba5b363e15e9cf65e879d117c6d92d5bfe9add59e3a3"
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.755324 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66bc9f9f69-ncgb6"
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.792385 4945 scope.go:117] "RemoveContainer" containerID="68821b2587507884a1d274f75d926b1ec3c93bf115479e9663c88cfcd00e3830"
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.794554 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66bc9f9f69-ncgb6"]
Jan 09 00:52:54 crc kubenswrapper[4945]: I0109 00:52:54.801874 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66bc9f9f69-ncgb6"]
Jan 09 00:52:55 crc kubenswrapper[4945]: I0109 00:52:55.000972 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb"
Jan 09 00:52:55 crc kubenswrapper[4945]: E0109 00:52:55.001239 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:52:56 crc kubenswrapper[4945]: I0109 00:52:56.011578 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f94d82a2-e911-43e8-af40-5154ede205cd" path="/var/lib/kubelet/pods/f94d82a2-e911-43e8-af40-5154ede205cd/volumes"
Jan 09 00:53:09 crc kubenswrapper[4945]: I0109 00:53:09.000362 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb"
Jan 09 00:53:09 crc kubenswrapper[4945]: E0109 00:53:09.001285 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:53:13 crc kubenswrapper[4945]: I0109 00:53:13.273761 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-9b48855d5-7fhb5"
Jan 09 00:53:16 crc kubenswrapper[4945]: I0109 00:53:16.721744 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-khg7v"]
Jan 09 00:53:16 crc kubenswrapper[4945]: E0109 00:53:16.722392 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f94d82a2-e911-43e8-af40-5154ede205cd" containerName="init"
Jan 09 00:53:16 crc kubenswrapper[4945]: I0109 00:53:16.722409 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f94d82a2-e911-43e8-af40-5154ede205cd" containerName="init"
Jan 09 00:53:16 crc kubenswrapper[4945]: E0109 00:53:16.722436 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f94d82a2-e911-43e8-af40-5154ede205cd" containerName="dnsmasq-dns"
Jan 09 00:53:16 crc kubenswrapper[4945]: I0109 00:53:16.722443 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f94d82a2-e911-43e8-af40-5154ede205cd" containerName="dnsmasq-dns"
Jan 09 00:53:16 crc kubenswrapper[4945]: I0109 00:53:16.722603 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f94d82a2-e911-43e8-af40-5154ede205cd" containerName="dnsmasq-dns"
Jan 09 00:53:16 crc kubenswrapper[4945]: I0109 00:53:16.723919 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khg7v"
Jan 09 00:53:16 crc kubenswrapper[4945]: I0109 00:53:16.735473 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khg7v"]
Jan 09 00:53:16 crc kubenswrapper[4945]: I0109 00:53:16.867301 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7745e2-70a1-493d-88ba-62e84d2c5675-catalog-content\") pod \"community-operators-khg7v\" (UID: \"ea7745e2-70a1-493d-88ba-62e84d2c5675\") " pod="openshift-marketplace/community-operators-khg7v"
Jan 09 00:53:16 crc kubenswrapper[4945]: I0109 00:53:16.867442 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7745e2-70a1-493d-88ba-62e84d2c5675-utilities\") pod \"community-operators-khg7v\" (UID: \"ea7745e2-70a1-493d-88ba-62e84d2c5675\") " pod="openshift-marketplace/community-operators-khg7v"
Jan 09 00:53:16 crc kubenswrapper[4945]: I0109 00:53:16.867514 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8szrh\" (UniqueName: \"kubernetes.io/projected/ea7745e2-70a1-493d-88ba-62e84d2c5675-kube-api-access-8szrh\") pod \"community-operators-khg7v\" (UID: \"ea7745e2-70a1-493d-88ba-62e84d2c5675\") " pod="openshift-marketplace/community-operators-khg7v"
Jan 09 00:53:16 crc kubenswrapper[4945]: I0109 00:53:16.969572 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8szrh\" (UniqueName: \"kubernetes.io/projected/ea7745e2-70a1-493d-88ba-62e84d2c5675-kube-api-access-8szrh\") pod \"community-operators-khg7v\" (UID: \"ea7745e2-70a1-493d-88ba-62e84d2c5675\") " pod="openshift-marketplace/community-operators-khg7v"
Jan 09 00:53:16 crc kubenswrapper[4945]: I0109 00:53:16.969681 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7745e2-70a1-493d-88ba-62e84d2c5675-catalog-content\") pod \"community-operators-khg7v\" (UID: \"ea7745e2-70a1-493d-88ba-62e84d2c5675\") " pod="openshift-marketplace/community-operators-khg7v"
Jan 09 00:53:16 crc kubenswrapper[4945]: I0109 00:53:16.969774 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7745e2-70a1-493d-88ba-62e84d2c5675-utilities\") pod \"community-operators-khg7v\" (UID: \"ea7745e2-70a1-493d-88ba-62e84d2c5675\") " pod="openshift-marketplace/community-operators-khg7v"
Jan 09 00:53:16 crc kubenswrapper[4945]: I0109 00:53:16.970665 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7745e2-70a1-493d-88ba-62e84d2c5675-utilities\") pod \"community-operators-khg7v\" (UID: \"ea7745e2-70a1-493d-88ba-62e84d2c5675\") " pod="openshift-marketplace/community-operators-khg7v"
Jan 09 00:53:16 crc kubenswrapper[4945]: I0109 00:53:16.970835 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7745e2-70a1-493d-88ba-62e84d2c5675-catalog-content\") pod \"community-operators-khg7v\" (UID: \"ea7745e2-70a1-493d-88ba-62e84d2c5675\") " pod="openshift-marketplace/community-operators-khg7v"
Jan 09 00:53:17 crc kubenswrapper[4945]: I0109 00:53:17.004681 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8szrh\" (UniqueName: \"kubernetes.io/projected/ea7745e2-70a1-493d-88ba-62e84d2c5675-kube-api-access-8szrh\") pod \"community-operators-khg7v\" (UID: \"ea7745e2-70a1-493d-88ba-62e84d2c5675\") " pod="openshift-marketplace/community-operators-khg7v"
Jan 09 00:53:17 crc kubenswrapper[4945]: I0109 00:53:17.050456 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khg7v"
Jan 09 00:53:17 crc kubenswrapper[4945]: I0109 00:53:17.533099 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khg7v"]
Jan 09 00:53:17 crc kubenswrapper[4945]: I0109 00:53:17.993369 4945 generic.go:334] "Generic (PLEG): container finished" podID="ea7745e2-70a1-493d-88ba-62e84d2c5675" containerID="0ebc2045cf9e61afbc84cfbda428fc3b5ead9e557ca6caaa7c1744fc4e3ae9ec" exitCode=0
Jan 09 00:53:17 crc kubenswrapper[4945]: I0109 00:53:17.993427 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khg7v" event={"ID":"ea7745e2-70a1-493d-88ba-62e84d2c5675","Type":"ContainerDied","Data":"0ebc2045cf9e61afbc84cfbda428fc3b5ead9e557ca6caaa7c1744fc4e3ae9ec"}
Jan 09 00:53:17 crc kubenswrapper[4945]: I0109 00:53:17.993467 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khg7v" event={"ID":"ea7745e2-70a1-493d-88ba-62e84d2c5675","Type":"ContainerStarted","Data":"d514ebcb6114bbb5b601b13c082f58f0514d4c8ab208eae2477749bb66045612"}
Jan 09 00:53:19 crc kubenswrapper[4945]: I0109 00:53:19.002272 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khg7v" event={"ID":"ea7745e2-70a1-493d-88ba-62e84d2c5675","Type":"ContainerStarted","Data":"2edb7dd17ec4fb6d700535f89b1f45ccada92ebdf7e89df342cfbcbb2ba37495"}
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.013526 4945 generic.go:334] "Generic (PLEG): container finished" podID="ea7745e2-70a1-493d-88ba-62e84d2c5675" containerID="2edb7dd17ec4fb6d700535f89b1f45ccada92ebdf7e89df342cfbcbb2ba37495" exitCode=0
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.013636 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khg7v" event={"ID":"ea7745e2-70a1-493d-88ba-62e84d2c5675","Type":"ContainerDied","Data":"2edb7dd17ec4fb6d700535f89b1f45ccada92ebdf7e89df342cfbcbb2ba37495"}
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.478868 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-bd87j"]
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.480269 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bd87j"
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.491713 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-bd87j"]
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.528178 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rltnt\" (UniqueName: \"kubernetes.io/projected/9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30-kube-api-access-rltnt\") pod \"glance-db-create-bd87j\" (UID: \"9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30\") " pod="openstack/glance-db-create-bd87j"
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.528551 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30-operator-scripts\") pod \"glance-db-create-bd87j\" (UID: \"9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30\") " pod="openstack/glance-db-create-bd87j"
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.577563 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0ba5-account-create-update-vsb75"]
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.579317 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0ba5-account-create-update-vsb75"
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.581102 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.586684 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0ba5-account-create-update-vsb75"]
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.630272 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3be6f33d-bbf7-4211-bb12-47972c8b03b3-operator-scripts\") pod \"glance-0ba5-account-create-update-vsb75\" (UID: \"3be6f33d-bbf7-4211-bb12-47972c8b03b3\") " pod="openstack/glance-0ba5-account-create-update-vsb75"
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.630373 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chx9k\" (UniqueName: \"kubernetes.io/projected/3be6f33d-bbf7-4211-bb12-47972c8b03b3-kube-api-access-chx9k\") pod \"glance-0ba5-account-create-update-vsb75\" (UID: \"3be6f33d-bbf7-4211-bb12-47972c8b03b3\") " pod="openstack/glance-0ba5-account-create-update-vsb75"
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.630461 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30-operator-scripts\") pod \"glance-db-create-bd87j\" (UID: \"9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30\") " pod="openstack/glance-db-create-bd87j"
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.630651 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rltnt\" (UniqueName: \"kubernetes.io/projected/9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30-kube-api-access-rltnt\") pod \"glance-db-create-bd87j\" (UID: \"9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30\") " pod="openstack/glance-db-create-bd87j"
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.631569 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30-operator-scripts\") pod \"glance-db-create-bd87j\" (UID: \"9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30\") " pod="openstack/glance-db-create-bd87j"
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.654178 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rltnt\" (UniqueName: \"kubernetes.io/projected/9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30-kube-api-access-rltnt\") pod \"glance-db-create-bd87j\" (UID: \"9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30\") " pod="openstack/glance-db-create-bd87j"
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.732064 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chx9k\" (UniqueName: \"kubernetes.io/projected/3be6f33d-bbf7-4211-bb12-47972c8b03b3-kube-api-access-chx9k\") pod \"glance-0ba5-account-create-update-vsb75\" (UID: \"3be6f33d-bbf7-4211-bb12-47972c8b03b3\") " pod="openstack/glance-0ba5-account-create-update-vsb75"
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.732180 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3be6f33d-bbf7-4211-bb12-47972c8b03b3-operator-scripts\") pod \"glance-0ba5-account-create-update-vsb75\" (UID: \"3be6f33d-bbf7-4211-bb12-47972c8b03b3\") " pod="openstack/glance-0ba5-account-create-update-vsb75"
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.733272 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3be6f33d-bbf7-4211-bb12-47972c8b03b3-operator-scripts\") pod \"glance-0ba5-account-create-update-vsb75\" (UID: \"3be6f33d-bbf7-4211-bb12-47972c8b03b3\") " pod="openstack/glance-0ba5-account-create-update-vsb75"
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.751108 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chx9k\" (UniqueName: \"kubernetes.io/projected/3be6f33d-bbf7-4211-bb12-47972c8b03b3-kube-api-access-chx9k\") pod \"glance-0ba5-account-create-update-vsb75\" (UID: \"3be6f33d-bbf7-4211-bb12-47972c8b03b3\") " pod="openstack/glance-0ba5-account-create-update-vsb75"
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.797780 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bd87j"
Jan 09 00:53:20 crc kubenswrapper[4945]: I0109 00:53:20.897004 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0ba5-account-create-update-vsb75"
Jan 09 00:53:21 crc kubenswrapper[4945]: I0109 00:53:21.029462 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khg7v" event={"ID":"ea7745e2-70a1-493d-88ba-62e84d2c5675","Type":"ContainerStarted","Data":"cafb20538cddae04d90a76d3df111677b68b93a947de5b5c5591056e5bcaeb37"}
Jan 09 00:53:21 crc kubenswrapper[4945]: I0109 00:53:21.053629 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-khg7v" podStartSLOduration=2.31386639 podStartE2EDuration="5.053610008s" podCreationTimestamp="2026-01-09 00:53:16 +0000 UTC" firstStartedPulling="2026-01-09 00:53:17.997168312 +0000 UTC m=+5868.308327268" lastFinishedPulling="2026-01-09 00:53:20.73691194 +0000 UTC m=+5871.048070886" observedRunningTime="2026-01-09 00:53:21.0475879 +0000 UTC m=+5871.358746846" watchObservedRunningTime="2026-01-09 00:53:21.053610008 +0000 UTC m=+5871.364768954"
Jan 09 00:53:21 crc kubenswrapper[4945]: I0109 00:53:21.266058 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-bd87j"]
Jan 09 00:53:21 crc kubenswrapper[4945]: W0109 00:53:21.269296 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a0f38d2_f7c4_4be9_a7cf_3cc1b3f97a30.slice/crio-75e75f0bddc99d68976f90f96ade8f71058c840a18742ceffd126e0d05a5d668 WatchSource:0}: Error finding container 75e75f0bddc99d68976f90f96ade8f71058c840a18742ceffd126e0d05a5d668: Status 404 returned error can't find the container with id 75e75f0bddc99d68976f90f96ade8f71058c840a18742ceffd126e0d05a5d668
Jan 09 00:53:21 crc kubenswrapper[4945]: I0109 00:53:21.372809 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0ba5-account-create-update-vsb75"]
Jan 09 00:53:21 crc kubenswrapper[4945]: W0109 00:53:21.385320 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3be6f33d_bbf7_4211_bb12_47972c8b03b3.slice/crio-50946530e2b14c658cc030e47491d4a5b3ee7d0f4d508e3fae204c63b0c71103 WatchSource:0}: Error finding container 50946530e2b14c658cc030e47491d4a5b3ee7d0f4d508e3fae204c63b0c71103: Status 404 returned error can't find the container with id 50946530e2b14c658cc030e47491d4a5b3ee7d0f4d508e3fae204c63b0c71103
Jan 09 00:53:22 crc kubenswrapper[4945]: I0109 00:53:22.061328 4945 generic.go:334] "Generic (PLEG): container finished" podID="9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30" containerID="e5b420d4242cf12ffea08b34758342b1335f8b9056370de4c5cfd1decea5a462" exitCode=0
Jan 09 00:53:22 crc kubenswrapper[4945]: I0109 00:53:22.061803 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bd87j" event={"ID":"9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30","Type":"ContainerDied","Data":"e5b420d4242cf12ffea08b34758342b1335f8b9056370de4c5cfd1decea5a462"}
Jan 09 00:53:22 crc kubenswrapper[4945]: I0109 00:53:22.061849 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bd87j" event={"ID":"9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30","Type":"ContainerStarted","Data":"75e75f0bddc99d68976f90f96ade8f71058c840a18742ceffd126e0d05a5d668"}
Jan 09 00:53:22 crc kubenswrapper[4945]: I0109 00:53:22.065740 4945 generic.go:334] "Generic (PLEG): container finished" podID="3be6f33d-bbf7-4211-bb12-47972c8b03b3" containerID="a75b276aeabb8e5bcf22cb4cf573b2a43289614e80ba9b39f68b04fa2801fabe" exitCode=0
Jan 09 00:53:22 crc kubenswrapper[4945]: I0109 00:53:22.066701 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0ba5-account-create-update-vsb75" event={"ID":"3be6f33d-bbf7-4211-bb12-47972c8b03b3","Type":"ContainerDied","Data":"a75b276aeabb8e5bcf22cb4cf573b2a43289614e80ba9b39f68b04fa2801fabe"}
Jan 09 00:53:22 crc kubenswrapper[4945]: I0109 00:53:22.066735 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0ba5-account-create-update-vsb75" event={"ID":"3be6f33d-bbf7-4211-bb12-47972c8b03b3","Type":"ContainerStarted","Data":"50946530e2b14c658cc030e47491d4a5b3ee7d0f4d508e3fae204c63b0c71103"}
Jan 09 00:53:23 crc kubenswrapper[4945]: I0109 00:53:23.000309 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb"
Jan 09 00:53:23 crc kubenswrapper[4945]: E0109 00:53:23.000568 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:53:23 crc kubenswrapper[4945]: I0109 00:53:23.541257 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bd87j"
Jan 09 00:53:23 crc kubenswrapper[4945]: I0109 00:53:23.548247 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0ba5-account-create-update-vsb75"
Jan 09 00:53:23 crc kubenswrapper[4945]: I0109 00:53:23.588461 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3be6f33d-bbf7-4211-bb12-47972c8b03b3-operator-scripts\") pod \"3be6f33d-bbf7-4211-bb12-47972c8b03b3\" (UID: \"3be6f33d-bbf7-4211-bb12-47972c8b03b3\") "
Jan 09 00:53:23 crc kubenswrapper[4945]: I0109 00:53:23.588676 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30-operator-scripts\") pod \"9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30\" (UID: \"9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30\") "
Jan 09 00:53:23 crc kubenswrapper[4945]: I0109 00:53:23.588764 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chx9k\" (UniqueName: \"kubernetes.io/projected/3be6f33d-bbf7-4211-bb12-47972c8b03b3-kube-api-access-chx9k\") pod \"3be6f33d-bbf7-4211-bb12-47972c8b03b3\" (UID: \"3be6f33d-bbf7-4211-bb12-47972c8b03b3\") "
Jan 09 00:53:23 crc kubenswrapper[4945]: I0109 00:53:23.588802 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rltnt\" (UniqueName: \"kubernetes.io/projected/9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30-kube-api-access-rltnt\") pod \"9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30\" (UID: \"9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30\") "
Jan 09 00:53:23 crc kubenswrapper[4945]: I0109 00:53:23.589170 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30" (UID: "9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:53:23 crc kubenswrapper[4945]: I0109 00:53:23.589257 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3be6f33d-bbf7-4211-bb12-47972c8b03b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3be6f33d-bbf7-4211-bb12-47972c8b03b3" (UID: "3be6f33d-bbf7-4211-bb12-47972c8b03b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 00:53:23 crc kubenswrapper[4945]: I0109 00:53:23.594408 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30-kube-api-access-rltnt" (OuterVolumeSpecName: "kube-api-access-rltnt") pod "9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30" (UID: "9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30"). InnerVolumeSpecName "kube-api-access-rltnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:53:23 crc kubenswrapper[4945]: I0109 00:53:23.595150 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3be6f33d-bbf7-4211-bb12-47972c8b03b3-kube-api-access-chx9k" (OuterVolumeSpecName: "kube-api-access-chx9k") pod "3be6f33d-bbf7-4211-bb12-47972c8b03b3" (UID: "3be6f33d-bbf7-4211-bb12-47972c8b03b3"). InnerVolumeSpecName "kube-api-access-chx9k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 00:53:23 crc kubenswrapper[4945]: I0109 00:53:23.690655 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 00:53:23 crc kubenswrapper[4945]: I0109 00:53:23.690693 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chx9k\" (UniqueName: \"kubernetes.io/projected/3be6f33d-bbf7-4211-bb12-47972c8b03b3-kube-api-access-chx9k\") on node \"crc\" DevicePath \"\""
Jan 09 00:53:23 crc kubenswrapper[4945]: I0109 00:53:23.690726 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rltnt\" (UniqueName: \"kubernetes.io/projected/9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30-kube-api-access-rltnt\") on node \"crc\" DevicePath \"\""
Jan 09 00:53:23 crc kubenswrapper[4945]: I0109 00:53:23.690734 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3be6f33d-bbf7-4211-bb12-47972c8b03b3-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 09 00:53:24 crc kubenswrapper[4945]: I0109 00:53:24.083844 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bd87j"
Jan 09 00:53:24 crc kubenswrapper[4945]: I0109 00:53:24.083853 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bd87j" event={"ID":"9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30","Type":"ContainerDied","Data":"75e75f0bddc99d68976f90f96ade8f71058c840a18742ceffd126e0d05a5d668"}
Jan 09 00:53:24 crc kubenswrapper[4945]: I0109 00:53:24.084328 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75e75f0bddc99d68976f90f96ade8f71058c840a18742ceffd126e0d05a5d668"
Jan 09 00:53:24 crc kubenswrapper[4945]: I0109 00:53:24.085311 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0ba5-account-create-update-vsb75" event={"ID":"3be6f33d-bbf7-4211-bb12-47972c8b03b3","Type":"ContainerDied","Data":"50946530e2b14c658cc030e47491d4a5b3ee7d0f4d508e3fae204c63b0c71103"}
Jan 09 00:53:24 crc kubenswrapper[4945]: I0109 00:53:24.085350 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50946530e2b14c658cc030e47491d4a5b3ee7d0f4d508e3fae204c63b0c71103"
Jan 09 00:53:24 crc kubenswrapper[4945]: I0109 00:53:24.085388 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0ba5-account-create-update-vsb75"
Jan 09 00:53:25 crc kubenswrapper[4945]: I0109 00:53:25.808346 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-4jxxq"]
Jan 09 00:53:25 crc kubenswrapper[4945]: E0109 00:53:25.808742 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3be6f33d-bbf7-4211-bb12-47972c8b03b3" containerName="mariadb-account-create-update"
Jan 09 00:53:25 crc kubenswrapper[4945]: I0109 00:53:25.808758 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3be6f33d-bbf7-4211-bb12-47972c8b03b3" containerName="mariadb-account-create-update"
Jan 09 00:53:25 crc kubenswrapper[4945]: E0109 00:53:25.808787 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30" containerName="mariadb-database-create"
Jan 09 00:53:25 crc kubenswrapper[4945]: I0109 00:53:25.808793 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30" containerName="mariadb-database-create"
Jan 09 00:53:25 crc kubenswrapper[4945]: I0109 00:53:25.809836 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="3be6f33d-bbf7-4211-bb12-47972c8b03b3" containerName="mariadb-account-create-update"
Jan 09 00:53:25 crc kubenswrapper[4945]: I0109 00:53:25.809894 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30" containerName="mariadb-database-create"
Jan 09 00:53:25 crc kubenswrapper[4945]: I0109 00:53:25.810572 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4jxxq"
Jan 09 00:53:25 crc kubenswrapper[4945]: I0109 00:53:25.813822 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vpr6w"
Jan 09 00:53:25 crc kubenswrapper[4945]: I0109 00:53:25.814091 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 09 00:53:25 crc kubenswrapper[4945]: I0109 00:53:25.828906 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4jxxq"]
Jan 09 00:53:25 crc kubenswrapper[4945]: I0109 00:53:25.928240 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-db-sync-config-data\") pod \"glance-db-sync-4jxxq\" (UID: \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\") " pod="openstack/glance-db-sync-4jxxq"
Jan 09 00:53:25 crc kubenswrapper[4945]: I0109 00:53:25.928391 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgtxd\" (UniqueName: \"kubernetes.io/projected/a6d8258a-52f0-455d-bdf8-2c039786ca6d-kube-api-access-vgtxd\") pod \"glance-db-sync-4jxxq\" (UID: \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\") " pod="openstack/glance-db-sync-4jxxq"
Jan 09 00:53:25 crc kubenswrapper[4945]: I0109 00:53:25.928692 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-config-data\") pod \"glance-db-sync-4jxxq\" (UID: \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\") " pod="openstack/glance-db-sync-4jxxq"
Jan 09 00:53:25 crc kubenswrapper[4945]: I0109 00:53:25.928938 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-combined-ca-bundle\") pod \"glance-db-sync-4jxxq\" (UID: \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\") " pod="openstack/glance-db-sync-4jxxq"
Jan 09 00:53:26 crc kubenswrapper[4945]: I0109 00:53:26.030732 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-config-data\") pod \"glance-db-sync-4jxxq\" (UID: \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\") " pod="openstack/glance-db-sync-4jxxq"
Jan 09 00:53:26 crc kubenswrapper[4945]: I0109 00:53:26.031092 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-combined-ca-bundle\") pod \"glance-db-sync-4jxxq\" (UID: \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\") " pod="openstack/glance-db-sync-4jxxq"
Jan 09 00:53:26 crc kubenswrapper[4945]: I0109 00:53:26.031184 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-db-sync-config-data\") pod \"glance-db-sync-4jxxq\" (UID: \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\") " pod="openstack/glance-db-sync-4jxxq"
Jan 09 00:53:26 crc kubenswrapper[4945]: I0109 00:53:26.031204 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgtxd\" (UniqueName: \"kubernetes.io/projected/a6d8258a-52f0-455d-bdf8-2c039786ca6d-kube-api-access-vgtxd\") pod \"glance-db-sync-4jxxq\" (UID: \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\") " pod="openstack/glance-db-sync-4jxxq"
Jan 09 00:53:26 crc kubenswrapper[4945]: I0109 00:53:26.037192 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-config-data\") pod \"glance-db-sync-4jxxq\" (UID: \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\") " pod="openstack/glance-db-sync-4jxxq"
Jan 09 00:53:26 crc kubenswrapper[4945]: I0109 00:53:26.053159 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-db-sync-config-data\") pod \"glance-db-sync-4jxxq\" (UID: \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\") " pod="openstack/glance-db-sync-4jxxq"
Jan 09 00:53:26 crc kubenswrapper[4945]: I0109 00:53:26.053397 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-combined-ca-bundle\") pod \"glance-db-sync-4jxxq\" (UID: \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\") " pod="openstack/glance-db-sync-4jxxq"
Jan 09 00:53:26 crc kubenswrapper[4945]: I0109 00:53:26.055726 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgtxd\" (UniqueName: \"kubernetes.io/projected/a6d8258a-52f0-455d-bdf8-2c039786ca6d-kube-api-access-vgtxd\") pod \"glance-db-sync-4jxxq\" (UID: \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\") " pod="openstack/glance-db-sync-4jxxq"
Jan 09 00:53:26 crc kubenswrapper[4945]: I0109 00:53:26.131358 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4jxxq"
Jan 09 00:53:26 crc kubenswrapper[4945]: I0109 00:53:26.661889 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4jxxq"]
Jan 09 00:53:26 crc kubenswrapper[4945]: W0109 00:53:26.666688 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6d8258a_52f0_455d_bdf8_2c039786ca6d.slice/crio-f87a0da2d52e90cb30bc2da1b47d637b36b238a69a87b10238632d5347b662ff WatchSource:0}: Error finding container f87a0da2d52e90cb30bc2da1b47d637b36b238a69a87b10238632d5347b662ff: Status 404 returned error can't find the container with id f87a0da2d52e90cb30bc2da1b47d637b36b238a69a87b10238632d5347b662ff
Jan 09 00:53:27 crc kubenswrapper[4945]: I0109 00:53:27.051146 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-khg7v"
Jan 09 00:53:27 crc kubenswrapper[4945]: I0109 00:53:27.051205 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-khg7v"
Jan 09 00:53:27 crc kubenswrapper[4945]: I0109 00:53:27.096938 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-khg7v"
Jan 09 00:53:27 crc kubenswrapper[4945]: I0109 00:53:27.121789 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4jxxq" event={"ID":"a6d8258a-52f0-455d-bdf8-2c039786ca6d","Type":"ContainerStarted","Data":"f87a0da2d52e90cb30bc2da1b47d637b36b238a69a87b10238632d5347b662ff"}
Jan 09 00:53:27 crc kubenswrapper[4945]: I0109 00:53:27.163764 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-khg7v"
Jan 09 00:53:27 crc kubenswrapper[4945]: I0109 00:53:27.330219 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khg7v"]
Jan 09 00:53:28 crc kubenswrapper[4945]: I0109 00:53:28.133467 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4jxxq" event={"ID":"a6d8258a-52f0-455d-bdf8-2c039786ca6d","Type":"ContainerStarted","Data":"061f8769d87c721138e1e67fde4108c40a890c9a5bc792da70f0b50296777404"}
Jan 09 00:53:28 crc kubenswrapper[4945]: I0109 00:53:28.164854 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-4jxxq" podStartSLOduration=3.164830151 podStartE2EDuration="3.164830151s" podCreationTimestamp="2026-01-09 00:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:53:28.156706071 +0000 UTC m=+5878.467865037" watchObservedRunningTime="2026-01-09 00:53:28.164830151 +0000 UTC m=+5878.475989097"
Jan 09 00:53:29 crc kubenswrapper[4945]: I0109 00:53:29.142526 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-khg7v" podUID="ea7745e2-70a1-493d-88ba-62e84d2c5675" containerName="registry-server" containerID="cri-o://cafb20538cddae04d90a76d3df111677b68b93a947de5b5c5591056e5bcaeb37" gracePeriod=2
Jan 09 00:53:30 crc kubenswrapper[4945]: I0109 00:53:30.151735 4945 generic.go:334] "Generic (PLEG): container finished" podID="ea7745e2-70a1-493d-88ba-62e84d2c5675" containerID="cafb20538cddae04d90a76d3df111677b68b93a947de5b5c5591056e5bcaeb37" exitCode=0
Jan 09 00:53:30 crc kubenswrapper[4945]: I0109 00:53:30.151802 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khg7v" event={"ID":"ea7745e2-70a1-493d-88ba-62e84d2c5675","Type":"ContainerDied","Data":"cafb20538cddae04d90a76d3df111677b68b93a947de5b5c5591056e5bcaeb37"}
Jan 09 00:53:30 crc kubenswrapper[4945]: I0109 00:53:30.152140 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khg7v" event={"ID":"ea7745e2-70a1-493d-88ba-62e84d2c5675","Type":"ContainerDied","Data":"d514ebcb6114bbb5b601b13c082f58f0514d4c8ab208eae2477749bb66045612"}
Jan 09 00:53:30 crc kubenswrapper[4945]: I0109 00:53:30.152161 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d514ebcb6114bbb5b601b13c082f58f0514d4c8ab208eae2477749bb66045612"
Jan 09 00:53:30 crc kubenswrapper[4945]: I0109 00:53:30.178237 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-khg7v" Jan 09 00:53:30 crc kubenswrapper[4945]: I0109 00:53:30.219252 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8szrh\" (UniqueName: \"kubernetes.io/projected/ea7745e2-70a1-493d-88ba-62e84d2c5675-kube-api-access-8szrh\") pod \"ea7745e2-70a1-493d-88ba-62e84d2c5675\" (UID: \"ea7745e2-70a1-493d-88ba-62e84d2c5675\") " Jan 09 00:53:30 crc kubenswrapper[4945]: I0109 00:53:30.219347 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7745e2-70a1-493d-88ba-62e84d2c5675-catalog-content\") pod \"ea7745e2-70a1-493d-88ba-62e84d2c5675\" (UID: \"ea7745e2-70a1-493d-88ba-62e84d2c5675\") " Jan 09 00:53:30 crc kubenswrapper[4945]: I0109 00:53:30.219547 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7745e2-70a1-493d-88ba-62e84d2c5675-utilities\") pod \"ea7745e2-70a1-493d-88ba-62e84d2c5675\" (UID: \"ea7745e2-70a1-493d-88ba-62e84d2c5675\") " Jan 09 00:53:30 crc kubenswrapper[4945]: I0109 00:53:30.220521 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea7745e2-70a1-493d-88ba-62e84d2c5675-utilities" (OuterVolumeSpecName: "utilities") pod "ea7745e2-70a1-493d-88ba-62e84d2c5675" (UID: "ea7745e2-70a1-493d-88ba-62e84d2c5675"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:53:30 crc kubenswrapper[4945]: I0109 00:53:30.228871 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea7745e2-70a1-493d-88ba-62e84d2c5675-kube-api-access-8szrh" (OuterVolumeSpecName: "kube-api-access-8szrh") pod "ea7745e2-70a1-493d-88ba-62e84d2c5675" (UID: "ea7745e2-70a1-493d-88ba-62e84d2c5675"). InnerVolumeSpecName "kube-api-access-8szrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:53:30 crc kubenswrapper[4945]: I0109 00:53:30.270672 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea7745e2-70a1-493d-88ba-62e84d2c5675-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea7745e2-70a1-493d-88ba-62e84d2c5675" (UID: "ea7745e2-70a1-493d-88ba-62e84d2c5675"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:53:30 crc kubenswrapper[4945]: I0109 00:53:30.321122 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7745e2-70a1-493d-88ba-62e84d2c5675-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:30 crc kubenswrapper[4945]: I0109 00:53:30.321163 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8szrh\" (UniqueName: \"kubernetes.io/projected/ea7745e2-70a1-493d-88ba-62e84d2c5675-kube-api-access-8szrh\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:30 crc kubenswrapper[4945]: I0109 00:53:30.321197 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7745e2-70a1-493d-88ba-62e84d2c5675-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:31 crc kubenswrapper[4945]: I0109 00:53:31.169659 4945 generic.go:334] "Generic (PLEG): container finished" podID="a6d8258a-52f0-455d-bdf8-2c039786ca6d" containerID="061f8769d87c721138e1e67fde4108c40a890c9a5bc792da70f0b50296777404" exitCode=0 Jan 09 00:53:31 crc kubenswrapper[4945]: I0109 00:53:31.169772 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4jxxq" event={"ID":"a6d8258a-52f0-455d-bdf8-2c039786ca6d","Type":"ContainerDied","Data":"061f8769d87c721138e1e67fde4108c40a890c9a5bc792da70f0b50296777404"} Jan 09 00:53:31 crc kubenswrapper[4945]: I0109 00:53:31.170224 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khg7v" Jan 09 00:53:31 crc kubenswrapper[4945]: I0109 00:53:31.228090 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khg7v"] Jan 09 00:53:31 crc kubenswrapper[4945]: I0109 00:53:31.240693 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-khg7v"] Jan 09 00:53:32 crc kubenswrapper[4945]: I0109 00:53:32.009175 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea7745e2-70a1-493d-88ba-62e84d2c5675" path="/var/lib/kubelet/pods/ea7745e2-70a1-493d-88ba-62e84d2c5675/volumes" Jan 09 00:53:32 crc kubenswrapper[4945]: I0109 00:53:32.569965 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-4jxxq" Jan 09 00:53:32 crc kubenswrapper[4945]: I0109 00:53:32.667021 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-config-data\") pod \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\" (UID: \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\") " Jan 09 00:53:32 crc kubenswrapper[4945]: I0109 00:53:32.667196 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-db-sync-config-data\") pod \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\" (UID: \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\") " Jan 09 00:53:32 crc kubenswrapper[4945]: I0109 00:53:32.667539 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgtxd\" (UniqueName: \"kubernetes.io/projected/a6d8258a-52f0-455d-bdf8-2c039786ca6d-kube-api-access-vgtxd\") pod \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\" (UID: \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\") " Jan 09 00:53:32 crc kubenswrapper[4945]: I0109 00:53:32.667584 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-combined-ca-bundle\") pod \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\" (UID: \"a6d8258a-52f0-455d-bdf8-2c039786ca6d\") " Jan 09 00:53:32 crc kubenswrapper[4945]: I0109 00:53:32.674180 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a6d8258a-52f0-455d-bdf8-2c039786ca6d" (UID: "a6d8258a-52f0-455d-bdf8-2c039786ca6d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:53:32 crc kubenswrapper[4945]: I0109 00:53:32.674423 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d8258a-52f0-455d-bdf8-2c039786ca6d-kube-api-access-vgtxd" (OuterVolumeSpecName: "kube-api-access-vgtxd") pod "a6d8258a-52f0-455d-bdf8-2c039786ca6d" (UID: "a6d8258a-52f0-455d-bdf8-2c039786ca6d"). InnerVolumeSpecName "kube-api-access-vgtxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:53:32 crc kubenswrapper[4945]: I0109 00:53:32.698339 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6d8258a-52f0-455d-bdf8-2c039786ca6d" (UID: "a6d8258a-52f0-455d-bdf8-2c039786ca6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:53:32 crc kubenswrapper[4945]: I0109 00:53:32.707423 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-config-data" (OuterVolumeSpecName: "config-data") pod "a6d8258a-52f0-455d-bdf8-2c039786ca6d" (UID: "a6d8258a-52f0-455d-bdf8-2c039786ca6d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:53:32 crc kubenswrapper[4945]: I0109 00:53:32.769052 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:32 crc kubenswrapper[4945]: I0109 00:53:32.769082 4945 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:32 crc kubenswrapper[4945]: I0109 00:53:32.769095 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgtxd\" (UniqueName: \"kubernetes.io/projected/a6d8258a-52f0-455d-bdf8-2c039786ca6d-kube-api-access-vgtxd\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:32 crc kubenswrapper[4945]: I0109 00:53:32.769104 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d8258a-52f0-455d-bdf8-2c039786ca6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.189828 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4jxxq" event={"ID":"a6d8258a-52f0-455d-bdf8-2c039786ca6d","Type":"ContainerDied","Data":"f87a0da2d52e90cb30bc2da1b47d637b36b238a69a87b10238632d5347b662ff"} Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.189886 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f87a0da2d52e90cb30bc2da1b47d637b36b238a69a87b10238632d5347b662ff" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.189859 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-4jxxq" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.469896 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 00:53:33 crc kubenswrapper[4945]: E0109 00:53:33.470667 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7745e2-70a1-493d-88ba-62e84d2c5675" containerName="extract-utilities" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.470693 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7745e2-70a1-493d-88ba-62e84d2c5675" containerName="extract-utilities" Jan 09 00:53:33 crc kubenswrapper[4945]: E0109 00:53:33.470723 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d8258a-52f0-455d-bdf8-2c039786ca6d" containerName="glance-db-sync" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.470732 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d8258a-52f0-455d-bdf8-2c039786ca6d" containerName="glance-db-sync" Jan 09 00:53:33 crc kubenswrapper[4945]: E0109 00:53:33.470760 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7745e2-70a1-493d-88ba-62e84d2c5675" containerName="extract-content" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.470769 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7745e2-70a1-493d-88ba-62e84d2c5675" containerName="extract-content" Jan 09 00:53:33 crc kubenswrapper[4945]: E0109 00:53:33.470787 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7745e2-70a1-493d-88ba-62e84d2c5675" containerName="registry-server" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.470795 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7745e2-70a1-493d-88ba-62e84d2c5675" containerName="registry-server" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.471036 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7745e2-70a1-493d-88ba-62e84d2c5675" containerName="registry-server" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.471061 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d8258a-52f0-455d-bdf8-2c039786ca6d" containerName="glance-db-sync" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.473005 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.476153 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.476153 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.476268 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vpr6w" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.476532 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.487204 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.583120 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-scripts\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.583164 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpw6g\" (UniqueName: \"kubernetes.io/projected/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-kube-api-access-xpw6g\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.583221 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-config-data\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.583267 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.583650 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-logs\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.583714 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-ceph\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.583848 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.595722 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-545d85cf7c-44cms"] Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.597766 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.621489 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545d85cf7c-44cms"] Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.689324 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-dns-svc\") pod \"dnsmasq-dns-545d85cf7c-44cms\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.689419 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-ovsdbserver-nb\") pod \"dnsmasq-dns-545d85cf7c-44cms\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.689522 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-logs\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.689563 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-ceph\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.689744 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.689817 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trj7r\" (UniqueName: \"kubernetes.io/projected/a38369be-67be-4a77-89bb-23ea75f4fe4d-kube-api-access-trj7r\") pod \"dnsmasq-dns-545d85cf7c-44cms\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.689877 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-scripts\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.689912 4945 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xpw6g\" (UniqueName: \"kubernetes.io/projected/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-kube-api-access-xpw6g\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.690043 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-config-data\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.690116 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-ovsdbserver-sb\") pod \"dnsmasq-dns-545d85cf7c-44cms\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.690153 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.690185 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-config\") pod \"dnsmasq-dns-545d85cf7c-44cms\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.691207 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-logs\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.691763 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.697416 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.702357 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-ceph\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.702409 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 00:53:33 crc 
kubenswrapper[4945]: I0109 00:53:33.702720 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-scripts\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.708575 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-config-data\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.710804 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.715114 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.721973 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpw6g\" (UniqueName: \"kubernetes.io/projected/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-kube-api-access-xpw6g\") pod \"glance-default-external-api-0\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.725560 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.792850 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.792928 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.792965 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afe3ed0e-8021-465d-9602-06880843a00b-logs\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.793177 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trj7r\" (UniqueName: \"kubernetes.io/projected/a38369be-67be-4a77-89bb-23ea75f4fe4d-kube-api-access-trj7r\") pod \"dnsmasq-dns-545d85cf7c-44cms\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.793280 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xszrv\" (UniqueName: \"kubernetes.io/projected/afe3ed0e-8021-465d-9602-06880843a00b-kube-api-access-xszrv\") pod 
\"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.793316 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.793359 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-ovsdbserver-sb\") pod \"dnsmasq-dns-545d85cf7c-44cms\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.793392 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-config\") pod \"dnsmasq-dns-545d85cf7c-44cms\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.793424 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afe3ed0e-8021-465d-9602-06880843a00b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.793454 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/afe3ed0e-8021-465d-9602-06880843a00b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.793493 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-dns-svc\") pod \"dnsmasq-dns-545d85cf7c-44cms\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.793524 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-ovsdbserver-nb\") pod \"dnsmasq-dns-545d85cf7c-44cms\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.794584 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-ovsdbserver-nb\") pod \"dnsmasq-dns-545d85cf7c-44cms\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.795327 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-config\") pod \"dnsmasq-dns-545d85cf7c-44cms\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " 
pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.795452 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-ovsdbserver-sb\") pod \"dnsmasq-dns-545d85cf7c-44cms\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.795859 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-dns-svc\") pod \"dnsmasq-dns-545d85cf7c-44cms\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.796798 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.825232 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trj7r\" (UniqueName: \"kubernetes.io/projected/a38369be-67be-4a77-89bb-23ea75f4fe4d-kube-api-access-trj7r\") pod \"dnsmasq-dns-545d85cf7c-44cms\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.895067 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.895360 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afe3ed0e-8021-465d-9602-06880843a00b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.895381 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/afe3ed0e-8021-465d-9602-06880843a00b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.895436 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.895464 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.895506 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afe3ed0e-8021-465d-9602-06880843a00b-logs\") pod \"glance-default-internal-api-0\" 
(UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.895563 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xszrv\" (UniqueName: \"kubernetes.io/projected/afe3ed0e-8021-465d-9602-06880843a00b-kube-api-access-xszrv\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.897707 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afe3ed0e-8021-465d-9602-06880843a00b-logs\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.898309 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afe3ed0e-8021-465d-9602-06880843a00b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.909855 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.910261 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/afe3ed0e-8021-465d-9602-06880843a00b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.910365 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.913962 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xszrv\" (UniqueName: \"kubernetes.io/projected/afe3ed0e-8021-465d-9602-06880843a00b-kube-api-access-xszrv\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.918205 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.922744 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:33 crc kubenswrapper[4945]: I0109 00:53:33.945929 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 00:53:34 crc kubenswrapper[4945]: I0109 00:53:34.275428 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545d85cf7c-44cms"] Jan 09 00:53:34 crc kubenswrapper[4945]: W0109 00:53:34.291592 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda38369be_67be_4a77_89bb_23ea75f4fe4d.slice/crio-48b7f80469a179235a4f4a852dc9d040dc07937a82f6965278b5001b1191414a WatchSource:0}: Error finding container 48b7f80469a179235a4f4a852dc9d040dc07937a82f6965278b5001b1191414a: Status 404 returned error can't find the container with id 48b7f80469a179235a4f4a852dc9d040dc07937a82f6965278b5001b1191414a Jan 09 00:53:34 crc kubenswrapper[4945]: I0109 00:53:34.370359 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 00:53:34 crc kubenswrapper[4945]: I0109 00:53:34.761869 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 00:53:34 crc kubenswrapper[4945]: I0109 00:53:34.875714 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 00:53:35 crc kubenswrapper[4945]: I0109 00:53:35.222331 4945 generic.go:334] "Generic (PLEG): container finished" podID="a38369be-67be-4a77-89bb-23ea75f4fe4d" containerID="e7934518dcf46feac258c7f0cbe65df7d376c0ba881b867df331b4ac73a56dc9" exitCode=0 Jan 09 00:53:35 crc kubenswrapper[4945]: I0109 00:53:35.222455 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545d85cf7c-44cms" event={"ID":"a38369be-67be-4a77-89bb-23ea75f4fe4d","Type":"ContainerDied","Data":"e7934518dcf46feac258c7f0cbe65df7d376c0ba881b867df331b4ac73a56dc9"} Jan 09 00:53:35 crc kubenswrapper[4945]: I0109 00:53:35.223129 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545d85cf7c-44cms" event={"ID":"a38369be-67be-4a77-89bb-23ea75f4fe4d","Type":"ContainerStarted","Data":"48b7f80469a179235a4f4a852dc9d040dc07937a82f6965278b5001b1191414a"} Jan 09 00:53:35 crc kubenswrapper[4945]: I0109 00:53:35.227441 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"afe3ed0e-8021-465d-9602-06880843a00b","Type":"ContainerStarted","Data":"f3fb520490dca975a6799fff54682fff299b72d15c22dc002f9fc9e142fb02cb"} Jan 09 00:53:35 crc kubenswrapper[4945]: I0109 00:53:35.232710 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"42ddc5bb-f69a-4669-b3e6-98686ddaad9c","Type":"ContainerStarted","Data":"2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d"} Jan 09 00:53:35 crc kubenswrapper[4945]: I0109 00:53:35.232761 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"42ddc5bb-f69a-4669-b3e6-98686ddaad9c","Type":"ContainerStarted","Data":"57b10312b1c7d02c3ff5a5c205d2c5ab7726fdc171e9857811bb486744bea26a"} Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.244321 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"afe3ed0e-8021-465d-9602-06880843a00b","Type":"ContainerStarted","Data":"94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146"} Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.244868 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"afe3ed0e-8021-465d-9602-06880843a00b","Type":"ContainerStarted","Data":"345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619"} Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.248969 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"42ddc5bb-f69a-4669-b3e6-98686ddaad9c","Type":"ContainerStarted","Data":"888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f"} Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.249129 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="42ddc5bb-f69a-4669-b3e6-98686ddaad9c" containerName="glance-log" containerID="cri-o://2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d" gracePeriod=30 Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.249388 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="42ddc5bb-f69a-4669-b3e6-98686ddaad9c" containerName="glance-httpd" containerID="cri-o://888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f" gracePeriod=30 Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.253443 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545d85cf7c-44cms" event={"ID":"a38369be-67be-4a77-89bb-23ea75f4fe4d","Type":"ContainerStarted","Data":"68179dfb1c4038c45bf2d23f2c42343533f2da89d4d5b6b2a21a33dec2e54dea"} Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.253699 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.296360 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-545d85cf7c-44cms" podStartSLOduration=3.296337271 podStartE2EDuration="3.296337271s" podCreationTimestamp="2026-01-09 00:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:53:36.29547814 +0000 UTC m=+5886.606637096" watchObservedRunningTime="2026-01-09 00:53:36.296337271 +0000 UTC m=+5886.607496227" Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.298548 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.298535475 podStartE2EDuration="3.298535475s" podCreationTimestamp="2026-01-09 00:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:53:36.267137044 +0000 UTC m=+5886.578296000" watchObservedRunningTime="2026-01-09 00:53:36.298535475 +0000 UTC m=+5886.609694421" Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.320088 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.320059324 podStartE2EDuration="3.320059324s" podCreationTimestamp="2026-01-09 00:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:53:36.317528092 +0000 UTC m=+5886.628687048" watchObservedRunningTime="2026-01-09 00:53:36.320059324 +0000 UTC m=+5886.631218270" Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.746483 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.907412 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.966792 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-logs\") pod \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.966891 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpw6g\" (UniqueName: \"kubernetes.io/projected/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-kube-api-access-xpw6g\") pod \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.966927 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-ceph\") pod \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.966957 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-httpd-run\") pod \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.967026 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-config-data\") pod \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.967057 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-combined-ca-bundle\") pod \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.967099 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-scripts\") pod \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\" (UID: \"42ddc5bb-f69a-4669-b3e6-98686ddaad9c\") " Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.968079 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-logs" (OuterVolumeSpecName: "logs") pod "42ddc5bb-f69a-4669-b3e6-98686ddaad9c" (UID: "42ddc5bb-f69a-4669-b3e6-98686ddaad9c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.968115 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "42ddc5bb-f69a-4669-b3e6-98686ddaad9c" (UID: "42ddc5bb-f69a-4669-b3e6-98686ddaad9c"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.973868 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-ceph" (OuterVolumeSpecName: "ceph") pod "42ddc5bb-f69a-4669-b3e6-98686ddaad9c" (UID: "42ddc5bb-f69a-4669-b3e6-98686ddaad9c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.975145 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-scripts" (OuterVolumeSpecName: "scripts") pod "42ddc5bb-f69a-4669-b3e6-98686ddaad9c" (UID: "42ddc5bb-f69a-4669-b3e6-98686ddaad9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.983280 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-kube-api-access-xpw6g" (OuterVolumeSpecName: "kube-api-access-xpw6g") pod "42ddc5bb-f69a-4669-b3e6-98686ddaad9c" (UID: "42ddc5bb-f69a-4669-b3e6-98686ddaad9c"). InnerVolumeSpecName "kube-api-access-xpw6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:53:36 crc kubenswrapper[4945]: I0109 00:53:36.996808 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42ddc5bb-f69a-4669-b3e6-98686ddaad9c" (UID: "42ddc5bb-f69a-4669-b3e6-98686ddaad9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.000705 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb" Jan 09 00:53:37 crc kubenswrapper[4945]: E0109 00:53:37.001051 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.011748 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-config-data" (OuterVolumeSpecName: "config-data") pod "42ddc5bb-f69a-4669-b3e6-98686ddaad9c" (UID: "42ddc5bb-f69a-4669-b3e6-98686ddaad9c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.069278 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpw6g\" (UniqueName: \"kubernetes.io/projected/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-kube-api-access-xpw6g\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.069710 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.069846 4945 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.069941 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.070088 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.070175 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.070265 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42ddc5bb-f69a-4669-b3e6-98686ddaad9c-logs\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.265308 4945 generic.go:334] "Generic (PLEG): container finished" podID="42ddc5bb-f69a-4669-b3e6-98686ddaad9c" containerID="888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f" exitCode=0 Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.265348 4945 generic.go:334] "Generic (PLEG): container finished" podID="42ddc5bb-f69a-4669-b3e6-98686ddaad9c" containerID="2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d" exitCode=143 Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.265379 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"42ddc5bb-f69a-4669-b3e6-98686ddaad9c","Type":"ContainerDied","Data":"888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f"} Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.265463 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"42ddc5bb-f69a-4669-b3e6-98686ddaad9c","Type":"ContainerDied","Data":"2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d"} Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.265404 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.265488 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"42ddc5bb-f69a-4669-b3e6-98686ddaad9c","Type":"ContainerDied","Data":"57b10312b1c7d02c3ff5a5c205d2c5ab7726fdc171e9857811bb486744bea26a"} Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.265507 4945 scope.go:117] "RemoveContainer" containerID="888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.295252 4945 scope.go:117] "RemoveContainer" containerID="2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.308117 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.315258 4945 scope.go:117] "RemoveContainer" containerID="888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f" Jan 09 00:53:37 crc kubenswrapper[4945]: E0109 00:53:37.316164 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f\": container with ID starting with 888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f not found: ID does not exist" containerID="888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.316218 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f"} err="failed to get container status \"888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f\": rpc error: code = NotFound desc = could not find container \"888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f\": container with ID starting with 888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f not found: ID does not exist" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.316249 4945 scope.go:117] "RemoveContainer" containerID="2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d" Jan 09 00:53:37 crc kubenswrapper[4945]: E0109 00:53:37.316687 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d\": container with ID starting with 2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d not found: ID does not exist" containerID="2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.316737 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d"} err="failed to get container status \"2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d\": rpc error: code = NotFound desc = could not find container \"2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d\": container with ID starting with 2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d not found: ID does not exist" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.316768 4945 scope.go:117] "RemoveContainer" 
containerID="888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.317488 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f"} err="failed to get container status \"888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f\": rpc error: code = NotFound desc = could not find container \"888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f\": container with ID starting with 888d8a53b134129ceaac52f43b6e7ead71610a508a5d1603e7ca3f48c189ac0f not found: ID does not exist" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.317540 4945 scope.go:117] "RemoveContainer" containerID="2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.317842 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d"} err="failed to get container status \"2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d\": rpc error: code = NotFound desc = could not find container \"2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d\": container with ID starting with 2ce25ceef9c9fad069d1a33b15df7d73a6f2aed4543536c4c72572cb80abba7d not found: ID does not exist" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.336189 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.343585 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 00:53:37 crc kubenswrapper[4945]: E0109 00:53:37.344022 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ddc5bb-f69a-4669-b3e6-98686ddaad9c" containerName="glance-httpd" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.344036 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ddc5bb-f69a-4669-b3e6-98686ddaad9c" containerName="glance-httpd" Jan 09 00:53:37 crc kubenswrapper[4945]: E0109 00:53:37.344051 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ddc5bb-f69a-4669-b3e6-98686ddaad9c" containerName="glance-log" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.344056 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ddc5bb-f69a-4669-b3e6-98686ddaad9c" containerName="glance-log" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.344228 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ddc5bb-f69a-4669-b3e6-98686ddaad9c" containerName="glance-httpd" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.344242 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ddc5bb-f69a-4669-b3e6-98686ddaad9c" containerName="glance-log" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.345290 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.348112 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.352953 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.375279 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fef5d863-fecd-47da-9486-0329c4a00c31-ceph\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.375347 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.375375 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fef5d863-fecd-47da-9486-0329c4a00c31-logs\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.375402 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjs79\" (UniqueName: \"kubernetes.io/projected/fef5d863-fecd-47da-9486-0329c4a00c31-kube-api-access-wjs79\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.375453 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-scripts\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.375559 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-config-data\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.375591 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fef5d863-fecd-47da-9486-0329c4a00c31-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.477124 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fef5d863-fecd-47da-9486-0329c4a00c31-ceph\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " 
pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.477175 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fef5d863-fecd-47da-9486-0329c4a00c31-logs\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.477192 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.477214 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjs79\" (UniqueName: \"kubernetes.io/projected/fef5d863-fecd-47da-9486-0329c4a00c31-kube-api-access-wjs79\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.477244 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-scripts\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.477272 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-config-data\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.477290 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fef5d863-fecd-47da-9486-0329c4a00c31-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.477704 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fef5d863-fecd-47da-9486-0329c4a00c31-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.478592 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fef5d863-fecd-47da-9486-0329c4a00c31-logs\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.482666 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0" Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.483564 4945 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fef5d863-fecd-47da-9486-0329c4a00c31-ceph\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0"
Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.483636 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-config-data\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0"
Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.484778 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-scripts\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0"
Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.495862 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjs79\" (UniqueName: \"kubernetes.io/projected/fef5d863-fecd-47da-9486-0329c4a00c31-kube-api-access-wjs79\") pod \"glance-default-external-api-0\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " pod="openstack/glance-default-external-api-0"
Jan 09 00:53:37 crc kubenswrapper[4945]: I0109 00:53:37.674105 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 09 00:53:38 crc kubenswrapper[4945]: I0109 00:53:38.015135 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ddc5bb-f69a-4669-b3e6-98686ddaad9c" path="/var/lib/kubelet/pods/42ddc5bb-f69a-4669-b3e6-98686ddaad9c/volumes"
Jan 09 00:53:38 crc kubenswrapper[4945]: I0109 00:53:38.284386 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="afe3ed0e-8021-465d-9602-06880843a00b" containerName="glance-log" containerID="cri-o://345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619" gracePeriod=30
Jan 09 00:53:38 crc kubenswrapper[4945]: I0109 00:53:38.284442 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="afe3ed0e-8021-465d-9602-06880843a00b" containerName="glance-httpd" containerID="cri-o://94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146" gracePeriod=30
Jan 09 00:53:38 crc kubenswrapper[4945]: I0109 00:53:38.337569 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
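The reconciler_common lines around this point all follow one pattern: the kubelet's volume manager keeps a desired world (volumes the replacement glance pod needs, under its new pod UID fef5d863-...) and an actual world (volumes still mounted for the old UID 42ddc5bb-...), and repeatedly diffs the two, unmounting what is no longer desired and mounting what is missing. A toy version of that diff is sketched below; the map keys are invented, abbreviated volume identifiers, and the function is a model of the idea, not kubelet code.

package main

import (
	"fmt"
	"sort"
)

// reconcile diffs a desired set of volumes against the actual set and
// returns the operations to run, mirroring the UnmountVolume /
// MountVolume pairs in the log above.
func reconcile(desired, actual map[string]bool) (mounts, unmounts []string) {
	for v := range desired {
		if !actual[v] {
			mounts = append(mounts, v)
		}
	}
	for v := range actual {
		if !desired[v] {
			unmounts = append(unmounts, v)
		}
	}
	return
}

func main() {
	// Old pod's volumes are still mounted; the replacement pod wants the
	// same named volumes but keyed by a new pod UID.
	actual := map[string]bool{"42ddc5bb/config-data": true, "42ddc5bb/scripts": true}
	desired := map[string]bool{"fef5d863/config-data": true, "fef5d863/scripts": true}
	m, u := reconcile(desired, actual)
	sort.Strings(m)
	sort.Strings(u)
	fmt.Println("mount:", m)
	fmt.Println("unmount:", u)
}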
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.015631 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-combined-ca-bundle\") pod \"afe3ed0e-8021-465d-9602-06880843a00b\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.015728 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-scripts\") pod \"afe3ed0e-8021-465d-9602-06880843a00b\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.016122 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/afe3ed0e-8021-465d-9602-06880843a00b-ceph\") pod \"afe3ed0e-8021-465d-9602-06880843a00b\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.016248 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-config-data\") pod \"afe3ed0e-8021-465d-9602-06880843a00b\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.016290 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afe3ed0e-8021-465d-9602-06880843a00b-logs\") pod \"afe3ed0e-8021-465d-9602-06880843a00b\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.016323 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afe3ed0e-8021-465d-9602-06880843a00b-httpd-run\") pod \"afe3ed0e-8021-465d-9602-06880843a00b\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.016448 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xszrv\" (UniqueName: \"kubernetes.io/projected/afe3ed0e-8021-465d-9602-06880843a00b-kube-api-access-xszrv\") pod \"afe3ed0e-8021-465d-9602-06880843a00b\" (UID: \"afe3ed0e-8021-465d-9602-06880843a00b\") " Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.016871 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afe3ed0e-8021-465d-9602-06880843a00b-logs" (OuterVolumeSpecName: "logs") pod "afe3ed0e-8021-465d-9602-06880843a00b" (UID: "afe3ed0e-8021-465d-9602-06880843a00b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.017232 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afe3ed0e-8021-465d-9602-06880843a00b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "afe3ed0e-8021-465d-9602-06880843a00b" (UID: "afe3ed0e-8021-465d-9602-06880843a00b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.017264 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/afe3ed0e-8021-465d-9602-06880843a00b-logs\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.022890 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-scripts" (OuterVolumeSpecName: "scripts") pod "afe3ed0e-8021-465d-9602-06880843a00b" (UID: "afe3ed0e-8021-465d-9602-06880843a00b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.023390 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe3ed0e-8021-465d-9602-06880843a00b-ceph" (OuterVolumeSpecName: "ceph") pod "afe3ed0e-8021-465d-9602-06880843a00b" (UID: "afe3ed0e-8021-465d-9602-06880843a00b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.029439 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe3ed0e-8021-465d-9602-06880843a00b-kube-api-access-xszrv" (OuterVolumeSpecName: "kube-api-access-xszrv") pod "afe3ed0e-8021-465d-9602-06880843a00b" (UID: "afe3ed0e-8021-465d-9602-06880843a00b"). InnerVolumeSpecName "kube-api-access-xszrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.057247 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afe3ed0e-8021-465d-9602-06880843a00b" (UID: "afe3ed0e-8021-465d-9602-06880843a00b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.080092 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-config-data" (OuterVolumeSpecName: "config-data") pod "afe3ed0e-8021-465d-9602-06880843a00b" (UID: "afe3ed0e-8021-465d-9602-06880843a00b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.119800 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.119833 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/afe3ed0e-8021-465d-9602-06880843a00b-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.119849 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.119861 4945 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/afe3ed0e-8021-465d-9602-06880843a00b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.119875 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xszrv\" (UniqueName: \"kubernetes.io/projected/afe3ed0e-8021-465d-9602-06880843a00b-kube-api-access-xszrv\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.119889 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe3ed0e-8021-465d-9602-06880843a00b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.296563 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fef5d863-fecd-47da-9486-0329c4a00c31","Type":"ContainerStarted","Data":"6dbaf0cb334f1566b9c1d4b0bf61478f38132811abe5f2f0caa0009026d298a6"} Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.296610 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fef5d863-fecd-47da-9486-0329c4a00c31","Type":"ContainerStarted","Data":"da7776b31eca6923148c457dea7a34f39182c7f0ea76e2938e6b7eebdb1c99da"} Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.298966 4945 generic.go:334] "Generic (PLEG): container finished" podID="afe3ed0e-8021-465d-9602-06880843a00b" containerID="94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146" exitCode=0 Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.298986 4945 generic.go:334] "Generic (PLEG): container finished" podID="afe3ed0e-8021-465d-9602-06880843a00b" containerID="345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619" exitCode=143 Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.299018 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"afe3ed0e-8021-465d-9602-06880843a00b","Type":"ContainerDied","Data":"94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146"} Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.299038 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"afe3ed0e-8021-465d-9602-06880843a00b","Type":"ContainerDied","Data":"345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619"} Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.299046 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"afe3ed0e-8021-465d-9602-06880843a00b","Type":"ContainerDied","Data":"f3fb520490dca975a6799fff54682fff299b72d15c22dc002f9fc9e142fb02cb"} Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.299062 4945 scope.go:117] "RemoveContainer" containerID="94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.299161 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.334847 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.336268 4945 scope.go:117] "RemoveContainer" containerID="345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.342433 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.361678 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 00:53:39 crc kubenswrapper[4945]: E0109 00:53:39.362105 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe3ed0e-8021-465d-9602-06880843a00b" containerName="glance-httpd" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.362124 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe3ed0e-8021-465d-9602-06880843a00b" containerName="glance-httpd" Jan 09 00:53:39 crc kubenswrapper[4945]: E0109 00:53:39.362137 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe3ed0e-8021-465d-9602-06880843a00b" containerName="glance-log" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.362145 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe3ed0e-8021-465d-9602-06880843a00b" containerName="glance-log" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.362320 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe3ed0e-8021-465d-9602-06880843a00b" containerName="glance-log" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.362332 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe3ed0e-8021-465d-9602-06880843a00b" containerName="glance-httpd" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.363268 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.367109 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.379401 4945 scope.go:117] "RemoveContainer" containerID="94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146" Jan 09 00:53:39 crc kubenswrapper[4945]: E0109 00:53:39.395181 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146\": container with ID starting with 94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146 not found: ID does not exist" containerID="94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.395239 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146"} err="failed to get container status \"94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146\": rpc error: code = NotFound desc = could not find container \"94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146\": container with ID starting with 94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146 not found: ID does not exist" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.395270 4945 scope.go:117] "RemoveContainer" containerID="345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619" Jan 09 00:53:39 crc kubenswrapper[4945]: E0109 00:53:39.395937 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619\": container with ID starting with 345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619 not found: ID does not exist" containerID="345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.395976 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619"} err="failed to get container status \"345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619\": rpc error: code = NotFound desc = could not find container \"345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619\": container with ID starting with 345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619 not found: ID does not exist" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.395995 4945 scope.go:117] "RemoveContainer" containerID="94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.396202 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.396278 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146"} err="failed to get container status \"94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146\": rpc error: code = NotFound desc = could not find container \"94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146\": container with ID 
starting with 94501229c5ecb2adba49ff8072c729b23f1692aee107d93e45776181e7b30146 not found: ID does not exist" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.396301 4945 scope.go:117] "RemoveContainer" containerID="345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.399191 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619"} err="failed to get container status \"345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619\": rpc error: code = NotFound desc = could not find container \"345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619\": container with ID starting with 345c00165035890f6f14becbdb61ca70aa23e11cb3dbbc5e52e66e0f01620619 not found: ID does not exist" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.529385 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.529720 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djknf\" (UniqueName: \"kubernetes.io/projected/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-kube-api-access-djknf\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.529751 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.529783 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.529815 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.529864 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.529883 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-logs\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.631456 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.631509 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-logs\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.631578 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.631602 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djknf\" (UniqueName: \"kubernetes.io/projected/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-kube-api-access-djknf\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.631627 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.631649 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.631677 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.632076 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.632689 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.635719 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.636060 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.636322 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-ceph\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.636403 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.651441 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djknf\" (UniqueName: \"kubernetes.io/projected/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-kube-api-access-djknf\") pod \"glance-default-internal-api-0\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " pod="openstack/glance-default-internal-api-0" Jan 09 00:53:39 crc kubenswrapper[4945]: I0109 00:53:39.684085 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 00:53:40 crc kubenswrapper[4945]: I0109 00:53:40.016823 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe3ed0e-8021-465d-9602-06880843a00b" path="/var/lib/kubelet/pods/afe3ed0e-8021-465d-9602-06880843a00b/volumes" Jan 09 00:53:40 crc kubenswrapper[4945]: I0109 00:53:40.219275 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 00:53:40 crc kubenswrapper[4945]: I0109 00:53:40.310573 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2","Type":"ContainerStarted","Data":"9f972b77efb19d5d779f2fa963870b3078c163eb2bd0d711700fb1d71518ae0f"} Jan 09 00:53:40 crc kubenswrapper[4945]: I0109 00:53:40.318552 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fef5d863-fecd-47da-9486-0329c4a00c31","Type":"ContainerStarted","Data":"6572178f0dbd6a0e003afeaaf61cbf996f5991c8c8a96ca87e81dfb3367f9bb2"} Jan 09 00:53:40 crc kubenswrapper[4945]: I0109 00:53:40.340409 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.340384783 podStartE2EDuration="3.340384783s" podCreationTimestamp="2026-01-09 00:53:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:53:40.338066816 +0000 UTC m=+5890.649225762" watchObservedRunningTime="2026-01-09 00:53:40.340384783 +0000 UTC m=+5890.651543729" Jan 09 00:53:41 crc kubenswrapper[4945]: I0109 00:53:41.335296 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2","Type":"ContainerStarted","Data":"f252d062fb044fd7fd3e8fa5d3469a53eb5f7a03242007b3f49f4933733053e5"} Jan 09 00:53:41 crc kubenswrapper[4945]: I0109 00:53:41.335649 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2","Type":"ContainerStarted","Data":"b2d16e33ca8e8efaf612f28f00a905906de2cafffe582802fa261eaad17bec12"} Jan 09 00:53:41 crc kubenswrapper[4945]: I0109 00:53:41.361308 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.361288696 podStartE2EDuration="2.361288696s" podCreationTimestamp="2026-01-09 00:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:53:41.354595362 +0000 UTC m=+5891.665754298" watchObservedRunningTime="2026-01-09 00:53:41.361288696 +0000 UTC m=+5891.672447632" Jan 09 00:53:43 crc kubenswrapper[4945]: I0109 00:53:43.924156 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.034973 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb884bf57-m68xt"] Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.049835 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb884bf57-m68xt" podUID="8f32c1a7-b9bf-44ad-8a4a-a882f105b638" containerName="dnsmasq-dns" 
containerID="cri-o://f6122dc6d483ef326d4eda4b50dcff4e35b61065e44057638acce42df270bd3c" gracePeriod=10 Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.476572 4945 generic.go:334] "Generic (PLEG): container finished" podID="8f32c1a7-b9bf-44ad-8a4a-a882f105b638" containerID="f6122dc6d483ef326d4eda4b50dcff4e35b61065e44057638acce42df270bd3c" exitCode=0 Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.476629 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb884bf57-m68xt" event={"ID":"8f32c1a7-b9bf-44ad-8a4a-a882f105b638","Type":"ContainerDied","Data":"f6122dc6d483ef326d4eda4b50dcff4e35b61065e44057638acce42df270bd3c"} Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.476662 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb884bf57-m68xt" event={"ID":"8f32c1a7-b9bf-44ad-8a4a-a882f105b638","Type":"ContainerDied","Data":"c82d227f33e03ffd02f6b8a85c3859ac0b72894660cc0b06dbabd84c73bf63e4"} Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.476676 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c82d227f33e03ffd02f6b8a85c3859ac0b72894660cc0b06dbabd84c73bf63e4" Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.525485 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb884bf57-m68xt" Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.658165 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-dns-svc\") pod \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.658210 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-ovsdbserver-nb\") pod \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.658303 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-ovsdbserver-sb\") pod \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.658441 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-config\") pod \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.658487 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hddk8\" (UniqueName: \"kubernetes.io/projected/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-kube-api-access-hddk8\") pod \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\" (UID: \"8f32c1a7-b9bf-44ad-8a4a-a882f105b638\") " Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.663841 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-kube-api-access-hddk8" (OuterVolumeSpecName: "kube-api-access-hddk8") pod "8f32c1a7-b9bf-44ad-8a4a-a882f105b638" (UID: "8f32c1a7-b9bf-44ad-8a4a-a882f105b638"). InnerVolumeSpecName "kube-api-access-hddk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.697697 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8f32c1a7-b9bf-44ad-8a4a-a882f105b638" (UID: "8f32c1a7-b9bf-44ad-8a4a-a882f105b638"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.697710 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8f32c1a7-b9bf-44ad-8a4a-a882f105b638" (UID: "8f32c1a7-b9bf-44ad-8a4a-a882f105b638"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.700487 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8f32c1a7-b9bf-44ad-8a4a-a882f105b638" (UID: "8f32c1a7-b9bf-44ad-8a4a-a882f105b638"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.700765 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-config" (OuterVolumeSpecName: "config") pod "8f32c1a7-b9bf-44ad-8a4a-a882f105b638" (UID: "8f32c1a7-b9bf-44ad-8a4a-a882f105b638"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.760424 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.760470 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-config\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.760489 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hddk8\" (UniqueName: \"kubernetes.io/projected/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-kube-api-access-hddk8\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.760502 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:44 crc kubenswrapper[4945]: I0109 00:53:44.760515 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8f32c1a7-b9bf-44ad-8a4a-a882f105b638-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 00:53:45 crc kubenswrapper[4945]: I0109 00:53:45.489599 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb884bf57-m68xt" Jan 09 00:53:45 crc kubenswrapper[4945]: I0109 00:53:45.520745 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb884bf57-m68xt"] Jan 09 00:53:45 crc kubenswrapper[4945]: I0109 00:53:45.528687 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb884bf57-m68xt"] Jan 09 00:53:46 crc kubenswrapper[4945]: I0109 00:53:46.017821 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f32c1a7-b9bf-44ad-8a4a-a882f105b638" path="/var/lib/kubelet/pods/8f32c1a7-b9bf-44ad-8a4a-a882f105b638/volumes" Jan 09 00:53:47 crc kubenswrapper[4945]: I0109 00:53:47.675190 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 09 00:53:47 crc kubenswrapper[4945]: I0109 00:53:47.675481 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 09 00:53:47 crc kubenswrapper[4945]: I0109 00:53:47.701232 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 09 00:53:47 crc kubenswrapper[4945]: I0109 00:53:47.715856 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 09 00:53:48 crc kubenswrapper[4945]: I0109 00:53:48.513782 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 09 00:53:48 crc kubenswrapper[4945]: I0109 00:53:48.514136 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 09 00:53:49 crc kubenswrapper[4945]: I0109 00:53:49.684766 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 09 00:53:49 crc kubenswrapper[4945]: I0109 00:53:49.684830 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 09 00:53:49 crc kubenswrapper[4945]: I0109 00:53:49.718392 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 09 00:53:49 crc kubenswrapper[4945]: I0109 00:53:49.745177 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 09 00:53:50 crc kubenswrapper[4945]: I0109 00:53:50.527770 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 09 00:53:50 crc kubenswrapper[4945]: I0109 00:53:50.527832 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 09 00:53:51 crc kubenswrapper[4945]: I0109 00:53:51.174404 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 09 00:53:51 crc kubenswrapper[4945]: I0109 00:53:51.174700 4945 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 00:53:51 crc kubenswrapper[4945]: I0109 00:53:51.178749 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 09 00:53:52 crc kubenswrapper[4945]: I0109 00:53:52.004151 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb" Jan 09 00:53:52 crc 
Jan 09 00:53:52 crc kubenswrapper[4945]: E0109 00:53:52.004672 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 00:53:52 crc kubenswrapper[4945]: I0109 00:53:52.468927 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 09 00:53:52 crc kubenswrapper[4945]: I0109 00:53:52.472760 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.193743 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-6sr9b"]
Jan 09 00:54:00 crc kubenswrapper[4945]: E0109 00:54:00.194602 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f32c1a7-b9bf-44ad-8a4a-a882f105b638" containerName="dnsmasq-dns"
Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.194616 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f32c1a7-b9bf-44ad-8a4a-a882f105b638" containerName="dnsmasq-dns"
Jan 09 00:54:00 crc kubenswrapper[4945]: E0109 00:54:00.194624 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f32c1a7-b9bf-44ad-8a4a-a882f105b638" containerName="init"
Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.194629 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f32c1a7-b9bf-44ad-8a4a-a882f105b638" containerName="init"
Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.194786 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f32c1a7-b9bf-44ad-8a4a-a882f105b638" containerName="dnsmasq-dns"
Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.195381 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6sr9b"
Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.205414 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6sr9b"]
Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.302585 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-2f1d-account-create-update-vm58z"]
Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.303918 4945 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/placement-2f1d-account-create-update-vm58z" Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.306328 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.310979 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2f1d-account-create-update-vm58z"] Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.353854 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3cb9ec-890b-44a1-8ef2-149b421860cd-operator-scripts\") pod \"placement-db-create-6sr9b\" (UID: \"fa3cb9ec-890b-44a1-8ef2-149b421860cd\") " pod="openstack/placement-db-create-6sr9b" Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.353969 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m26m\" (UniqueName: \"kubernetes.io/projected/fa3cb9ec-890b-44a1-8ef2-149b421860cd-kube-api-access-2m26m\") pod \"placement-db-create-6sr9b\" (UID: \"fa3cb9ec-890b-44a1-8ef2-149b421860cd\") " pod="openstack/placement-db-create-6sr9b" Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.455111 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m26m\" (UniqueName: \"kubernetes.io/projected/fa3cb9ec-890b-44a1-8ef2-149b421860cd-kube-api-access-2m26m\") pod \"placement-db-create-6sr9b\" (UID: \"fa3cb9ec-890b-44a1-8ef2-149b421860cd\") " pod="openstack/placement-db-create-6sr9b" Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.455181 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr8cw\" (UniqueName: \"kubernetes.io/projected/777ee79a-fbcc-49ed-8e95-287c51727cee-kube-api-access-kr8cw\") pod \"placement-2f1d-account-create-update-vm58z\" (UID: \"777ee79a-fbcc-49ed-8e95-287c51727cee\") " pod="openstack/placement-2f1d-account-create-update-vm58z" Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.455271 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3cb9ec-890b-44a1-8ef2-149b421860cd-operator-scripts\") pod \"placement-db-create-6sr9b\" (UID: \"fa3cb9ec-890b-44a1-8ef2-149b421860cd\") " pod="openstack/placement-db-create-6sr9b" Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.455305 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/777ee79a-fbcc-49ed-8e95-287c51727cee-operator-scripts\") pod \"placement-2f1d-account-create-update-vm58z\" (UID: \"777ee79a-fbcc-49ed-8e95-287c51727cee\") " pod="openstack/placement-2f1d-account-create-update-vm58z" Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.456260 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3cb9ec-890b-44a1-8ef2-149b421860cd-operator-scripts\") pod \"placement-db-create-6sr9b\" (UID: \"fa3cb9ec-890b-44a1-8ef2-149b421860cd\") " pod="openstack/placement-db-create-6sr9b" Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.474764 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m26m\" (UniqueName: 
\"kubernetes.io/projected/fa3cb9ec-890b-44a1-8ef2-149b421860cd-kube-api-access-2m26m\") pod \"placement-db-create-6sr9b\" (UID: \"fa3cb9ec-890b-44a1-8ef2-149b421860cd\") " pod="openstack/placement-db-create-6sr9b" Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.512621 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6sr9b" Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.559220 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/777ee79a-fbcc-49ed-8e95-287c51727cee-operator-scripts\") pod \"placement-2f1d-account-create-update-vm58z\" (UID: \"777ee79a-fbcc-49ed-8e95-287c51727cee\") " pod="openstack/placement-2f1d-account-create-update-vm58z" Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.559321 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr8cw\" (UniqueName: \"kubernetes.io/projected/777ee79a-fbcc-49ed-8e95-287c51727cee-kube-api-access-kr8cw\") pod \"placement-2f1d-account-create-update-vm58z\" (UID: \"777ee79a-fbcc-49ed-8e95-287c51727cee\") " pod="openstack/placement-2f1d-account-create-update-vm58z" Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.560323 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/777ee79a-fbcc-49ed-8e95-287c51727cee-operator-scripts\") pod \"placement-2f1d-account-create-update-vm58z\" (UID: \"777ee79a-fbcc-49ed-8e95-287c51727cee\") " pod="openstack/placement-2f1d-account-create-update-vm58z" Jan 09 00:54:00 crc kubenswrapper[4945]: I0109 00:54:00.582455 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr8cw\" (UniqueName: \"kubernetes.io/projected/777ee79a-fbcc-49ed-8e95-287c51727cee-kube-api-access-kr8cw\") pod \"placement-2f1d-account-create-update-vm58z\" (UID: \"777ee79a-fbcc-49ed-8e95-287c51727cee\") " pod="openstack/placement-2f1d-account-create-update-vm58z" Jan 09 00:54:01 crc kubenswrapper[4945]: I0109 00:54:00.640745 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2f1d-account-create-update-vm58z" Jan 09 00:54:01 crc kubenswrapper[4945]: I0109 00:54:01.505672 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6sr9b"] Jan 09 00:54:01 crc kubenswrapper[4945]: I0109 00:54:01.515486 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2f1d-account-create-update-vm58z"] Jan 09 00:54:01 crc kubenswrapper[4945]: I0109 00:54:01.651321 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2f1d-account-create-update-vm58z" event={"ID":"777ee79a-fbcc-49ed-8e95-287c51727cee","Type":"ContainerStarted","Data":"36a9dc31d452c8ac08dbaa30dff2d3359ce9552f570806b5e350db1913bceddf"} Jan 09 00:54:01 crc kubenswrapper[4945]: I0109 00:54:01.652282 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6sr9b" event={"ID":"fa3cb9ec-890b-44a1-8ef2-149b421860cd","Type":"ContainerStarted","Data":"d4c6d5f4b8dbe4a67cae2a2463a913770ce17a2ead93f3805cc0cdf0a7fbae0c"} Jan 09 00:54:02 crc kubenswrapper[4945]: I0109 00:54:02.677112 4945 generic.go:334] "Generic (PLEG): container finished" podID="fa3cb9ec-890b-44a1-8ef2-149b421860cd" containerID="159f6bb2662aace3edae8bda41df45a946d818d9cd93fac5c5ea32338ef5ab85" exitCode=0 Jan 09 00:54:02 crc kubenswrapper[4945]: I0109 00:54:02.677252 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6sr9b" event={"ID":"fa3cb9ec-890b-44a1-8ef2-149b421860cd","Type":"ContainerDied","Data":"159f6bb2662aace3edae8bda41df45a946d818d9cd93fac5c5ea32338ef5ab85"} Jan 09 00:54:02 crc kubenswrapper[4945]: I0109 00:54:02.681310 4945 generic.go:334] "Generic (PLEG): container finished" podID="777ee79a-fbcc-49ed-8e95-287c51727cee" containerID="624f452d4e92899034642e58f4ecaf0aed7b9155c18780fe176dcd1eb87e91f6" exitCode=0 Jan 09 00:54:02 crc kubenswrapper[4945]: I0109 00:54:02.681556 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2f1d-account-create-update-vm58z" event={"ID":"777ee79a-fbcc-49ed-8e95-287c51727cee","Type":"ContainerDied","Data":"624f452d4e92899034642e58f4ecaf0aed7b9155c18780fe176dcd1eb87e91f6"} Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.121152 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6sr9b" Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.140627 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2f1d-account-create-update-vm58z" Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.222236 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3cb9ec-890b-44a1-8ef2-149b421860cd-operator-scripts\") pod \"fa3cb9ec-890b-44a1-8ef2-149b421860cd\" (UID: \"fa3cb9ec-890b-44a1-8ef2-149b421860cd\") " Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.222482 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m26m\" (UniqueName: \"kubernetes.io/projected/fa3cb9ec-890b-44a1-8ef2-149b421860cd-kube-api-access-2m26m\") pod \"fa3cb9ec-890b-44a1-8ef2-149b421860cd\" (UID: \"fa3cb9ec-890b-44a1-8ef2-149b421860cd\") " Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.222818 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa3cb9ec-890b-44a1-8ef2-149b421860cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa3cb9ec-890b-44a1-8ef2-149b421860cd" (UID: "fa3cb9ec-890b-44a1-8ef2-149b421860cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.228189 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa3cb9ec-890b-44a1-8ef2-149b421860cd-kube-api-access-2m26m" (OuterVolumeSpecName: "kube-api-access-2m26m") pod "fa3cb9ec-890b-44a1-8ef2-149b421860cd" (UID: "fa3cb9ec-890b-44a1-8ef2-149b421860cd"). InnerVolumeSpecName "kube-api-access-2m26m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.324439 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr8cw\" (UniqueName: \"kubernetes.io/projected/777ee79a-fbcc-49ed-8e95-287c51727cee-kube-api-access-kr8cw\") pod \"777ee79a-fbcc-49ed-8e95-287c51727cee\" (UID: \"777ee79a-fbcc-49ed-8e95-287c51727cee\") " Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.324661 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/777ee79a-fbcc-49ed-8e95-287c51727cee-operator-scripts\") pod \"777ee79a-fbcc-49ed-8e95-287c51727cee\" (UID: \"777ee79a-fbcc-49ed-8e95-287c51727cee\") " Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.325147 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m26m\" (UniqueName: \"kubernetes.io/projected/fa3cb9ec-890b-44a1-8ef2-149b421860cd-kube-api-access-2m26m\") on node \"crc\" DevicePath \"\"" Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.325168 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa3cb9ec-890b-44a1-8ef2-149b421860cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.325208 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/777ee79a-fbcc-49ed-8e95-287c51727cee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "777ee79a-fbcc-49ed-8e95-287c51727cee" (UID: "777ee79a-fbcc-49ed-8e95-287c51727cee"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.327733 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/777ee79a-fbcc-49ed-8e95-287c51727cee-kube-api-access-kr8cw" (OuterVolumeSpecName: "kube-api-access-kr8cw") pod "777ee79a-fbcc-49ed-8e95-287c51727cee" (UID: "777ee79a-fbcc-49ed-8e95-287c51727cee"). InnerVolumeSpecName "kube-api-access-kr8cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.426797 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/777ee79a-fbcc-49ed-8e95-287c51727cee-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.426838 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr8cw\" (UniqueName: \"kubernetes.io/projected/777ee79a-fbcc-49ed-8e95-287c51727cee-kube-api-access-kr8cw\") on node \"crc\" DevicePath \"\"" Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.699321 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2f1d-account-create-update-vm58z" event={"ID":"777ee79a-fbcc-49ed-8e95-287c51727cee","Type":"ContainerDied","Data":"36a9dc31d452c8ac08dbaa30dff2d3359ce9552f570806b5e350db1913bceddf"} Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.699352 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2f1d-account-create-update-vm58z" Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.699368 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36a9dc31d452c8ac08dbaa30dff2d3359ce9552f570806b5e350db1913bceddf" Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.700823 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6sr9b" event={"ID":"fa3cb9ec-890b-44a1-8ef2-149b421860cd","Type":"ContainerDied","Data":"d4c6d5f4b8dbe4a67cae2a2463a913770ce17a2ead93f3805cc0cdf0a7fbae0c"} Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.700842 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4c6d5f4b8dbe4a67cae2a2463a913770ce17a2ead93f3805cc0cdf0a7fbae0c" Jan 09 00:54:04 crc kubenswrapper[4945]: I0109 00:54:04.700877 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-6sr9b" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.663092 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-n99ps"] Jan 09 00:54:05 crc kubenswrapper[4945]: E0109 00:54:05.663884 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa3cb9ec-890b-44a1-8ef2-149b421860cd" containerName="mariadb-database-create" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.663909 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa3cb9ec-890b-44a1-8ef2-149b421860cd" containerName="mariadb-database-create" Jan 09 00:54:05 crc kubenswrapper[4945]: E0109 00:54:05.663941 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777ee79a-fbcc-49ed-8e95-287c51727cee" containerName="mariadb-account-create-update" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.663950 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="777ee79a-fbcc-49ed-8e95-287c51727cee" containerName="mariadb-account-create-update" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.664204 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="777ee79a-fbcc-49ed-8e95-287c51727cee" containerName="mariadb-account-create-update" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.664241 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa3cb9ec-890b-44a1-8ef2-149b421860cd" containerName="mariadb-database-create" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.665073 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.667968 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.668021 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-r6sgw" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.670608 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.678043 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67ff6bf8fc-shzz7"] Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.679795 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.696290 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-n99ps"] Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.711671 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67ff6bf8fc-shzz7"] Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.851024 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/604969b3-7c1f-429e-ae71-a5ad0c8b9729-logs\") pod \"placement-db-sync-n99ps\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.851087 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-combined-ca-bundle\") pod \"placement-db-sync-n99ps\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.851305 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-config\") pod \"dnsmasq-dns-67ff6bf8fc-shzz7\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.851392 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-ovsdbserver-nb\") pod \"dnsmasq-dns-67ff6bf8fc-shzz7\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.851430 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-scripts\") pod \"placement-db-sync-n99ps\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.851515 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc8wh\" (UniqueName: \"kubernetes.io/projected/ef6f3ace-5b81-49d4-9995-a61e35927da2-kube-api-access-qc8wh\") pod \"dnsmasq-dns-67ff6bf8fc-shzz7\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.851555 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-dns-svc\") pod \"dnsmasq-dns-67ff6bf8fc-shzz7\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.851590 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tthb\" (UniqueName: \"kubernetes.io/projected/604969b3-7c1f-429e-ae71-a5ad0c8b9729-kube-api-access-6tthb\") pod \"placement-db-sync-n99ps\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " 
pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.851634 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-config-data\") pod \"placement-db-sync-n99ps\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.851673 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-ovsdbserver-sb\") pod \"dnsmasq-dns-67ff6bf8fc-shzz7\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.953068 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tthb\" (UniqueName: \"kubernetes.io/projected/604969b3-7c1f-429e-ae71-a5ad0c8b9729-kube-api-access-6tthb\") pod \"placement-db-sync-n99ps\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.953170 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-config-data\") pod \"placement-db-sync-n99ps\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.953204 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-ovsdbserver-sb\") pod \"dnsmasq-dns-67ff6bf8fc-shzz7\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.953281 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/604969b3-7c1f-429e-ae71-a5ad0c8b9729-logs\") pod \"placement-db-sync-n99ps\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.953326 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-combined-ca-bundle\") pod \"placement-db-sync-n99ps\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.953367 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-config\") pod \"dnsmasq-dns-67ff6bf8fc-shzz7\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.953402 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-ovsdbserver-nb\") pod \"dnsmasq-dns-67ff6bf8fc-shzz7\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 
00:54:05.953425 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-scripts\") pod \"placement-db-sync-n99ps\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.953459 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc8wh\" (UniqueName: \"kubernetes.io/projected/ef6f3ace-5b81-49d4-9995-a61e35927da2-kube-api-access-qc8wh\") pod \"dnsmasq-dns-67ff6bf8fc-shzz7\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.953484 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-dns-svc\") pod \"dnsmasq-dns-67ff6bf8fc-shzz7\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.954380 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-dns-svc\") pod \"dnsmasq-dns-67ff6bf8fc-shzz7\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.954893 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/604969b3-7c1f-429e-ae71-a5ad0c8b9729-logs\") pod \"placement-db-sync-n99ps\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.954929 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-ovsdbserver-sb\") pod \"dnsmasq-dns-67ff6bf8fc-shzz7\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.955168 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-config\") pod \"dnsmasq-dns-67ff6bf8fc-shzz7\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.955617 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-ovsdbserver-nb\") pod \"dnsmasq-dns-67ff6bf8fc-shzz7\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.959890 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-scripts\") pod \"placement-db-sync-n99ps\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.960374 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-combined-ca-bundle\") pod 
\"placement-db-sync-n99ps\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.968806 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-config-data\") pod \"placement-db-sync-n99ps\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.971529 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tthb\" (UniqueName: \"kubernetes.io/projected/604969b3-7c1f-429e-ae71-a5ad0c8b9729-kube-api-access-6tthb\") pod \"placement-db-sync-n99ps\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.981924 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc8wh\" (UniqueName: \"kubernetes.io/projected/ef6f3ace-5b81-49d4-9995-a61e35927da2-kube-api-access-qc8wh\") pod \"dnsmasq-dns-67ff6bf8fc-shzz7\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:05 crc kubenswrapper[4945]: I0109 00:54:05.985831 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:06 crc kubenswrapper[4945]: I0109 00:54:06.008241 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:06 crc kubenswrapper[4945]: I0109 00:54:06.454710 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-n99ps"] Jan 09 00:54:06 crc kubenswrapper[4945]: W0109 00:54:06.458632 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod604969b3_7c1f_429e_ae71_a5ad0c8b9729.slice/crio-83604e66b12eecaa4fc590ed110766ea5763838692bbda1b06b19874d8be302f WatchSource:0}: Error finding container 83604e66b12eecaa4fc590ed110766ea5763838692bbda1b06b19874d8be302f: Status 404 returned error can't find the container with id 83604e66b12eecaa4fc590ed110766ea5763838692bbda1b06b19874d8be302f Jan 09 00:54:06 crc kubenswrapper[4945]: I0109 00:54:06.513157 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67ff6bf8fc-shzz7"] Jan 09 00:54:06 crc kubenswrapper[4945]: W0109 00:54:06.519904 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef6f3ace_5b81_49d4_9995_a61e35927da2.slice/crio-878142654f398fd82efd7ed42275aec4d35ad91519b479f6c9caebc86891bc75 WatchSource:0}: Error finding container 878142654f398fd82efd7ed42275aec4d35ad91519b479f6c9caebc86891bc75: Status 404 returned error can't find the container with id 878142654f398fd82efd7ed42275aec4d35ad91519b479f6c9caebc86891bc75 Jan 09 00:54:06 crc kubenswrapper[4945]: I0109 00:54:06.720535 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" event={"ID":"ef6f3ace-5b81-49d4-9995-a61e35927da2","Type":"ContainerStarted","Data":"1e34d3f9474c1b7b8cc485221c88f25f471dcb86478b577a95bee228dbafff91"} Jan 09 00:54:06 crc kubenswrapper[4945]: I0109 00:54:06.720592 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" 
event={"ID":"ef6f3ace-5b81-49d4-9995-a61e35927da2","Type":"ContainerStarted","Data":"878142654f398fd82efd7ed42275aec4d35ad91519b479f6c9caebc86891bc75"} Jan 09 00:54:06 crc kubenswrapper[4945]: I0109 00:54:06.722944 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n99ps" event={"ID":"604969b3-7c1f-429e-ae71-a5ad0c8b9729","Type":"ContainerStarted","Data":"dd17725e34c943b0ae4e752c797d784ce9dbf0fbccc78c403bbc91128fe5fe12"} Jan 09 00:54:06 crc kubenswrapper[4945]: I0109 00:54:06.722983 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n99ps" event={"ID":"604969b3-7c1f-429e-ae71-a5ad0c8b9729","Type":"ContainerStarted","Data":"83604e66b12eecaa4fc590ed110766ea5763838692bbda1b06b19874d8be302f"} Jan 09 00:54:06 crc kubenswrapper[4945]: I0109 00:54:06.768896 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-n99ps" podStartSLOduration=1.7688732090000001 podStartE2EDuration="1.768873209s" podCreationTimestamp="2026-01-09 00:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:54:06.761833786 +0000 UTC m=+5917.072992762" watchObservedRunningTime="2026-01-09 00:54:06.768873209 +0000 UTC m=+5917.080032155" Jan 09 00:54:06 crc kubenswrapper[4945]: E0109 00:54:06.902034 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef6f3ace_5b81_49d4_9995_a61e35927da2.slice/crio-conmon-1e34d3f9474c1b7b8cc485221c88f25f471dcb86478b577a95bee228dbafff91.scope\": RecentStats: unable to find data in memory cache]" Jan 09 00:54:07 crc kubenswrapper[4945]: I0109 00:54:07.000351 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb" Jan 09 00:54:07 crc kubenswrapper[4945]: E0109 00:54:07.000928 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:54:07 crc kubenswrapper[4945]: I0109 00:54:07.735922 4945 generic.go:334] "Generic (PLEG): container finished" podID="ef6f3ace-5b81-49d4-9995-a61e35927da2" containerID="1e34d3f9474c1b7b8cc485221c88f25f471dcb86478b577a95bee228dbafff91" exitCode=0 Jan 09 00:54:07 crc kubenswrapper[4945]: I0109 00:54:07.736031 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" event={"ID":"ef6f3ace-5b81-49d4-9995-a61e35927da2","Type":"ContainerDied","Data":"1e34d3f9474c1b7b8cc485221c88f25f471dcb86478b577a95bee228dbafff91"} Jan 09 00:54:08 crc kubenswrapper[4945]: I0109 00:54:08.748827 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" event={"ID":"ef6f3ace-5b81-49d4-9995-a61e35927da2","Type":"ContainerStarted","Data":"52ebc4ae63f7a76a85de5d103531d87962202f825516043a2747de7af6762985"} Jan 09 00:54:08 crc kubenswrapper[4945]: I0109 00:54:08.749493 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:08 crc kubenswrapper[4945]: I0109 
00:54:08.752551 4945 generic.go:334] "Generic (PLEG): container finished" podID="604969b3-7c1f-429e-ae71-a5ad0c8b9729" containerID="dd17725e34c943b0ae4e752c797d784ce9dbf0fbccc78c403bbc91128fe5fe12" exitCode=0 Jan 09 00:54:08 crc kubenswrapper[4945]: I0109 00:54:08.752593 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n99ps" event={"ID":"604969b3-7c1f-429e-ae71-a5ad0c8b9729","Type":"ContainerDied","Data":"dd17725e34c943b0ae4e752c797d784ce9dbf0fbccc78c403bbc91128fe5fe12"} Jan 09 00:54:08 crc kubenswrapper[4945]: I0109 00:54:08.782192 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" podStartSLOduration=3.7821667359999998 podStartE2EDuration="3.782166736s" podCreationTimestamp="2026-01-09 00:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:54:08.777601323 +0000 UTC m=+5919.088760269" watchObservedRunningTime="2026-01-09 00:54:08.782166736 +0000 UTC m=+5919.093325682" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.147967 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.229838 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-combined-ca-bundle\") pod \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.230034 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-config-data\") pod \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.230099 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-scripts\") pod \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.230149 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tthb\" (UniqueName: \"kubernetes.io/projected/604969b3-7c1f-429e-ae71-a5ad0c8b9729-kube-api-access-6tthb\") pod \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.230226 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/604969b3-7c1f-429e-ae71-a5ad0c8b9729-logs\") pod \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\" (UID: \"604969b3-7c1f-429e-ae71-a5ad0c8b9729\") " Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.231346 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/604969b3-7c1f-429e-ae71-a5ad0c8b9729-logs" (OuterVolumeSpecName: "logs") pod "604969b3-7c1f-429e-ae71-a5ad0c8b9729" (UID: "604969b3-7c1f-429e-ae71-a5ad0c8b9729"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.242333 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-scripts" (OuterVolumeSpecName: "scripts") pod "604969b3-7c1f-429e-ae71-a5ad0c8b9729" (UID: "604969b3-7c1f-429e-ae71-a5ad0c8b9729"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.242415 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604969b3-7c1f-429e-ae71-a5ad0c8b9729-kube-api-access-6tthb" (OuterVolumeSpecName: "kube-api-access-6tthb") pod "604969b3-7c1f-429e-ae71-a5ad0c8b9729" (UID: "604969b3-7c1f-429e-ae71-a5ad0c8b9729"). InnerVolumeSpecName "kube-api-access-6tthb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.256368 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "604969b3-7c1f-429e-ae71-a5ad0c8b9729" (UID: "604969b3-7c1f-429e-ae71-a5ad0c8b9729"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.257331 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-config-data" (OuterVolumeSpecName: "config-data") pod "604969b3-7c1f-429e-ae71-a5ad0c8b9729" (UID: "604969b3-7c1f-429e-ae71-a5ad0c8b9729"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.332232 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/604969b3-7c1f-429e-ae71-a5ad0c8b9729-logs\") on node \"crc\" DevicePath \"\"" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.332268 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.332283 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.332295 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604969b3-7c1f-429e-ae71-a5ad0c8b9729-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.332307 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tthb\" (UniqueName: \"kubernetes.io/projected/604969b3-7c1f-429e-ae71-a5ad0c8b9729-kube-api-access-6tthb\") on node \"crc\" DevicePath \"\"" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.776655 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n99ps" event={"ID":"604969b3-7c1f-429e-ae71-a5ad0c8b9729","Type":"ContainerDied","Data":"83604e66b12eecaa4fc590ed110766ea5763838692bbda1b06b19874d8be302f"} Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.776705 4945 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="83604e66b12eecaa4fc590ed110766ea5763838692bbda1b06b19874d8be302f" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.776828 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n99ps" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.886763 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-64997f759b-plbfx"] Jan 09 00:54:10 crc kubenswrapper[4945]: E0109 00:54:10.887163 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604969b3-7c1f-429e-ae71-a5ad0c8b9729" containerName="placement-db-sync" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.887179 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="604969b3-7c1f-429e-ae71-a5ad0c8b9729" containerName="placement-db-sync" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.887343 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="604969b3-7c1f-429e-ae71-a5ad0c8b9729" containerName="placement-db-sync" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.888317 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.891091 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.891598 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-r6sgw" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.894660 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 09 00:54:10 crc kubenswrapper[4945]: I0109 00:54:10.946420 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64997f759b-plbfx"] Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.044242 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de5eb7fd-a844-4806-92fb-b8d37736abee-scripts\") pod \"placement-64997f759b-plbfx\" (UID: \"de5eb7fd-a844-4806-92fb-b8d37736abee\") " pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.044299 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl2gd\" (UniqueName: \"kubernetes.io/projected/de5eb7fd-a844-4806-92fb-b8d37736abee-kube-api-access-gl2gd\") pod \"placement-64997f759b-plbfx\" (UID: \"de5eb7fd-a844-4806-92fb-b8d37736abee\") " pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.044335 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5eb7fd-a844-4806-92fb-b8d37736abee-config-data\") pod \"placement-64997f759b-plbfx\" (UID: \"de5eb7fd-a844-4806-92fb-b8d37736abee\") " pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.044480 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5eb7fd-a844-4806-92fb-b8d37736abee-logs\") pod \"placement-64997f759b-plbfx\" (UID: \"de5eb7fd-a844-4806-92fb-b8d37736abee\") " pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.044559 4945 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5eb7fd-a844-4806-92fb-b8d37736abee-combined-ca-bundle\") pod \"placement-64997f759b-plbfx\" (UID: \"de5eb7fd-a844-4806-92fb-b8d37736abee\") " pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.146557 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5eb7fd-a844-4806-92fb-b8d37736abee-combined-ca-bundle\") pod \"placement-64997f759b-plbfx\" (UID: \"de5eb7fd-a844-4806-92fb-b8d37736abee\") " pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.146640 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de5eb7fd-a844-4806-92fb-b8d37736abee-scripts\") pod \"placement-64997f759b-plbfx\" (UID: \"de5eb7fd-a844-4806-92fb-b8d37736abee\") " pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.146679 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl2gd\" (UniqueName: \"kubernetes.io/projected/de5eb7fd-a844-4806-92fb-b8d37736abee-kube-api-access-gl2gd\") pod \"placement-64997f759b-plbfx\" (UID: \"de5eb7fd-a844-4806-92fb-b8d37736abee\") " pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.146712 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5eb7fd-a844-4806-92fb-b8d37736abee-config-data\") pod \"placement-64997f759b-plbfx\" (UID: \"de5eb7fd-a844-4806-92fb-b8d37736abee\") " pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.146824 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5eb7fd-a844-4806-92fb-b8d37736abee-logs\") pod \"placement-64997f759b-plbfx\" (UID: \"de5eb7fd-a844-4806-92fb-b8d37736abee\") " pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.148506 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5eb7fd-a844-4806-92fb-b8d37736abee-logs\") pod \"placement-64997f759b-plbfx\" (UID: \"de5eb7fd-a844-4806-92fb-b8d37736abee\") " pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.152403 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5eb7fd-a844-4806-92fb-b8d37736abee-combined-ca-bundle\") pod \"placement-64997f759b-plbfx\" (UID: \"de5eb7fd-a844-4806-92fb-b8d37736abee\") " pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.152893 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5eb7fd-a844-4806-92fb-b8d37736abee-config-data\") pod \"placement-64997f759b-plbfx\" (UID: \"de5eb7fd-a844-4806-92fb-b8d37736abee\") " pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.167839 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/de5eb7fd-a844-4806-92fb-b8d37736abee-scripts\") pod \"placement-64997f759b-plbfx\" (UID: \"de5eb7fd-a844-4806-92fb-b8d37736abee\") " pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.172299 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl2gd\" (UniqueName: \"kubernetes.io/projected/de5eb7fd-a844-4806-92fb-b8d37736abee-kube-api-access-gl2gd\") pod \"placement-64997f759b-plbfx\" (UID: \"de5eb7fd-a844-4806-92fb-b8d37736abee\") " pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.203857 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:11 crc kubenswrapper[4945]: W0109 00:54:11.664431 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde5eb7fd_a844_4806_92fb_b8d37736abee.slice/crio-82c55226c31db2bed14c914b885bf542e6e5f6587c248ada0e6f205c76d62bed WatchSource:0}: Error finding container 82c55226c31db2bed14c914b885bf542e6e5f6587c248ada0e6f205c76d62bed: Status 404 returned error can't find the container with id 82c55226c31db2bed14c914b885bf542e6e5f6587c248ada0e6f205c76d62bed Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.666327 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64997f759b-plbfx"] Jan 09 00:54:11 crc kubenswrapper[4945]: I0109 00:54:11.799425 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64997f759b-plbfx" event={"ID":"de5eb7fd-a844-4806-92fb-b8d37736abee","Type":"ContainerStarted","Data":"82c55226c31db2bed14c914b885bf542e6e5f6587c248ada0e6f205c76d62bed"} Jan 09 00:54:12 crc kubenswrapper[4945]: I0109 00:54:12.813964 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64997f759b-plbfx" event={"ID":"de5eb7fd-a844-4806-92fb-b8d37736abee","Type":"ContainerStarted","Data":"1536f0f0d0c71c38c60b77b6861ed0d97274e5d92bad0fb447328902bfa0e80b"} Jan 09 00:54:12 crc kubenswrapper[4945]: I0109 00:54:12.814414 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64997f759b-plbfx" event={"ID":"de5eb7fd-a844-4806-92fb-b8d37736abee","Type":"ContainerStarted","Data":"c91ea29b23b9502c9372aadacce77e330b170a51b290fc66ac74015a44fcb29b"} Jan 09 00:54:12 crc kubenswrapper[4945]: I0109 00:54:12.814668 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:12 crc kubenswrapper[4945]: I0109 00:54:12.814739 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:12 crc kubenswrapper[4945]: I0109 00:54:12.853544 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-64997f759b-plbfx" podStartSLOduration=2.853508788 podStartE2EDuration="2.853508788s" podCreationTimestamp="2026-01-09 00:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:54:12.847636334 +0000 UTC m=+5923.158795310" watchObservedRunningTime="2026-01-09 00:54:12.853508788 +0000 UTC m=+5923.164667774" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.013700 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:54:16 crc kubenswrapper[4945]: 
I0109 00:54:16.102600 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545d85cf7c-44cms"] Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.103141 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-545d85cf7c-44cms" podUID="a38369be-67be-4a77-89bb-23ea75f4fe4d" containerName="dnsmasq-dns" containerID="cri-o://68179dfb1c4038c45bf2d23f2c42343533f2da89d4d5b6b2a21a33dec2e54dea" gracePeriod=10 Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.647609 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.756640 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-config\") pod \"a38369be-67be-4a77-89bb-23ea75f4fe4d\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.756719 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-ovsdbserver-nb\") pod \"a38369be-67be-4a77-89bb-23ea75f4fe4d\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.756791 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trj7r\" (UniqueName: \"kubernetes.io/projected/a38369be-67be-4a77-89bb-23ea75f4fe4d-kube-api-access-trj7r\") pod \"a38369be-67be-4a77-89bb-23ea75f4fe4d\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.756874 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-ovsdbserver-sb\") pod \"a38369be-67be-4a77-89bb-23ea75f4fe4d\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.756978 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-dns-svc\") pod \"a38369be-67be-4a77-89bb-23ea75f4fe4d\" (UID: \"a38369be-67be-4a77-89bb-23ea75f4fe4d\") " Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.762874 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a38369be-67be-4a77-89bb-23ea75f4fe4d-kube-api-access-trj7r" (OuterVolumeSpecName: "kube-api-access-trj7r") pod "a38369be-67be-4a77-89bb-23ea75f4fe4d" (UID: "a38369be-67be-4a77-89bb-23ea75f4fe4d"). InnerVolumeSpecName "kube-api-access-trj7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.803247 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a38369be-67be-4a77-89bb-23ea75f4fe4d" (UID: "a38369be-67be-4a77-89bb-23ea75f4fe4d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.805257 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-config" (OuterVolumeSpecName: "config") pod "a38369be-67be-4a77-89bb-23ea75f4fe4d" (UID: "a38369be-67be-4a77-89bb-23ea75f4fe4d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.807763 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a38369be-67be-4a77-89bb-23ea75f4fe4d" (UID: "a38369be-67be-4a77-89bb-23ea75f4fe4d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.808738 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a38369be-67be-4a77-89bb-23ea75f4fe4d" (UID: "a38369be-67be-4a77-89bb-23ea75f4fe4d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.856933 4945 generic.go:334] "Generic (PLEG): container finished" podID="a38369be-67be-4a77-89bb-23ea75f4fe4d" containerID="68179dfb1c4038c45bf2d23f2c42343533f2da89d4d5b6b2a21a33dec2e54dea" exitCode=0 Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.857059 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545d85cf7c-44cms" event={"ID":"a38369be-67be-4a77-89bb-23ea75f4fe4d","Type":"ContainerDied","Data":"68179dfb1c4038c45bf2d23f2c42343533f2da89d4d5b6b2a21a33dec2e54dea"} Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.857103 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545d85cf7c-44cms" event={"ID":"a38369be-67be-4a77-89bb-23ea75f4fe4d","Type":"ContainerDied","Data":"48b7f80469a179235a4f4a852dc9d040dc07937a82f6965278b5001b1191414a"} Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.857127 4945 scope.go:117] "RemoveContainer" containerID="68179dfb1c4038c45bf2d23f2c42343533f2da89d4d5b6b2a21a33dec2e54dea" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.857157 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-545d85cf7c-44cms" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.858445 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.858474 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trj7r\" (UniqueName: \"kubernetes.io/projected/a38369be-67be-4a77-89bb-23ea75f4fe4d-kube-api-access-trj7r\") on node \"crc\" DevicePath \"\"" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.858485 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.858496 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.858505 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a38369be-67be-4a77-89bb-23ea75f4fe4d-config\") on node \"crc\" DevicePath \"\"" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.890451 4945 scope.go:117] "RemoveContainer" containerID="e7934518dcf46feac258c7f0cbe65df7d376c0ba881b867df331b4ac73a56dc9" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.892678 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545d85cf7c-44cms"] Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.900488 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-545d85cf7c-44cms"] Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.919415 4945 scope.go:117] "RemoveContainer" containerID="68179dfb1c4038c45bf2d23f2c42343533f2da89d4d5b6b2a21a33dec2e54dea" Jan 09 00:54:16 crc kubenswrapper[4945]: E0109 00:54:16.919877 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68179dfb1c4038c45bf2d23f2c42343533f2da89d4d5b6b2a21a33dec2e54dea\": container with ID starting with 68179dfb1c4038c45bf2d23f2c42343533f2da89d4d5b6b2a21a33dec2e54dea not found: ID does not exist" containerID="68179dfb1c4038c45bf2d23f2c42343533f2da89d4d5b6b2a21a33dec2e54dea" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.919930 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68179dfb1c4038c45bf2d23f2c42343533f2da89d4d5b6b2a21a33dec2e54dea"} err="failed to get container status \"68179dfb1c4038c45bf2d23f2c42343533f2da89d4d5b6b2a21a33dec2e54dea\": rpc error: code = NotFound desc = could not find container \"68179dfb1c4038c45bf2d23f2c42343533f2da89d4d5b6b2a21a33dec2e54dea\": container with ID starting with 68179dfb1c4038c45bf2d23f2c42343533f2da89d4d5b6b2a21a33dec2e54dea not found: ID does not exist" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.919960 4945 scope.go:117] "RemoveContainer" containerID="e7934518dcf46feac258c7f0cbe65df7d376c0ba881b867df331b4ac73a56dc9" Jan 09 00:54:16 crc kubenswrapper[4945]: E0109 00:54:16.920301 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e7934518dcf46feac258c7f0cbe65df7d376c0ba881b867df331b4ac73a56dc9\": container with ID starting with e7934518dcf46feac258c7f0cbe65df7d376c0ba881b867df331b4ac73a56dc9 not found: ID does not exist" containerID="e7934518dcf46feac258c7f0cbe65df7d376c0ba881b867df331b4ac73a56dc9" Jan 09 00:54:16 crc kubenswrapper[4945]: I0109 00:54:16.920338 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7934518dcf46feac258c7f0cbe65df7d376c0ba881b867df331b4ac73a56dc9"} err="failed to get container status \"e7934518dcf46feac258c7f0cbe65df7d376c0ba881b867df331b4ac73a56dc9\": rpc error: code = NotFound desc = could not find container \"e7934518dcf46feac258c7f0cbe65df7d376c0ba881b867df331b4ac73a56dc9\": container with ID starting with e7934518dcf46feac258c7f0cbe65df7d376c0ba881b867df331b4ac73a56dc9 not found: ID does not exist" Jan 09 00:54:18 crc kubenswrapper[4945]: I0109 00:54:18.013561 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a38369be-67be-4a77-89bb-23ea75f4fe4d" path="/var/lib/kubelet/pods/a38369be-67be-4a77-89bb-23ea75f4fe4d/volumes" Jan 09 00:54:20 crc kubenswrapper[4945]: I0109 00:54:20.007595 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb" Jan 09 00:54:20 crc kubenswrapper[4945]: E0109 00:54:20.008102 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:54:34 crc kubenswrapper[4945]: I0109 00:54:34.000710 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb" Jan 09 00:54:34 crc kubenswrapper[4945]: E0109 00:54:34.001939 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:54:36 crc kubenswrapper[4945]: I0109 00:54:36.630767 4945 scope.go:117] "RemoveContainer" containerID="8dd84795942fc9763e6a73d30b08c173b2dab3485a28e2f0fa21f44dc06bdbe9" Jan 09 00:54:36 crc kubenswrapper[4945]: I0109 00:54:36.658443 4945 scope.go:117] "RemoveContainer" containerID="fdd14c843320a2b0200077ab4bc9e6403ef9ae620d8a7407c13be17e6882c05d" Jan 09 00:54:42 crc kubenswrapper[4945]: I0109 00:54:42.225072 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:42 crc kubenswrapper[4945]: I0109 00:54:42.251696 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-64997f759b-plbfx" Jan 09 00:54:47 crc kubenswrapper[4945]: I0109 00:54:47.001653 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb" Jan 09 00:54:47 crc kubenswrapper[4945]: E0109 00:54:47.002485 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:55:01 crc kubenswrapper[4945]: I0109 00:55:01.000773 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb" Jan 09 00:55:01 crc kubenswrapper[4945]: E0109 00:55:01.001666 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:55:03 crc kubenswrapper[4945]: E0109 00:55:03.724617 4945 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.74:33654->38.102.83.74:45665: write tcp 38.102.83.74:33654->38.102.83.74:45665: write: connection reset by peer Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.321702 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-nvm8x"] Jan 09 00:55:06 crc kubenswrapper[4945]: E0109 00:55:06.322480 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a38369be-67be-4a77-89bb-23ea75f4fe4d" containerName="init" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.322500 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a38369be-67be-4a77-89bb-23ea75f4fe4d" containerName="init" Jan 09 00:55:06 crc kubenswrapper[4945]: E0109 00:55:06.322545 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a38369be-67be-4a77-89bb-23ea75f4fe4d" containerName="dnsmasq-dns" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.322555 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a38369be-67be-4a77-89bb-23ea75f4fe4d" containerName="dnsmasq-dns" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.322758 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="a38369be-67be-4a77-89bb-23ea75f4fe4d" containerName="dnsmasq-dns" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.323667 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-nvm8x" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.333059 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-nvm8x"] Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.414152 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-vsm4l"] Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.415414 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-vsm4l" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.431395 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vsm4l"] Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.447059 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n4tw\" (UniqueName: \"kubernetes.io/projected/4e45dcc7-78b7-462c-9137-48181b6c114a-kube-api-access-4n4tw\") pod \"nova-api-db-create-nvm8x\" (UID: \"4e45dcc7-78b7-462c-9137-48181b6c114a\") " pod="openstack/nova-api-db-create-nvm8x" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.447232 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e45dcc7-78b7-462c-9137-48181b6c114a-operator-scripts\") pod \"nova-api-db-create-nvm8x\" (UID: \"4e45dcc7-78b7-462c-9137-48181b6c114a\") " pod="openstack/nova-api-db-create-nvm8x" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.523119 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3127-account-create-update-tnrnz"] Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.524181 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3127-account-create-update-tnrnz" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.526604 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.534852 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3127-account-create-update-tnrnz"] Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.548549 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1a19223-51f2-407d-92a3-8bea7f37f1fb-operator-scripts\") pod \"nova-cell0-db-create-vsm4l\" (UID: \"d1a19223-51f2-407d-92a3-8bea7f37f1fb\") " pod="openstack/nova-cell0-db-create-vsm4l" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.548628 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e45dcc7-78b7-462c-9137-48181b6c114a-operator-scripts\") pod \"nova-api-db-create-nvm8x\" (UID: \"4e45dcc7-78b7-462c-9137-48181b6c114a\") " pod="openstack/nova-api-db-create-nvm8x" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.548853 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn4cv\" (UniqueName: \"kubernetes.io/projected/d1a19223-51f2-407d-92a3-8bea7f37f1fb-kube-api-access-rn4cv\") pod \"nova-cell0-db-create-vsm4l\" (UID: \"d1a19223-51f2-407d-92a3-8bea7f37f1fb\") " pod="openstack/nova-cell0-db-create-vsm4l" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.548933 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n4tw\" (UniqueName: \"kubernetes.io/projected/4e45dcc7-78b7-462c-9137-48181b6c114a-kube-api-access-4n4tw\") pod \"nova-api-db-create-nvm8x\" (UID: \"4e45dcc7-78b7-462c-9137-48181b6c114a\") " pod="openstack/nova-api-db-create-nvm8x" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.549626 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4e45dcc7-78b7-462c-9137-48181b6c114a-operator-scripts\") pod \"nova-api-db-create-nvm8x\" (UID: \"4e45dcc7-78b7-462c-9137-48181b6c114a\") " pod="openstack/nova-api-db-create-nvm8x" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.568521 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n4tw\" (UniqueName: \"kubernetes.io/projected/4e45dcc7-78b7-462c-9137-48181b6c114a-kube-api-access-4n4tw\") pod \"nova-api-db-create-nvm8x\" (UID: \"4e45dcc7-78b7-462c-9137-48181b6c114a\") " pod="openstack/nova-api-db-create-nvm8x" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.622775 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7dp9w"] Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.624298 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7dp9w" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.645074 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7dp9w"] Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.645495 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-nvm8x" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.651375 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1-operator-scripts\") pod \"nova-api-3127-account-create-update-tnrnz\" (UID: \"b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1\") " pod="openstack/nova-api-3127-account-create-update-tnrnz" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.651457 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4t4l\" (UniqueName: \"kubernetes.io/projected/b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1-kube-api-access-t4t4l\") pod \"nova-api-3127-account-create-update-tnrnz\" (UID: \"b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1\") " pod="openstack/nova-api-3127-account-create-update-tnrnz" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.651513 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1a19223-51f2-407d-92a3-8bea7f37f1fb-operator-scripts\") pod \"nova-cell0-db-create-vsm4l\" (UID: \"d1a19223-51f2-407d-92a3-8bea7f37f1fb\") " pod="openstack/nova-cell0-db-create-vsm4l" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.651573 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn4cv\" (UniqueName: \"kubernetes.io/projected/d1a19223-51f2-407d-92a3-8bea7f37f1fb-kube-api-access-rn4cv\") pod \"nova-cell0-db-create-vsm4l\" (UID: \"d1a19223-51f2-407d-92a3-8bea7f37f1fb\") " pod="openstack/nova-cell0-db-create-vsm4l" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.653122 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1a19223-51f2-407d-92a3-8bea7f37f1fb-operator-scripts\") pod \"nova-cell0-db-create-vsm4l\" (UID: \"d1a19223-51f2-407d-92a3-8bea7f37f1fb\") " pod="openstack/nova-cell0-db-create-vsm4l" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.675350 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn4cv\" (UniqueName: 
\"kubernetes.io/projected/d1a19223-51f2-407d-92a3-8bea7f37f1fb-kube-api-access-rn4cv\") pod \"nova-cell0-db-create-vsm4l\" (UID: \"d1a19223-51f2-407d-92a3-8bea7f37f1fb\") " pod="openstack/nova-cell0-db-create-vsm4l" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.733820 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vsm4l" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.741310 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-79e3-account-create-update-9rgmt"] Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.742537 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-79e3-account-create-update-9rgmt" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.745133 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.752986 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-79e3-account-create-update-9rgmt"] Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.753189 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10-operator-scripts\") pod \"nova-cell1-db-create-7dp9w\" (UID: \"35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10\") " pod="openstack/nova-cell1-db-create-7dp9w" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.753260 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1-operator-scripts\") pod \"nova-api-3127-account-create-update-tnrnz\" (UID: \"b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1\") " pod="openstack/nova-api-3127-account-create-update-tnrnz" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.753336 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4t4l\" (UniqueName: \"kubernetes.io/projected/b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1-kube-api-access-t4t4l\") pod \"nova-api-3127-account-create-update-tnrnz\" (UID: \"b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1\") " pod="openstack/nova-api-3127-account-create-update-tnrnz" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.753357 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd2sh\" (UniqueName: \"kubernetes.io/projected/35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10-kube-api-access-pd2sh\") pod \"nova-cell1-db-create-7dp9w\" (UID: \"35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10\") " pod="openstack/nova-cell1-db-create-7dp9w" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.754155 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1-operator-scripts\") pod \"nova-api-3127-account-create-update-tnrnz\" (UID: \"b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1\") " pod="openstack/nova-api-3127-account-create-update-tnrnz" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.799660 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4t4l\" (UniqueName: \"kubernetes.io/projected/b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1-kube-api-access-t4t4l\") pod \"nova-api-3127-account-create-update-tnrnz\" (UID: 
\"b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1\") " pod="openstack/nova-api-3127-account-create-update-tnrnz" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.846888 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3127-account-create-update-tnrnz" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.856303 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd2sh\" (UniqueName: \"kubernetes.io/projected/35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10-kube-api-access-pd2sh\") pod \"nova-cell1-db-create-7dp9w\" (UID: \"35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10\") " pod="openstack/nova-cell1-db-create-7dp9w" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.856422 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4t4x\" (UniqueName: \"kubernetes.io/projected/a34718dd-0f79-40b2-afeb-e0c2f4b05d74-kube-api-access-j4t4x\") pod \"nova-cell0-79e3-account-create-update-9rgmt\" (UID: \"a34718dd-0f79-40b2-afeb-e0c2f4b05d74\") " pod="openstack/nova-cell0-79e3-account-create-update-9rgmt" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.856497 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a34718dd-0f79-40b2-afeb-e0c2f4b05d74-operator-scripts\") pod \"nova-cell0-79e3-account-create-update-9rgmt\" (UID: \"a34718dd-0f79-40b2-afeb-e0c2f4b05d74\") " pod="openstack/nova-cell0-79e3-account-create-update-9rgmt" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.856556 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10-operator-scripts\") pod \"nova-cell1-db-create-7dp9w\" (UID: \"35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10\") " pod="openstack/nova-cell1-db-create-7dp9w" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.857392 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10-operator-scripts\") pod \"nova-cell1-db-create-7dp9w\" (UID: \"35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10\") " pod="openstack/nova-cell1-db-create-7dp9w" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.877489 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd2sh\" (UniqueName: \"kubernetes.io/projected/35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10-kube-api-access-pd2sh\") pod \"nova-cell1-db-create-7dp9w\" (UID: \"35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10\") " pod="openstack/nova-cell1-db-create-7dp9w" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.946142 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-73e8-account-create-update-tz4jl"] Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.948099 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-73e8-account-create-update-tz4jl" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.951368 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.951690 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7dp9w" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.957804 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-73e8-account-create-update-tz4jl"] Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.958236 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4t4x\" (UniqueName: \"kubernetes.io/projected/a34718dd-0f79-40b2-afeb-e0c2f4b05d74-kube-api-access-j4t4x\") pod \"nova-cell0-79e3-account-create-update-9rgmt\" (UID: \"a34718dd-0f79-40b2-afeb-e0c2f4b05d74\") " pod="openstack/nova-cell0-79e3-account-create-update-9rgmt" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.958319 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a34718dd-0f79-40b2-afeb-e0c2f4b05d74-operator-scripts\") pod \"nova-cell0-79e3-account-create-update-9rgmt\" (UID: \"a34718dd-0f79-40b2-afeb-e0c2f4b05d74\") " pod="openstack/nova-cell0-79e3-account-create-update-9rgmt" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.959602 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a34718dd-0f79-40b2-afeb-e0c2f4b05d74-operator-scripts\") pod \"nova-cell0-79e3-account-create-update-9rgmt\" (UID: \"a34718dd-0f79-40b2-afeb-e0c2f4b05d74\") " pod="openstack/nova-cell0-79e3-account-create-update-9rgmt" Jan 09 00:55:06 crc kubenswrapper[4945]: I0109 00:55:06.986425 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4t4x\" (UniqueName: \"kubernetes.io/projected/a34718dd-0f79-40b2-afeb-e0c2f4b05d74-kube-api-access-j4t4x\") pod \"nova-cell0-79e3-account-create-update-9rgmt\" (UID: \"a34718dd-0f79-40b2-afeb-e0c2f4b05d74\") " pod="openstack/nova-cell0-79e3-account-create-update-9rgmt" Jan 09 00:55:07 crc kubenswrapper[4945]: I0109 00:55:07.060174 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4d7a36-fe46-48ef-835d-9ceef2b01dd3-operator-scripts\") pod \"nova-cell1-73e8-account-create-update-tz4jl\" (UID: \"8e4d7a36-fe46-48ef-835d-9ceef2b01dd3\") " pod="openstack/nova-cell1-73e8-account-create-update-tz4jl" Jan 09 00:55:07 crc kubenswrapper[4945]: I0109 00:55:07.060279 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzc9g\" (UniqueName: \"kubernetes.io/projected/8e4d7a36-fe46-48ef-835d-9ceef2b01dd3-kube-api-access-dzc9g\") pod \"nova-cell1-73e8-account-create-update-tz4jl\" (UID: \"8e4d7a36-fe46-48ef-835d-9ceef2b01dd3\") " pod="openstack/nova-cell1-73e8-account-create-update-tz4jl" Jan 09 00:55:07 crc kubenswrapper[4945]: I0109 00:55:07.162169 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4d7a36-fe46-48ef-835d-9ceef2b01dd3-operator-scripts\") pod \"nova-cell1-73e8-account-create-update-tz4jl\" (UID: \"8e4d7a36-fe46-48ef-835d-9ceef2b01dd3\") " pod="openstack/nova-cell1-73e8-account-create-update-tz4jl" Jan 09 00:55:07 crc kubenswrapper[4945]: I0109 00:55:07.162223 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzc9g\" (UniqueName: \"kubernetes.io/projected/8e4d7a36-fe46-48ef-835d-9ceef2b01dd3-kube-api-access-dzc9g\") pod 
\"nova-cell1-73e8-account-create-update-tz4jl\" (UID: \"8e4d7a36-fe46-48ef-835d-9ceef2b01dd3\") " pod="openstack/nova-cell1-73e8-account-create-update-tz4jl" Jan 09 00:55:07 crc kubenswrapper[4945]: I0109 00:55:07.163230 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4d7a36-fe46-48ef-835d-9ceef2b01dd3-operator-scripts\") pod \"nova-cell1-73e8-account-create-update-tz4jl\" (UID: \"8e4d7a36-fe46-48ef-835d-9ceef2b01dd3\") " pod="openstack/nova-cell1-73e8-account-create-update-tz4jl" Jan 09 00:55:07 crc kubenswrapper[4945]: I0109 00:55:07.181775 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzc9g\" (UniqueName: \"kubernetes.io/projected/8e4d7a36-fe46-48ef-835d-9ceef2b01dd3-kube-api-access-dzc9g\") pod \"nova-cell1-73e8-account-create-update-tz4jl\" (UID: \"8e4d7a36-fe46-48ef-835d-9ceef2b01dd3\") " pod="openstack/nova-cell1-73e8-account-create-update-tz4jl" Jan 09 00:55:07 crc kubenswrapper[4945]: I0109 00:55:07.192306 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-79e3-account-create-update-9rgmt" Jan 09 00:55:07 crc kubenswrapper[4945]: I0109 00:55:07.253791 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-nvm8x"] Jan 09 00:55:07 crc kubenswrapper[4945]: I0109 00:55:07.283087 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-73e8-account-create-update-tz4jl" Jan 09 00:55:07 crc kubenswrapper[4945]: I0109 00:55:07.327025 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vsm4l"] Jan 09 00:55:07 crc kubenswrapper[4945]: I0109 00:55:07.328785 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nvm8x" event={"ID":"4e45dcc7-78b7-462c-9137-48181b6c114a","Type":"ContainerStarted","Data":"703fe1a5116070f26410fbc3d42dbff28f6b23bb1c9015259f091d36f287af04"} Jan 09 00:55:07 crc kubenswrapper[4945]: I0109 00:55:07.401240 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3127-account-create-update-tnrnz"] Jan 09 00:55:07 crc kubenswrapper[4945]: I0109 00:55:07.465656 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7dp9w"] Jan 09 00:55:07 crc kubenswrapper[4945]: I0109 00:55:07.634806 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-79e3-account-create-update-9rgmt"] Jan 09 00:55:07 crc kubenswrapper[4945]: W0109 00:55:07.653004 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda34718dd_0f79_40b2_afeb_e0c2f4b05d74.slice/crio-2d36cd448669bcb46303c023f617e383923a860e000495f2caf6e6d2adf5a557 WatchSource:0}: Error finding container 2d36cd448669bcb46303c023f617e383923a860e000495f2caf6e6d2adf5a557: Status 404 returned error can't find the container with id 2d36cd448669bcb46303c023f617e383923a860e000495f2caf6e6d2adf5a557 Jan 09 00:55:07 crc kubenswrapper[4945]: W0109 00:55:07.774254 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e4d7a36_fe46_48ef_835d_9ceef2b01dd3.slice/crio-eac53c0dd7e1bf9ace8c925e38a8b720ef2ae5795a47cf63da423b1993fd71d2 WatchSource:0}: Error finding container eac53c0dd7e1bf9ace8c925e38a8b720ef2ae5795a47cf63da423b1993fd71d2: Status 404 returned error can't find the 
container with id eac53c0dd7e1bf9ace8c925e38a8b720ef2ae5795a47cf63da423b1993fd71d2 Jan 09 00:55:07 crc kubenswrapper[4945]: I0109 00:55:07.776423 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-73e8-account-create-update-tz4jl"] Jan 09 00:55:08 crc kubenswrapper[4945]: I0109 00:55:08.337243 4945 generic.go:334] "Generic (PLEG): container finished" podID="35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10" containerID="bfde8956adc8fc5147deb3a2c814a8d5124d17e932e110702310677e5ccd5ecf" exitCode=0 Jan 09 00:55:08 crc kubenswrapper[4945]: I0109 00:55:08.337362 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7dp9w" event={"ID":"35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10","Type":"ContainerDied","Data":"bfde8956adc8fc5147deb3a2c814a8d5124d17e932e110702310677e5ccd5ecf"} Jan 09 00:55:08 crc kubenswrapper[4945]: I0109 00:55:08.337529 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7dp9w" event={"ID":"35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10","Type":"ContainerStarted","Data":"5b5ceead267891bb1393f6ce4beee90559791d3eb66c32c4d4af4cc6a3d3435d"} Jan 09 00:55:08 crc kubenswrapper[4945]: I0109 00:55:08.339005 4945 generic.go:334] "Generic (PLEG): container finished" podID="b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1" containerID="e48f67d17b31f692516d3f65e326421d8c10523621bb9891f19e07382787e384" exitCode=0 Jan 09 00:55:08 crc kubenswrapper[4945]: I0109 00:55:08.339074 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3127-account-create-update-tnrnz" event={"ID":"b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1","Type":"ContainerDied","Data":"e48f67d17b31f692516d3f65e326421d8c10523621bb9891f19e07382787e384"} Jan 09 00:55:08 crc kubenswrapper[4945]: I0109 00:55:08.339103 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3127-account-create-update-tnrnz" event={"ID":"b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1","Type":"ContainerStarted","Data":"967fdeb879fcf9f50dfd776ea575426355b40b2fac6fd56811ee4a05d9b61509"} Jan 09 00:55:08 crc kubenswrapper[4945]: I0109 00:55:08.342064 4945 generic.go:334] "Generic (PLEG): container finished" podID="a34718dd-0f79-40b2-afeb-e0c2f4b05d74" containerID="5ef4a4794772e7f294b7a148a6ea2f5798bc54bb5108d36b20693886bb90542a" exitCode=0 Jan 09 00:55:08 crc kubenswrapper[4945]: I0109 00:55:08.342149 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-79e3-account-create-update-9rgmt" event={"ID":"a34718dd-0f79-40b2-afeb-e0c2f4b05d74","Type":"ContainerDied","Data":"5ef4a4794772e7f294b7a148a6ea2f5798bc54bb5108d36b20693886bb90542a"} Jan 09 00:55:08 crc kubenswrapper[4945]: I0109 00:55:08.342273 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-79e3-account-create-update-9rgmt" event={"ID":"a34718dd-0f79-40b2-afeb-e0c2f4b05d74","Type":"ContainerStarted","Data":"2d36cd448669bcb46303c023f617e383923a860e000495f2caf6e6d2adf5a557"} Jan 09 00:55:08 crc kubenswrapper[4945]: I0109 00:55:08.343750 4945 generic.go:334] "Generic (PLEG): container finished" podID="8e4d7a36-fe46-48ef-835d-9ceef2b01dd3" containerID="c4c682af01ebe842e5031f26cc47aa0aa1e943988b431fb24ee27b51dab605c1" exitCode=0 Jan 09 00:55:08 crc kubenswrapper[4945]: I0109 00:55:08.343779 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-73e8-account-create-update-tz4jl" event={"ID":"8e4d7a36-fe46-48ef-835d-9ceef2b01dd3","Type":"ContainerDied","Data":"c4c682af01ebe842e5031f26cc47aa0aa1e943988b431fb24ee27b51dab605c1"} Jan 09 
00:55:08 crc kubenswrapper[4945]: I0109 00:55:08.343801 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-73e8-account-create-update-tz4jl" event={"ID":"8e4d7a36-fe46-48ef-835d-9ceef2b01dd3","Type":"ContainerStarted","Data":"eac53c0dd7e1bf9ace8c925e38a8b720ef2ae5795a47cf63da423b1993fd71d2"} Jan 09 00:55:08 crc kubenswrapper[4945]: I0109 00:55:08.345189 4945 generic.go:334] "Generic (PLEG): container finished" podID="d1a19223-51f2-407d-92a3-8bea7f37f1fb" containerID="055e64eb4a4c5ce1e53035e51e26c3e829a1b7cb58d9c8fd2aca834c9bb8ef6d" exitCode=0 Jan 09 00:55:08 crc kubenswrapper[4945]: I0109 00:55:08.345243 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vsm4l" event={"ID":"d1a19223-51f2-407d-92a3-8bea7f37f1fb","Type":"ContainerDied","Data":"055e64eb4a4c5ce1e53035e51e26c3e829a1b7cb58d9c8fd2aca834c9bb8ef6d"} Jan 09 00:55:08 crc kubenswrapper[4945]: I0109 00:55:08.345265 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vsm4l" event={"ID":"d1a19223-51f2-407d-92a3-8bea7f37f1fb","Type":"ContainerStarted","Data":"6f9c69e6163060edfd702c4adc622ca069ca9369b1dfe9fa850b72ad2de00ec2"} Jan 09 00:55:08 crc kubenswrapper[4945]: I0109 00:55:08.347198 4945 generic.go:334] "Generic (PLEG): container finished" podID="4e45dcc7-78b7-462c-9137-48181b6c114a" containerID="6f4460bd69d58fb0645f055b1676f502c089957212f2edc9895fca63e9e46b07" exitCode=0 Jan 09 00:55:08 crc kubenswrapper[4945]: I0109 00:55:08.347227 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nvm8x" event={"ID":"4e45dcc7-78b7-462c-9137-48181b6c114a","Type":"ContainerDied","Data":"6f4460bd69d58fb0645f055b1676f502c089957212f2edc9895fca63e9e46b07"} Jan 09 00:55:09 crc kubenswrapper[4945]: I0109 00:55:09.822071 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-79e3-account-create-update-9rgmt" Jan 09 00:55:09 crc kubenswrapper[4945]: I0109 00:55:09.897724 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7dp9w" Jan 09 00:55:09 crc kubenswrapper[4945]: I0109 00:55:09.903204 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-nvm8x" Jan 09 00:55:09 crc kubenswrapper[4945]: I0109 00:55:09.913053 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4t4x\" (UniqueName: \"kubernetes.io/projected/a34718dd-0f79-40b2-afeb-e0c2f4b05d74-kube-api-access-j4t4x\") pod \"a34718dd-0f79-40b2-afeb-e0c2f4b05d74\" (UID: \"a34718dd-0f79-40b2-afeb-e0c2f4b05d74\") " Jan 09 00:55:09 crc kubenswrapper[4945]: I0109 00:55:09.913303 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a34718dd-0f79-40b2-afeb-e0c2f4b05d74-operator-scripts\") pod \"a34718dd-0f79-40b2-afeb-e0c2f4b05d74\" (UID: \"a34718dd-0f79-40b2-afeb-e0c2f4b05d74\") " Jan 09 00:55:09 crc kubenswrapper[4945]: I0109 00:55:09.913965 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a34718dd-0f79-40b2-afeb-e0c2f4b05d74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a34718dd-0f79-40b2-afeb-e0c2f4b05d74" (UID: "a34718dd-0f79-40b2-afeb-e0c2f4b05d74"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:55:09 crc kubenswrapper[4945]: I0109 00:55:09.914465 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a34718dd-0f79-40b2-afeb-e0c2f4b05d74-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:09 crc kubenswrapper[4945]: I0109 00:55:09.916246 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-73e8-account-create-update-tz4jl" Jan 09 00:55:09 crc kubenswrapper[4945]: I0109 00:55:09.919539 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a34718dd-0f79-40b2-afeb-e0c2f4b05d74-kube-api-access-j4t4x" (OuterVolumeSpecName: "kube-api-access-j4t4x") pod "a34718dd-0f79-40b2-afeb-e0c2f4b05d74" (UID: "a34718dd-0f79-40b2-afeb-e0c2f4b05d74"). InnerVolumeSpecName "kube-api-access-j4t4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:55:09 crc kubenswrapper[4945]: I0109 00:55:09.922846 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3127-account-create-update-tnrnz" Jan 09 00:55:09 crc kubenswrapper[4945]: I0109 00:55:09.932825 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vsm4l" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.015629 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn4cv\" (UniqueName: \"kubernetes.io/projected/d1a19223-51f2-407d-92a3-8bea7f37f1fb-kube-api-access-rn4cv\") pod \"d1a19223-51f2-407d-92a3-8bea7f37f1fb\" (UID: \"d1a19223-51f2-407d-92a3-8bea7f37f1fb\") " Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.015769 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e45dcc7-78b7-462c-9137-48181b6c114a-operator-scripts\") pod \"4e45dcc7-78b7-462c-9137-48181b6c114a\" (UID: \"4e45dcc7-78b7-462c-9137-48181b6c114a\") " Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.015798 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4t4l\" (UniqueName: \"kubernetes.io/projected/b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1-kube-api-access-t4t4l\") pod \"b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1\" (UID: \"b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1\") " Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.015858 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1-operator-scripts\") pod \"b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1\" (UID: \"b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1\") " Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.015886 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4d7a36-fe46-48ef-835d-9ceef2b01dd3-operator-scripts\") pod \"8e4d7a36-fe46-48ef-835d-9ceef2b01dd3\" (UID: \"8e4d7a36-fe46-48ef-835d-9ceef2b01dd3\") " Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.015945 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n4tw\" (UniqueName: \"kubernetes.io/projected/4e45dcc7-78b7-462c-9137-48181b6c114a-kube-api-access-4n4tw\") pod \"4e45dcc7-78b7-462c-9137-48181b6c114a\" (UID: 
\"4e45dcc7-78b7-462c-9137-48181b6c114a\") " Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.015961 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd2sh\" (UniqueName: \"kubernetes.io/projected/35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10-kube-api-access-pd2sh\") pod \"35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10\" (UID: \"35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10\") " Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.016107 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1a19223-51f2-407d-92a3-8bea7f37f1fb-operator-scripts\") pod \"d1a19223-51f2-407d-92a3-8bea7f37f1fb\" (UID: \"d1a19223-51f2-407d-92a3-8bea7f37f1fb\") " Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.016178 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzc9g\" (UniqueName: \"kubernetes.io/projected/8e4d7a36-fe46-48ef-835d-9ceef2b01dd3-kube-api-access-dzc9g\") pod \"8e4d7a36-fe46-48ef-835d-9ceef2b01dd3\" (UID: \"8e4d7a36-fe46-48ef-835d-9ceef2b01dd3\") " Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.016222 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10-operator-scripts\") pod \"35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10\" (UID: \"35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10\") " Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.016362 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e45dcc7-78b7-462c-9137-48181b6c114a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e45dcc7-78b7-462c-9137-48181b6c114a" (UID: "4e45dcc7-78b7-462c-9137-48181b6c114a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.016413 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e4d7a36-fe46-48ef-835d-9ceef2b01dd3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e4d7a36-fe46-48ef-835d-9ceef2b01dd3" (UID: "8e4d7a36-fe46-48ef-835d-9ceef2b01dd3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.016705 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e45dcc7-78b7-462c-9137-48181b6c114a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.016722 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e4d7a36-fe46-48ef-835d-9ceef2b01dd3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.016736 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4t4x\" (UniqueName: \"kubernetes.io/projected/a34718dd-0f79-40b2-afeb-e0c2f4b05d74-kube-api-access-j4t4x\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.016830 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1" (UID: "b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.017773 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1a19223-51f2-407d-92a3-8bea7f37f1fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1a19223-51f2-407d-92a3-8bea7f37f1fb" (UID: "d1a19223-51f2-407d-92a3-8bea7f37f1fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.018302 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10" (UID: "35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.018982 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1a19223-51f2-407d-92a3-8bea7f37f1fb-kube-api-access-rn4cv" (OuterVolumeSpecName: "kube-api-access-rn4cv") pod "d1a19223-51f2-407d-92a3-8bea7f37f1fb" (UID: "d1a19223-51f2-407d-92a3-8bea7f37f1fb"). InnerVolumeSpecName "kube-api-access-rn4cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.019500 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1-kube-api-access-t4t4l" (OuterVolumeSpecName: "kube-api-access-t4t4l") pod "b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1" (UID: "b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1"). InnerVolumeSpecName "kube-api-access-t4t4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.020972 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e4d7a36-fe46-48ef-835d-9ceef2b01dd3-kube-api-access-dzc9g" (OuterVolumeSpecName: "kube-api-access-dzc9g") pod "8e4d7a36-fe46-48ef-835d-9ceef2b01dd3" (UID: "8e4d7a36-fe46-48ef-835d-9ceef2b01dd3"). 
InnerVolumeSpecName "kube-api-access-dzc9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.023233 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e45dcc7-78b7-462c-9137-48181b6c114a-kube-api-access-4n4tw" (OuterVolumeSpecName: "kube-api-access-4n4tw") pod "4e45dcc7-78b7-462c-9137-48181b6c114a" (UID: "4e45dcc7-78b7-462c-9137-48181b6c114a"). InnerVolumeSpecName "kube-api-access-4n4tw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.025367 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10-kube-api-access-pd2sh" (OuterVolumeSpecName: "kube-api-access-pd2sh") pod "35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10" (UID: "35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10"). InnerVolumeSpecName "kube-api-access-pd2sh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.118409 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzc9g\" (UniqueName: \"kubernetes.io/projected/8e4d7a36-fe46-48ef-835d-9ceef2b01dd3-kube-api-access-dzc9g\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.118440 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.118450 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn4cv\" (UniqueName: \"kubernetes.io/projected/d1a19223-51f2-407d-92a3-8bea7f37f1fb-kube-api-access-rn4cv\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.118458 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4t4l\" (UniqueName: \"kubernetes.io/projected/b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1-kube-api-access-t4t4l\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.118468 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.118478 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n4tw\" (UniqueName: \"kubernetes.io/projected/4e45dcc7-78b7-462c-9137-48181b6c114a-kube-api-access-4n4tw\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.118487 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd2sh\" (UniqueName: \"kubernetes.io/projected/35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10-kube-api-access-pd2sh\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.118495 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1a19223-51f2-407d-92a3-8bea7f37f1fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.370377 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-73e8-account-create-update-tz4jl" 
event={"ID":"8e4d7a36-fe46-48ef-835d-9ceef2b01dd3","Type":"ContainerDied","Data":"eac53c0dd7e1bf9ace8c925e38a8b720ef2ae5795a47cf63da423b1993fd71d2"} Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.370753 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eac53c0dd7e1bf9ace8c925e38a8b720ef2ae5795a47cf63da423b1993fd71d2" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.370419 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-73e8-account-create-update-tz4jl" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.373009 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vsm4l" event={"ID":"d1a19223-51f2-407d-92a3-8bea7f37f1fb","Type":"ContainerDied","Data":"6f9c69e6163060edfd702c4adc622ca069ca9369b1dfe9fa850b72ad2de00ec2"} Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.373075 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f9c69e6163060edfd702c4adc622ca069ca9369b1dfe9fa850b72ad2de00ec2" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.373019 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vsm4l" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.376141 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nvm8x" event={"ID":"4e45dcc7-78b7-462c-9137-48181b6c114a","Type":"ContainerDied","Data":"703fe1a5116070f26410fbc3d42dbff28f6b23bb1c9015259f091d36f287af04"} Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.376173 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="703fe1a5116070f26410fbc3d42dbff28f6b23bb1c9015259f091d36f287af04" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.376186 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-nvm8x" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.380848 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7dp9w" event={"ID":"35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10","Type":"ContainerDied","Data":"5b5ceead267891bb1393f6ce4beee90559791d3eb66c32c4d4af4cc6a3d3435d"} Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.380878 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b5ceead267891bb1393f6ce4beee90559791d3eb66c32c4d4af4cc6a3d3435d" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.380875 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7dp9w" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.383273 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3127-account-create-update-tnrnz" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.383276 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3127-account-create-update-tnrnz" event={"ID":"b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1","Type":"ContainerDied","Data":"967fdeb879fcf9f50dfd776ea575426355b40b2fac6fd56811ee4a05d9b61509"} Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.383342 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="967fdeb879fcf9f50dfd776ea575426355b40b2fac6fd56811ee4a05d9b61509" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.385412 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-79e3-account-create-update-9rgmt" event={"ID":"a34718dd-0f79-40b2-afeb-e0c2f4b05d74","Type":"ContainerDied","Data":"2d36cd448669bcb46303c023f617e383923a860e000495f2caf6e6d2adf5a557"} Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.385464 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d36cd448669bcb46303c023f617e383923a860e000495f2caf6e6d2adf5a557" Jan 09 00:55:10 crc kubenswrapper[4945]: I0109 00:55:10.385489 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-79e3-account-create-update-9rgmt" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.909466 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2vhh7"] Jan 09 00:55:11 crc kubenswrapper[4945]: E0109 00:55:11.909929 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1" containerName="mariadb-account-create-update" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.909947 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1" containerName="mariadb-account-create-update" Jan 09 00:55:11 crc kubenswrapper[4945]: E0109 00:55:11.909960 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1a19223-51f2-407d-92a3-8bea7f37f1fb" containerName="mariadb-database-create" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.909967 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a19223-51f2-407d-92a3-8bea7f37f1fb" containerName="mariadb-database-create" Jan 09 00:55:11 crc kubenswrapper[4945]: E0109 00:55:11.909982 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e4d7a36-fe46-48ef-835d-9ceef2b01dd3" containerName="mariadb-account-create-update" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.910007 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e4d7a36-fe46-48ef-835d-9ceef2b01dd3" containerName="mariadb-account-create-update" Jan 09 00:55:11 crc kubenswrapper[4945]: E0109 00:55:11.910025 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a34718dd-0f79-40b2-afeb-e0c2f4b05d74" containerName="mariadb-account-create-update" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.910034 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a34718dd-0f79-40b2-afeb-e0c2f4b05d74" containerName="mariadb-account-create-update" Jan 09 00:55:11 crc kubenswrapper[4945]: E0109 00:55:11.910045 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e45dcc7-78b7-462c-9137-48181b6c114a" containerName="mariadb-database-create" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.910052 4945 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4e45dcc7-78b7-462c-9137-48181b6c114a" containerName="mariadb-database-create" Jan 09 00:55:11 crc kubenswrapper[4945]: E0109 00:55:11.910082 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10" containerName="mariadb-database-create" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.910090 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10" containerName="mariadb-database-create" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.910335 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1" containerName="mariadb-account-create-update" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.910349 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1a19223-51f2-407d-92a3-8bea7f37f1fb" containerName="mariadb-database-create" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.910365 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10" containerName="mariadb-database-create" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.910379 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="a34718dd-0f79-40b2-afeb-e0c2f4b05d74" containerName="mariadb-account-create-update" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.910393 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e4d7a36-fe46-48ef-835d-9ceef2b01dd3" containerName="mariadb-account-create-update" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.910402 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e45dcc7-78b7-462c-9137-48181b6c114a" containerName="mariadb-database-create" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.911172 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2vhh7" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.919759 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2vhh7"] Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.922045 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.922097 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.922157 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7hrnt" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.952467 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2vhh7\" (UID: \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\") " pod="openstack/nova-cell0-conductor-db-sync-2vhh7" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.952832 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d688t\" (UniqueName: \"kubernetes.io/projected/f55bd014-f2d4-456a-9961-5f7db7ff79d2-kube-api-access-d688t\") pod \"nova-cell0-conductor-db-sync-2vhh7\" (UID: \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\") " pod="openstack/nova-cell0-conductor-db-sync-2vhh7" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.952897 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-scripts\") pod \"nova-cell0-conductor-db-sync-2vhh7\" (UID: \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\") " pod="openstack/nova-cell0-conductor-db-sync-2vhh7" Jan 09 00:55:11 crc kubenswrapper[4945]: I0109 00:55:11.953011 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-config-data\") pod \"nova-cell0-conductor-db-sync-2vhh7\" (UID: \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\") " pod="openstack/nova-cell0-conductor-db-sync-2vhh7" Jan 09 00:55:12 crc kubenswrapper[4945]: I0109 00:55:12.055158 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-config-data\") pod \"nova-cell0-conductor-db-sync-2vhh7\" (UID: \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\") " pod="openstack/nova-cell0-conductor-db-sync-2vhh7" Jan 09 00:55:12 crc kubenswrapper[4945]: I0109 00:55:12.055288 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2vhh7\" (UID: \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\") " pod="openstack/nova-cell0-conductor-db-sync-2vhh7" Jan 09 00:55:12 crc kubenswrapper[4945]: I0109 00:55:12.055344 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d688t\" (UniqueName: \"kubernetes.io/projected/f55bd014-f2d4-456a-9961-5f7db7ff79d2-kube-api-access-d688t\") pod \"nova-cell0-conductor-db-sync-2vhh7\" 
(UID: \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\") " pod="openstack/nova-cell0-conductor-db-sync-2vhh7" Jan 09 00:55:12 crc kubenswrapper[4945]: I0109 00:55:12.055381 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-scripts\") pod \"nova-cell0-conductor-db-sync-2vhh7\" (UID: \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\") " pod="openstack/nova-cell0-conductor-db-sync-2vhh7" Jan 09 00:55:12 crc kubenswrapper[4945]: I0109 00:55:12.060206 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-config-data\") pod \"nova-cell0-conductor-db-sync-2vhh7\" (UID: \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\") " pod="openstack/nova-cell0-conductor-db-sync-2vhh7" Jan 09 00:55:12 crc kubenswrapper[4945]: I0109 00:55:12.060249 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2vhh7\" (UID: \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\") " pod="openstack/nova-cell0-conductor-db-sync-2vhh7" Jan 09 00:55:12 crc kubenswrapper[4945]: I0109 00:55:12.060500 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-scripts\") pod \"nova-cell0-conductor-db-sync-2vhh7\" (UID: \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\") " pod="openstack/nova-cell0-conductor-db-sync-2vhh7" Jan 09 00:55:12 crc kubenswrapper[4945]: I0109 00:55:12.071371 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d688t\" (UniqueName: \"kubernetes.io/projected/f55bd014-f2d4-456a-9961-5f7db7ff79d2-kube-api-access-d688t\") pod \"nova-cell0-conductor-db-sync-2vhh7\" (UID: \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\") " pod="openstack/nova-cell0-conductor-db-sync-2vhh7" Jan 09 00:55:12 crc kubenswrapper[4945]: I0109 00:55:12.235507 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2vhh7" Jan 09 00:55:12 crc kubenswrapper[4945]: I0109 00:55:12.688622 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2vhh7"] Jan 09 00:55:13 crc kubenswrapper[4945]: I0109 00:55:13.413306 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2vhh7" event={"ID":"f55bd014-f2d4-456a-9961-5f7db7ff79d2","Type":"ContainerStarted","Data":"4e4bdf4057ca4569333923b7a5b6ca22bfd6bdf4b592a44390d815cab57e12a2"} Jan 09 00:55:13 crc kubenswrapper[4945]: I0109 00:55:13.413356 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2vhh7" event={"ID":"f55bd014-f2d4-456a-9961-5f7db7ff79d2","Type":"ContainerStarted","Data":"63c467107a56238b7acf58609ee5b1a58e1e33c00a1efba9a8f867f30d7c31f1"} Jan 09 00:55:13 crc kubenswrapper[4945]: I0109 00:55:13.431611 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-2vhh7" podStartSLOduration=2.431580051 podStartE2EDuration="2.431580051s" podCreationTimestamp="2026-01-09 00:55:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:55:13.428223529 +0000 UTC m=+5983.739382505" watchObservedRunningTime="2026-01-09 00:55:13.431580051 +0000 UTC m=+5983.742739037" Jan 09 00:55:15 crc kubenswrapper[4945]: I0109 00:55:15.001510 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb" Jan 09 00:55:15 crc kubenswrapper[4945]: E0109 00:55:15.002389 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:55:18 crc kubenswrapper[4945]: I0109 00:55:18.475277 4945 generic.go:334] "Generic (PLEG): container finished" podID="f55bd014-f2d4-456a-9961-5f7db7ff79d2" containerID="4e4bdf4057ca4569333923b7a5b6ca22bfd6bdf4b592a44390d815cab57e12a2" exitCode=0 Jan 09 00:55:18 crc kubenswrapper[4945]: I0109 00:55:18.475606 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2vhh7" event={"ID":"f55bd014-f2d4-456a-9961-5f7db7ff79d2","Type":"ContainerDied","Data":"4e4bdf4057ca4569333923b7a5b6ca22bfd6bdf4b592a44390d815cab57e12a2"} Jan 09 00:55:19 crc kubenswrapper[4945]: I0109 00:55:19.815322 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2vhh7" Jan 09 00:55:19 crc kubenswrapper[4945]: I0109 00:55:19.933694 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-config-data\") pod \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\" (UID: \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\") " Jan 09 00:55:19 crc kubenswrapper[4945]: I0109 00:55:19.933757 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d688t\" (UniqueName: \"kubernetes.io/projected/f55bd014-f2d4-456a-9961-5f7db7ff79d2-kube-api-access-d688t\") pod \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\" (UID: \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\") " Jan 09 00:55:19 crc kubenswrapper[4945]: I0109 00:55:19.933803 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-combined-ca-bundle\") pod \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\" (UID: \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\") " Jan 09 00:55:19 crc kubenswrapper[4945]: I0109 00:55:19.934757 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-scripts\") pod \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\" (UID: \"f55bd014-f2d4-456a-9961-5f7db7ff79d2\") " Jan 09 00:55:19 crc kubenswrapper[4945]: I0109 00:55:19.939037 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f55bd014-f2d4-456a-9961-5f7db7ff79d2-kube-api-access-d688t" (OuterVolumeSpecName: "kube-api-access-d688t") pod "f55bd014-f2d4-456a-9961-5f7db7ff79d2" (UID: "f55bd014-f2d4-456a-9961-5f7db7ff79d2"). InnerVolumeSpecName "kube-api-access-d688t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:55:19 crc kubenswrapper[4945]: I0109 00:55:19.943156 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-scripts" (OuterVolumeSpecName: "scripts") pod "f55bd014-f2d4-456a-9961-5f7db7ff79d2" (UID: "f55bd014-f2d4-456a-9961-5f7db7ff79d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:19 crc kubenswrapper[4945]: I0109 00:55:19.973027 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-config-data" (OuterVolumeSpecName: "config-data") pod "f55bd014-f2d4-456a-9961-5f7db7ff79d2" (UID: "f55bd014-f2d4-456a-9961-5f7db7ff79d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:19 crc kubenswrapper[4945]: I0109 00:55:19.978904 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f55bd014-f2d4-456a-9961-5f7db7ff79d2" (UID: "f55bd014-f2d4-456a-9961-5f7db7ff79d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.037310 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.037359 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.037371 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d688t\" (UniqueName: \"kubernetes.io/projected/f55bd014-f2d4-456a-9961-5f7db7ff79d2-kube-api-access-d688t\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.037385 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55bd014-f2d4-456a-9961-5f7db7ff79d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.498833 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2vhh7" event={"ID":"f55bd014-f2d4-456a-9961-5f7db7ff79d2","Type":"ContainerDied","Data":"63c467107a56238b7acf58609ee5b1a58e1e33c00a1efba9a8f867f30d7c31f1"} Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.498878 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63c467107a56238b7acf58609ee5b1a58e1e33c00a1efba9a8f867f30d7c31f1" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.498973 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2vhh7" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.571952 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 00:55:20 crc kubenswrapper[4945]: E0109 00:55:20.572359 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55bd014-f2d4-456a-9961-5f7db7ff79d2" containerName="nova-cell0-conductor-db-sync" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.572379 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55bd014-f2d4-456a-9961-5f7db7ff79d2" containerName="nova-cell0-conductor-db-sync" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.572570 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f55bd014-f2d4-456a-9961-5f7db7ff79d2" containerName="nova-cell0-conductor-db-sync" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.573340 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.575112 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7hrnt" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.575374 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.590944 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.648828 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7c618d8e-c04a-4e54-ac99-03c2f2e963d7\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.648881 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r56hf\" (UniqueName: \"kubernetes.io/projected/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-kube-api-access-r56hf\") pod \"nova-cell0-conductor-0\" (UID: \"7c618d8e-c04a-4e54-ac99-03c2f2e963d7\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.648913 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7c618d8e-c04a-4e54-ac99-03c2f2e963d7\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.750493 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7c618d8e-c04a-4e54-ac99-03c2f2e963d7\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.750540 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r56hf\" (UniqueName: \"kubernetes.io/projected/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-kube-api-access-r56hf\") pod \"nova-cell0-conductor-0\" (UID: \"7c618d8e-c04a-4e54-ac99-03c2f2e963d7\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.751559 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7c618d8e-c04a-4e54-ac99-03c2f2e963d7\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.755326 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7c618d8e-c04a-4e54-ac99-03c2f2e963d7\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.755675 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"7c618d8e-c04a-4e54-ac99-03c2f2e963d7\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.777642 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r56hf\" (UniqueName: \"kubernetes.io/projected/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-kube-api-access-r56hf\") pod \"nova-cell0-conductor-0\" (UID: \"7c618d8e-c04a-4e54-ac99-03c2f2e963d7\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:55:20 crc kubenswrapper[4945]: I0109 00:55:20.927547 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 09 00:55:21 crc kubenswrapper[4945]: I0109 00:55:21.364502 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 00:55:21 crc kubenswrapper[4945]: I0109 00:55:21.509469 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7c618d8e-c04a-4e54-ac99-03c2f2e963d7","Type":"ContainerStarted","Data":"f83858314f2de6214e993c72f12720572dad2889af9a62f7ad8c2bb455761711"} Jan 09 00:55:22 crc kubenswrapper[4945]: I0109 00:55:22.519761 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7c618d8e-c04a-4e54-ac99-03c2f2e963d7","Type":"ContainerStarted","Data":"2b9f95246e2f9bd3db437a3ef1b40b3b9655bcb64ceb1c60024721e80ea1a741"} Jan 09 00:55:22 crc kubenswrapper[4945]: I0109 00:55:22.520400 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 09 00:55:22 crc kubenswrapper[4945]: I0109 00:55:22.546159 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.546137266 podStartE2EDuration="2.546137266s" podCreationTimestamp="2026-01-09 00:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:55:22.537379971 +0000 UTC m=+5992.848538937" watchObservedRunningTime="2026-01-09 00:55:22.546137266 +0000 UTC m=+5992.857296212" Jan 09 00:55:26 crc kubenswrapper[4945]: I0109 00:55:26.001103 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb" Jan 09 00:55:26 crc kubenswrapper[4945]: E0109 00:55:26.002082 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:55:30 crc kubenswrapper[4945]: I0109 00:55:30.953916 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.366054 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-cgs5h"] Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.367568 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cgs5h" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.369577 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.370143 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.378507 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-cgs5h"] Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.462651 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-scripts\") pod \"nova-cell0-cell-mapping-cgs5h\" (UID: \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\") " pod="openstack/nova-cell0-cell-mapping-cgs5h" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.462974 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cgs5h\" (UID: \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\") " pod="openstack/nova-cell0-cell-mapping-cgs5h" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.463125 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28bl9\" (UniqueName: \"kubernetes.io/projected/8cd86543-8a26-480f-9ce7-f74a2d3da10c-kube-api-access-28bl9\") pod \"nova-cell0-cell-mapping-cgs5h\" (UID: \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\") " pod="openstack/nova-cell0-cell-mapping-cgs5h" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.463171 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-config-data\") pod \"nova-cell0-cell-mapping-cgs5h\" (UID: \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\") " pod="openstack/nova-cell0-cell-mapping-cgs5h" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.554129 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.555636 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.561115 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.563302 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.565043 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.566122 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cgs5h\" (UID: \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\") " pod="openstack/nova-cell0-cell-mapping-cgs5h" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.566191 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28bl9\" (UniqueName: \"kubernetes.io/projected/8cd86543-8a26-480f-9ce7-f74a2d3da10c-kube-api-access-28bl9\") pod \"nova-cell0-cell-mapping-cgs5h\" (UID: \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\") " pod="openstack/nova-cell0-cell-mapping-cgs5h" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.566220 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-config-data\") pod \"nova-cell0-cell-mapping-cgs5h\" (UID: \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\") " pod="openstack/nova-cell0-cell-mapping-cgs5h" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.566544 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-scripts\") pod \"nova-cell0-cell-mapping-cgs5h\" (UID: \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\") " pod="openstack/nova-cell0-cell-mapping-cgs5h" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.579519 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.579527 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-scripts\") pod \"nova-cell0-cell-mapping-cgs5h\" (UID: \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\") " pod="openstack/nova-cell0-cell-mapping-cgs5h" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.581605 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cgs5h\" (UID: \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\") " pod="openstack/nova-cell0-cell-mapping-cgs5h" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.582517 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-config-data\") pod \"nova-cell0-cell-mapping-cgs5h\" (UID: \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\") " pod="openstack/nova-cell0-cell-mapping-cgs5h" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.601129 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.625942 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28bl9\" (UniqueName: \"kubernetes.io/projected/8cd86543-8a26-480f-9ce7-f74a2d3da10c-kube-api-access-28bl9\") pod \"nova-cell0-cell-mapping-cgs5h\" (UID: \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\") " pod="openstack/nova-cell0-cell-mapping-cgs5h" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.631651 4945 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.669157 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b111e6e3-ce23-41b6-9950-073a2c55e2f1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\") " pod="openstack/nova-api-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.669235 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkrgq\" (UniqueName: \"kubernetes.io/projected/807c9c64-c777-46bb-bc62-b2144e3c6ce1-kube-api-access-mkrgq\") pod \"nova-metadata-0\" (UID: \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\") " pod="openstack/nova-metadata-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.669302 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b111e6e3-ce23-41b6-9950-073a2c55e2f1-config-data\") pod \"nova-api-0\" (UID: \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\") " pod="openstack/nova-api-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.669347 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/807c9c64-c777-46bb-bc62-b2144e3c6ce1-logs\") pod \"nova-metadata-0\" (UID: \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\") " pod="openstack/nova-metadata-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.669369 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsldj\" (UniqueName: \"kubernetes.io/projected/b111e6e3-ce23-41b6-9950-073a2c55e2f1-kube-api-access-vsldj\") pod \"nova-api-0\" (UID: \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\") " pod="openstack/nova-api-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.669440 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807c9c64-c777-46bb-bc62-b2144e3c6ce1-config-data\") pod \"nova-metadata-0\" (UID: \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\") " pod="openstack/nova-metadata-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.669471 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b111e6e3-ce23-41b6-9950-073a2c55e2f1-logs\") pod \"nova-api-0\" (UID: \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\") " pod="openstack/nova-api-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.669534 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/807c9c64-c777-46bb-bc62-b2144e3c6ce1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\") " pod="openstack/nova-metadata-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.688655 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cgs5h" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.744504 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.745946 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.753181 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.768580 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-788568d9c9-5mvgb"] Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.773378 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807c9c64-c777-46bb-bc62-b2144e3c6ce1-config-data\") pod \"nova-metadata-0\" (UID: \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\") " pod="openstack/nova-metadata-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.773421 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b111e6e3-ce23-41b6-9950-073a2c55e2f1-logs\") pod \"nova-api-0\" (UID: \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\") " pod="openstack/nova-api-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.773470 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/807c9c64-c777-46bb-bc62-b2144e3c6ce1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\") " pod="openstack/nova-metadata-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.773504 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b111e6e3-ce23-41b6-9950-073a2c55e2f1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\") " pod="openstack/nova-api-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.773530 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkrgq\" (UniqueName: \"kubernetes.io/projected/807c9c64-c777-46bb-bc62-b2144e3c6ce1-kube-api-access-mkrgq\") pod \"nova-metadata-0\" (UID: \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\") " pod="openstack/nova-metadata-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.773568 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b111e6e3-ce23-41b6-9950-073a2c55e2f1-config-data\") pod \"nova-api-0\" (UID: \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\") " pod="openstack/nova-api-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.773593 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/807c9c64-c777-46bb-bc62-b2144e3c6ce1-logs\") pod \"nova-metadata-0\" (UID: \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\") " pod="openstack/nova-metadata-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.773610 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsldj\" (UniqueName: \"kubernetes.io/projected/b111e6e3-ce23-41b6-9950-073a2c55e2f1-kube-api-access-vsldj\") pod \"nova-api-0\" (UID: \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\") " pod="openstack/nova-api-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.773670 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-788568d9c9-5mvgb" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.773901 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b111e6e3-ce23-41b6-9950-073a2c55e2f1-logs\") pod \"nova-api-0\" (UID: \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\") " pod="openstack/nova-api-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.774741 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/807c9c64-c777-46bb-bc62-b2144e3c6ce1-logs\") pod \"nova-metadata-0\" (UID: \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\") " pod="openstack/nova-metadata-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.790046 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b111e6e3-ce23-41b6-9950-073a2c55e2f1-config-data\") pod \"nova-api-0\" (UID: \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\") " pod="openstack/nova-api-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.794916 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.796216 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b111e6e3-ce23-41b6-9950-073a2c55e2f1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\") " pod="openstack/nova-api-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.796408 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807c9c64-c777-46bb-bc62-b2144e3c6ce1-config-data\") pod \"nova-metadata-0\" (UID: \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\") " pod="openstack/nova-metadata-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.799064 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/807c9c64-c777-46bb-bc62-b2144e3c6ce1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\") " pod="openstack/nova-metadata-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.800872 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkrgq\" (UniqueName: \"kubernetes.io/projected/807c9c64-c777-46bb-bc62-b2144e3c6ce1-kube-api-access-mkrgq\") pod \"nova-metadata-0\" (UID: \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\") " pod="openstack/nova-metadata-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.800982 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.804337 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.804634 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsldj\" (UniqueName: \"kubernetes.io/projected/b111e6e3-ce23-41b6-9950-073a2c55e2f1-kube-api-access-vsldj\") pod \"nova-api-0\" (UID: \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\") " pod="openstack/nova-api-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.811196 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.824526 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-788568d9c9-5mvgb"] Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.839866 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.876609 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-268tf\" (UniqueName: \"kubernetes.io/projected/17a7de34-259d-4d7d-805c-ea9cf140a3e2-kube-api-access-268tf\") pod \"nova-scheduler-0\" (UID: \"17a7de34-259d-4d7d-805c-ea9cf140a3e2\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.876801 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-dns-svc\") pod \"dnsmasq-dns-788568d9c9-5mvgb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " pod="openstack/dnsmasq-dns-788568d9c9-5mvgb" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.876906 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-ovsdbserver-nb\") pod \"dnsmasq-dns-788568d9c9-5mvgb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " pod="openstack/dnsmasq-dns-788568d9c9-5mvgb" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.877035 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32828769-35bc-4ff6-b715-62d67c23e6e2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"32828769-35bc-4ff6-b715-62d67c23e6e2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.877137 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a7de34-259d-4d7d-805c-ea9cf140a3e2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"17a7de34-259d-4d7d-805c-ea9cf140a3e2\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.877296 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzbxj\" (UniqueName: \"kubernetes.io/projected/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-kube-api-access-jzbxj\") pod \"dnsmasq-dns-788568d9c9-5mvgb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " pod="openstack/dnsmasq-dns-788568d9c9-5mvgb" Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.877377 
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.877481 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfqbb\" (UniqueName: \"kubernetes.io/projected/32828769-35bc-4ff6-b715-62d67c23e6e2-kube-api-access-hfqbb\") pod \"nova-cell1-novncproxy-0\" (UID: \"32828769-35bc-4ff6-b715-62d67c23e6e2\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.877564 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-ovsdbserver-sb\") pod \"dnsmasq-dns-788568d9c9-5mvgb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " pod="openstack/dnsmasq-dns-788568d9c9-5mvgb"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.877653 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17a7de34-259d-4d7d-805c-ea9cf140a3e2-config-data\") pod \"nova-scheduler-0\" (UID: \"17a7de34-259d-4d7d-805c-ea9cf140a3e2\") " pod="openstack/nova-scheduler-0"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.877734 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-config\") pod \"dnsmasq-dns-788568d9c9-5mvgb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " pod="openstack/dnsmasq-dns-788568d9c9-5mvgb"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.979756 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzbxj\" (UniqueName: \"kubernetes.io/projected/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-kube-api-access-jzbxj\") pod \"dnsmasq-dns-788568d9c9-5mvgb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " pod="openstack/dnsmasq-dns-788568d9c9-5mvgb"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.979820 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32828769-35bc-4ff6-b715-62d67c23e6e2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"32828769-35bc-4ff6-b715-62d67c23e6e2\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.979852 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfqbb\" (UniqueName: \"kubernetes.io/projected/32828769-35bc-4ff6-b715-62d67c23e6e2-kube-api-access-hfqbb\") pod \"nova-cell1-novncproxy-0\" (UID: \"32828769-35bc-4ff6-b715-62d67c23e6e2\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.979872 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-ovsdbserver-sb\") pod \"dnsmasq-dns-788568d9c9-5mvgb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " pod="openstack/dnsmasq-dns-788568d9c9-5mvgb"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.979896 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17a7de34-259d-4d7d-805c-ea9cf140a3e2-config-data\") pod \"nova-scheduler-0\" (UID: \"17a7de34-259d-4d7d-805c-ea9cf140a3e2\") " pod="openstack/nova-scheduler-0"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.979916 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-config\") pod \"dnsmasq-dns-788568d9c9-5mvgb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " pod="openstack/dnsmasq-dns-788568d9c9-5mvgb"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.979962 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-268tf\" (UniqueName: \"kubernetes.io/projected/17a7de34-259d-4d7d-805c-ea9cf140a3e2-kube-api-access-268tf\") pod \"nova-scheduler-0\" (UID: \"17a7de34-259d-4d7d-805c-ea9cf140a3e2\") " pod="openstack/nova-scheduler-0"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.979979 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-dns-svc\") pod \"dnsmasq-dns-788568d9c9-5mvgb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " pod="openstack/dnsmasq-dns-788568d9c9-5mvgb"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.980019 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-ovsdbserver-nb\") pod \"dnsmasq-dns-788568d9c9-5mvgb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " pod="openstack/dnsmasq-dns-788568d9c9-5mvgb"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.980048 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32828769-35bc-4ff6-b715-62d67c23e6e2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"32828769-35bc-4ff6-b715-62d67c23e6e2\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.980075 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a7de34-259d-4d7d-805c-ea9cf140a3e2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"17a7de34-259d-4d7d-805c-ea9cf140a3e2\") " pod="openstack/nova-scheduler-0"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.981703 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-config\") pod \"dnsmasq-dns-788568d9c9-5mvgb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " pod="openstack/dnsmasq-dns-788568d9c9-5mvgb"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.984633 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a7de34-259d-4d7d-805c-ea9cf140a3e2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"17a7de34-259d-4d7d-805c-ea9cf140a3e2\") " pod="openstack/nova-scheduler-0"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.984720 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32828769-35bc-4ff6-b715-62d67c23e6e2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"32828769-35bc-4ff6-b715-62d67c23e6e2\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.985089 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-ovsdbserver-sb\") pod \"dnsmasq-dns-788568d9c9-5mvgb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " pod="openstack/dnsmasq-dns-788568d9c9-5mvgb"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.987215 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-ovsdbserver-nb\") pod \"dnsmasq-dns-788568d9c9-5mvgb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " pod="openstack/dnsmasq-dns-788568d9c9-5mvgb"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.989294 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-dns-svc\") pod \"dnsmasq-dns-788568d9c9-5mvgb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " pod="openstack/dnsmasq-dns-788568d9c9-5mvgb"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.989598 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.995096 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17a7de34-259d-4d7d-805c-ea9cf140a3e2-config-data\") pod \"nova-scheduler-0\" (UID: \"17a7de34-259d-4d7d-805c-ea9cf140a3e2\") " pod="openstack/nova-scheduler-0"
Jan 09 00:55:31 crc kubenswrapper[4945]: I0109 00:55:31.996377 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32828769-35bc-4ff6-b715-62d67c23e6e2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"32828769-35bc-4ff6-b715-62d67c23e6e2\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.002018 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzbxj\" (UniqueName: \"kubernetes.io/projected/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-kube-api-access-jzbxj\") pod \"dnsmasq-dns-788568d9c9-5mvgb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " pod="openstack/dnsmasq-dns-788568d9c9-5mvgb"
Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.002220 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-268tf\" (UniqueName: \"kubernetes.io/projected/17a7de34-259d-4d7d-805c-ea9cf140a3e2-kube-api-access-268tf\") pod \"nova-scheduler-0\" (UID: \"17a7de34-259d-4d7d-805c-ea9cf140a3e2\") " pod="openstack/nova-scheduler-0"
Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.004659 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfqbb\" (UniqueName: \"kubernetes.io/projected/32828769-35bc-4ff6-b715-62d67c23e6e2-kube-api-access-hfqbb\") pod \"nova-cell1-novncproxy-0\" (UID: \"32828769-35bc-4ff6-b715-62d67c23e6e2\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.005146 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.124808 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.138671 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-788568d9c9-5mvgb" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.146690 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.265214 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-cgs5h"] Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.554807 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.570256 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.599310 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n2c85"] Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.600608 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-n2c85" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.602894 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.603511 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.605309 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n2c85"] Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.627498 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b111e6e3-ce23-41b6-9950-073a2c55e2f1","Type":"ContainerStarted","Data":"a69afe630d2b49fefdc20d8cd33604b73a5e8a7863d829f035406fd52b808247"} Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.629163 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"807c9c64-c777-46bb-bc62-b2144e3c6ce1","Type":"ContainerStarted","Data":"2d45c0791703e06f1dd64dae0aa450e8b782f9acc7fdf14dd00a9de961082a40"} Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.634190 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cgs5h" event={"ID":"8cd86543-8a26-480f-9ce7-f74a2d3da10c","Type":"ContainerStarted","Data":"99b8428d154d57c509cee1d9fa42733ac8ab887dd634b62dacbd91d2385e2f34"} Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.634250 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cgs5h" event={"ID":"8cd86543-8a26-480f-9ce7-f74a2d3da10c","Type":"ContainerStarted","Data":"1cbda10b0f4ca32b370ccd329323f3792a5e7d56a59523317bd185d3653de751"} Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.657811 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-cgs5h" podStartSLOduration=1.657790319 podStartE2EDuration="1.657790319s" podCreationTimestamp="2026-01-09 00:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:55:32.654539579 +0000 UTC m=+6002.965698525" watchObservedRunningTime="2026-01-09 00:55:32.657790319 +0000 UTC m=+6002.968949265" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.696866 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-n2c85\" (UID: \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\") " pod="openstack/nova-cell1-conductor-db-sync-n2c85" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.696983 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-config-data\") pod \"nova-cell1-conductor-db-sync-n2c85\" (UID: \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\") " pod="openstack/nova-cell1-conductor-db-sync-n2c85" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.697075 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5j8f\" (UniqueName: \"kubernetes.io/projected/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-kube-api-access-b5j8f\") pod \"nova-cell1-conductor-db-sync-n2c85\" (UID: \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\") " pod="openstack/nova-cell1-conductor-db-sync-n2c85" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.697097 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-scripts\") pod \"nova-cell1-conductor-db-sync-n2c85\" (UID: \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\") " pod="openstack/nova-cell1-conductor-db-sync-n2c85" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.701397 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.785097 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-788568d9c9-5mvgb"] Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.795150 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.797891 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-config-data\") pod \"nova-cell1-conductor-db-sync-n2c85\" (UID: \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\") " pod="openstack/nova-cell1-conductor-db-sync-n2c85" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.798051 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5j8f\" (UniqueName: \"kubernetes.io/projected/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-kube-api-access-b5j8f\") pod \"nova-cell1-conductor-db-sync-n2c85\" (UID: \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\") " pod="openstack/nova-cell1-conductor-db-sync-n2c85" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.798106 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-scripts\") pod \"nova-cell1-conductor-db-sync-n2c85\" (UID: \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\") " pod="openstack/nova-cell1-conductor-db-sync-n2c85" Jan 09 00:55:32 crc 
kubenswrapper[4945]: I0109 00:55:32.798178 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-n2c85\" (UID: \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\") " pod="openstack/nova-cell1-conductor-db-sync-n2c85" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.802017 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-n2c85\" (UID: \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\") " pod="openstack/nova-cell1-conductor-db-sync-n2c85" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.802606 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-config-data\") pod \"nova-cell1-conductor-db-sync-n2c85\" (UID: \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\") " pod="openstack/nova-cell1-conductor-db-sync-n2c85" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.806347 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-scripts\") pod \"nova-cell1-conductor-db-sync-n2c85\" (UID: \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\") " pod="openstack/nova-cell1-conductor-db-sync-n2c85" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.813365 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5j8f\" (UniqueName: \"kubernetes.io/projected/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-kube-api-access-b5j8f\") pod \"nova-cell1-conductor-db-sync-n2c85\" (UID: \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\") " pod="openstack/nova-cell1-conductor-db-sync-n2c85" Jan 09 00:55:32 crc kubenswrapper[4945]: I0109 00:55:32.888205 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-n2c85" Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.350724 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n2c85"] Jan 09 00:55:33 crc kubenswrapper[4945]: W0109 00:55:33.356433 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65c2c0a7_3d97_4a52_b029_f3a64c96a68c.slice/crio-3bab08b8b3dbd8b74b3b6fc28c18eceaad11a51a2852df42f8af4b9b0c8e4b94 WatchSource:0}: Error finding container 3bab08b8b3dbd8b74b3b6fc28c18eceaad11a51a2852df42f8af4b9b0c8e4b94: Status 404 returned error can't find the container with id 3bab08b8b3dbd8b74b3b6fc28c18eceaad11a51a2852df42f8af4b9b0c8e4b94 Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.645422 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b111e6e3-ce23-41b6-9950-073a2c55e2f1","Type":"ContainerStarted","Data":"5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92"} Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.645473 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b111e6e3-ce23-41b6-9950-073a2c55e2f1","Type":"ContainerStarted","Data":"de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437"} Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.646847 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"17a7de34-259d-4d7d-805c-ea9cf140a3e2","Type":"ContainerStarted","Data":"7813844c04eb406e41276ec9bef451585f80aadcaef3d714e0bdceac31c574ec"} Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.646879 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"17a7de34-259d-4d7d-805c-ea9cf140a3e2","Type":"ContainerStarted","Data":"5f6f5eb116dfa708f9351bda4971d72adc580cd70a9724e890b6e75370b46a4d"} Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.650197 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"807c9c64-c777-46bb-bc62-b2144e3c6ce1","Type":"ContainerStarted","Data":"01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a"} Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.650222 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"807c9c64-c777-46bb-bc62-b2144e3c6ce1","Type":"ContainerStarted","Data":"f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245"} Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.668670 4945 generic.go:334] "Generic (PLEG): container finished" podID="ca45b03d-1d0b-4833-92b9-e799a1aa9bdb" containerID="282b51394323560df1485118b76d9905e2c3cbc373cba54712693d75f15ce33a" exitCode=0 Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.668760 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-788568d9c9-5mvgb" event={"ID":"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb","Type":"ContainerDied","Data":"282b51394323560df1485118b76d9905e2c3cbc373cba54712693d75f15ce33a"} Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.668795 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-788568d9c9-5mvgb" event={"ID":"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb","Type":"ContainerStarted","Data":"a0536201742a508f6ba46789c9ae1c6cfff7724f78e928bfd57dcba5557546ce"} Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.669471 4945 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.6694592459999997 podStartE2EDuration="2.669459246s" podCreationTimestamp="2026-01-09 00:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:55:33.667603641 +0000 UTC m=+6003.978762587" watchObservedRunningTime="2026-01-09 00:55:33.669459246 +0000 UTC m=+6003.980618192" Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.695295 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"32828769-35bc-4ff6-b715-62d67c23e6e2","Type":"ContainerStarted","Data":"76f7fdbd951fc66f07efae62e5159d2fa74bf0d198e656a2be59b695229c2b53"} Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.695412 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"32828769-35bc-4ff6-b715-62d67c23e6e2","Type":"ContainerStarted","Data":"366589bf7bb48fc4f5f22e30c16b272436bf85fce53bd98a831b39428ae83ec4"} Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.698668 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.698650603 podStartE2EDuration="2.698650603s" podCreationTimestamp="2026-01-09 00:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:55:33.696003838 +0000 UTC m=+6004.007162784" watchObservedRunningTime="2026-01-09 00:55:33.698650603 +0000 UTC m=+6004.009809549" Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.708916 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-n2c85" event={"ID":"65c2c0a7-3d97-4a52-b029-f3a64c96a68c","Type":"ContainerStarted","Data":"7412d5bba03cfde89f567c48c934a9136919eaac82eb94284d8abd8bcb728ab5"} Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.708974 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-n2c85" event={"ID":"65c2c0a7-3d97-4a52-b029-f3a64c96a68c","Type":"ContainerStarted","Data":"3bab08b8b3dbd8b74b3b6fc28c18eceaad11a51a2852df42f8af4b9b0c8e4b94"} Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.717087 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.717033095 podStartE2EDuration="2.717033095s" podCreationTimestamp="2026-01-09 00:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:55:33.713715303 +0000 UTC m=+6004.024874259" watchObservedRunningTime="2026-01-09 00:55:33.717033095 +0000 UTC m=+6004.028192051" Jan 09 00:55:33 crc kubenswrapper[4945]: I0109 00:55:33.770583 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-n2c85" podStartSLOduration=1.770560199 podStartE2EDuration="1.770560199s" podCreationTimestamp="2026-01-09 00:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:55:33.764363577 +0000 UTC m=+6004.075522523" watchObservedRunningTime="2026-01-09 00:55:33.770560199 +0000 UTC m=+6004.081719145" Jan 09 00:55:34 crc kubenswrapper[4945]: I0109 00:55:34.719848 4945 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-788568d9c9-5mvgb" event={"ID":"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb","Type":"ContainerStarted","Data":"51477c22c40eb0ba4caf981578102f1606287875f59d43633600fa643039c446"} Jan 09 00:55:34 crc kubenswrapper[4945]: I0109 00:55:34.786006 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-788568d9c9-5mvgb" podStartSLOduration=3.785959387 podStartE2EDuration="3.785959387s" podCreationTimestamp="2026-01-09 00:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:55:34.762495901 +0000 UTC m=+6005.073654887" watchObservedRunningTime="2026-01-09 00:55:34.785959387 +0000 UTC m=+6005.097118323" Jan 09 00:55:34 crc kubenswrapper[4945]: I0109 00:55:34.786168 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.786162982 podStartE2EDuration="3.786162982s" podCreationTimestamp="2026-01-09 00:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:55:33.786402268 +0000 UTC m=+6004.097561224" watchObservedRunningTime="2026-01-09 00:55:34.786162982 +0000 UTC m=+6005.097321928" Jan 09 00:55:35 crc kubenswrapper[4945]: I0109 00:55:35.727160 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-788568d9c9-5mvgb" Jan 09 00:55:36 crc kubenswrapper[4945]: I0109 00:55:36.741552 4945 generic.go:334] "Generic (PLEG): container finished" podID="65c2c0a7-3d97-4a52-b029-f3a64c96a68c" containerID="7412d5bba03cfde89f567c48c934a9136919eaac82eb94284d8abd8bcb728ab5" exitCode=0 Jan 09 00:55:36 crc kubenswrapper[4945]: I0109 00:55:36.742743 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-n2c85" event={"ID":"65c2c0a7-3d97-4a52-b029-f3a64c96a68c","Type":"ContainerDied","Data":"7412d5bba03cfde89f567c48c934a9136919eaac82eb94284d8abd8bcb728ab5"} Jan 09 00:55:36 crc kubenswrapper[4945]: I0109 00:55:36.819138 4945 scope.go:117] "RemoveContainer" containerID="585fab122d0997953f1abe75abdb9eb9b7c1c7a67c35d44a8a9573c564e87e3e" Jan 09 00:55:37 crc kubenswrapper[4945]: I0109 00:55:37.005614 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 00:55:37 crc kubenswrapper[4945]: I0109 00:55:37.005673 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 00:55:37 crc kubenswrapper[4945]: I0109 00:55:37.124977 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 09 00:55:37 crc kubenswrapper[4945]: I0109 00:55:37.147880 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:55:37 crc kubenswrapper[4945]: I0109 00:55:37.758398 4945 generic.go:334] "Generic (PLEG): container finished" podID="8cd86543-8a26-480f-9ce7-f74a2d3da10c" containerID="99b8428d154d57c509cee1d9fa42733ac8ab887dd634b62dacbd91d2385e2f34" exitCode=0 Jan 09 00:55:37 crc kubenswrapper[4945]: I0109 00:55:37.758481 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cgs5h" event={"ID":"8cd86543-8a26-480f-9ce7-f74a2d3da10c","Type":"ContainerDied","Data":"99b8428d154d57c509cee1d9fa42733ac8ab887dd634b62dacbd91d2385e2f34"} Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 
00:55:38.169919 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-n2c85" Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.305905 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-scripts\") pod \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\" (UID: \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\") " Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.306144 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-combined-ca-bundle\") pod \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\" (UID: \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\") " Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.306171 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5j8f\" (UniqueName: \"kubernetes.io/projected/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-kube-api-access-b5j8f\") pod \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\" (UID: \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\") " Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.306278 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-config-data\") pod \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\" (UID: \"65c2c0a7-3d97-4a52-b029-f3a64c96a68c\") " Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.311413 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-kube-api-access-b5j8f" (OuterVolumeSpecName: "kube-api-access-b5j8f") pod "65c2c0a7-3d97-4a52-b029-f3a64c96a68c" (UID: "65c2c0a7-3d97-4a52-b029-f3a64c96a68c"). InnerVolumeSpecName "kube-api-access-b5j8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.311531 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-scripts" (OuterVolumeSpecName: "scripts") pod "65c2c0a7-3d97-4a52-b029-f3a64c96a68c" (UID: "65c2c0a7-3d97-4a52-b029-f3a64c96a68c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.337154 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-config-data" (OuterVolumeSpecName: "config-data") pod "65c2c0a7-3d97-4a52-b029-f3a64c96a68c" (UID: "65c2c0a7-3d97-4a52-b029-f3a64c96a68c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.347624 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65c2c0a7-3d97-4a52-b029-f3a64c96a68c" (UID: "65c2c0a7-3d97-4a52-b029-f3a64c96a68c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.409745 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.409799 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5j8f\" (UniqueName: \"kubernetes.io/projected/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-kube-api-access-b5j8f\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.409820 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.409837 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65c2c0a7-3d97-4a52-b029-f3a64c96a68c-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.776918 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-n2c85" Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.779212 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-n2c85" event={"ID":"65c2c0a7-3d97-4a52-b029-f3a64c96a68c","Type":"ContainerDied","Data":"3bab08b8b3dbd8b74b3b6fc28c18eceaad11a51a2852df42f8af4b9b0c8e4b94"} Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.779285 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bab08b8b3dbd8b74b3b6fc28c18eceaad11a51a2852df42f8af4b9b0c8e4b94" Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.919278 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 00:55:38 crc kubenswrapper[4945]: E0109 00:55:38.919685 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c2c0a7-3d97-4a52-b029-f3a64c96a68c" containerName="nova-cell1-conductor-db-sync" Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.919705 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c2c0a7-3d97-4a52-b029-f3a64c96a68c" containerName="nova-cell1-conductor-db-sync" Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.919879 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c2c0a7-3d97-4a52-b029-f3a64c96a68c" containerName="nova-cell1-conductor-db-sync" Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.920532 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.928899 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 09 00:55:38 crc kubenswrapper[4945]: I0109 00:55:38.958093 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.037560 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb" Jan 09 00:55:39 crc kubenswrapper[4945]: E0109 00:55:39.047892 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.139751 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm25j\" (UniqueName: \"kubernetes.io/projected/0300748c-8b02-462f-8601-89536212a820-kube-api-access-vm25j\") pod \"nova-cell1-conductor-0\" (UID: \"0300748c-8b02-462f-8601-89536212a820\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.139820 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0300748c-8b02-462f-8601-89536212a820-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0300748c-8b02-462f-8601-89536212a820\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.139928 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0300748c-8b02-462f-8601-89536212a820-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0300748c-8b02-462f-8601-89536212a820\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.241655 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm25j\" (UniqueName: \"kubernetes.io/projected/0300748c-8b02-462f-8601-89536212a820-kube-api-access-vm25j\") pod \"nova-cell1-conductor-0\" (UID: \"0300748c-8b02-462f-8601-89536212a820\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.241938 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0300748c-8b02-462f-8601-89536212a820-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0300748c-8b02-462f-8601-89536212a820\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.242048 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0300748c-8b02-462f-8601-89536212a820-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0300748c-8b02-462f-8601-89536212a820\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.248320 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0300748c-8b02-462f-8601-89536212a820-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0300748c-8b02-462f-8601-89536212a820\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.248833 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0300748c-8b02-462f-8601-89536212a820-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0300748c-8b02-462f-8601-89536212a820\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.257686 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm25j\" (UniqueName: \"kubernetes.io/projected/0300748c-8b02-462f-8601-89536212a820-kube-api-access-vm25j\") pod \"nova-cell1-conductor-0\" (UID: \"0300748c-8b02-462f-8601-89536212a820\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.272770 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.365510 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cgs5h" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.444951 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-combined-ca-bundle\") pod \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\" (UID: \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\") " Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.444989 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-config-data\") pod \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\" (UID: \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\") " Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.445062 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28bl9\" (UniqueName: \"kubernetes.io/projected/8cd86543-8a26-480f-9ce7-f74a2d3da10c-kube-api-access-28bl9\") pod \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\" (UID: \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\") " Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.445086 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-scripts\") pod \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\" (UID: \"8cd86543-8a26-480f-9ce7-f74a2d3da10c\") " Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.449678 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-scripts" (OuterVolumeSpecName: "scripts") pod "8cd86543-8a26-480f-9ce7-f74a2d3da10c" (UID: "8cd86543-8a26-480f-9ce7-f74a2d3da10c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.452763 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cd86543-8a26-480f-9ce7-f74a2d3da10c-kube-api-access-28bl9" (OuterVolumeSpecName: "kube-api-access-28bl9") pod "8cd86543-8a26-480f-9ce7-f74a2d3da10c" (UID: "8cd86543-8a26-480f-9ce7-f74a2d3da10c"). 
InnerVolumeSpecName "kube-api-access-28bl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.467435 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-config-data" (OuterVolumeSpecName: "config-data") pod "8cd86543-8a26-480f-9ce7-f74a2d3da10c" (UID: "8cd86543-8a26-480f-9ce7-f74a2d3da10c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.471375 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cd86543-8a26-480f-9ce7-f74a2d3da10c" (UID: "8cd86543-8a26-480f-9ce7-f74a2d3da10c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.546925 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.547208 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.547220 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28bl9\" (UniqueName: \"kubernetes.io/projected/8cd86543-8a26-480f-9ce7-f74a2d3da10c-kube-api-access-28bl9\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.547235 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cd86543-8a26-480f-9ce7-f74a2d3da10c-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.696782 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 00:55:39 crc kubenswrapper[4945]: W0109 00:55:39.703260 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0300748c_8b02_462f_8601_89536212a820.slice/crio-4ee90fd8ac1e4c8d9f4fef937d661a3c21f6b143858782344af4af4203ea0398 WatchSource:0}: Error finding container 4ee90fd8ac1e4c8d9f4fef937d661a3c21f6b143858782344af4af4203ea0398: Status 404 returned error can't find the container with id 4ee90fd8ac1e4c8d9f4fef937d661a3c21f6b143858782344af4af4203ea0398 Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.791338 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cgs5h" event={"ID":"8cd86543-8a26-480f-9ce7-f74a2d3da10c","Type":"ContainerDied","Data":"1cbda10b0f4ca32b370ccd329323f3792a5e7d56a59523317bd185d3653de751"} Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.791383 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cbda10b0f4ca32b370ccd329323f3792a5e7d56a59523317bd185d3653de751" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.791525 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cgs5h" Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.793461 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0300748c-8b02-462f-8601-89536212a820","Type":"ContainerStarted","Data":"4ee90fd8ac1e4c8d9f4fef937d661a3c21f6b143858782344af4af4203ea0398"} Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.972713 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.973114 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b111e6e3-ce23-41b6-9950-073a2c55e2f1" containerName="nova-api-log" containerID="cri-o://de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437" gracePeriod=30 Jan 09 00:55:39 crc kubenswrapper[4945]: I0109 00:55:39.973416 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b111e6e3-ce23-41b6-9950-073a2c55e2f1" containerName="nova-api-api" containerID="cri-o://5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92" gracePeriod=30 Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.025586 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.025831 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="17a7de34-259d-4d7d-805c-ea9cf140a3e2" containerName="nova-scheduler-scheduler" containerID="cri-o://7813844c04eb406e41276ec9bef451585f80aadcaef3d714e0bdceac31c574ec" gracePeriod=30 Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.035974 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.036242 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="807c9c64-c777-46bb-bc62-b2144e3c6ce1" containerName="nova-metadata-log" containerID="cri-o://f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245" gracePeriod=30 Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.036342 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="807c9c64-c777-46bb-bc62-b2144e3c6ce1" containerName="nova-metadata-metadata" containerID="cri-o://01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a" gracePeriod=30 Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.670233 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.679542 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.774867 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkrgq\" (UniqueName: \"kubernetes.io/projected/807c9c64-c777-46bb-bc62-b2144e3c6ce1-kube-api-access-mkrgq\") pod \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\" (UID: \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\") " Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.774916 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b111e6e3-ce23-41b6-9950-073a2c55e2f1-combined-ca-bundle\") pod \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\" (UID: \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\") " Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.774949 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b111e6e3-ce23-41b6-9950-073a2c55e2f1-logs\") pod \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\" (UID: \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\") " Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.775011 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b111e6e3-ce23-41b6-9950-073a2c55e2f1-config-data\") pod \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\" (UID: \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\") " Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.775123 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/807c9c64-c777-46bb-bc62-b2144e3c6ce1-logs\") pod \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\" (UID: \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\") " Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.775186 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/807c9c64-c777-46bb-bc62-b2144e3c6ce1-combined-ca-bundle\") pod \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\" (UID: \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\") " Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.775252 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807c9c64-c777-46bb-bc62-b2144e3c6ce1-config-data\") pod \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\" (UID: \"807c9c64-c777-46bb-bc62-b2144e3c6ce1\") " Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.775279 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsldj\" (UniqueName: \"kubernetes.io/projected/b111e6e3-ce23-41b6-9950-073a2c55e2f1-kube-api-access-vsldj\") pod \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\" (UID: \"b111e6e3-ce23-41b6-9950-073a2c55e2f1\") " Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.775683 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/807c9c64-c777-46bb-bc62-b2144e3c6ce1-logs" (OuterVolumeSpecName: "logs") pod "807c9c64-c777-46bb-bc62-b2144e3c6ce1" (UID: "807c9c64-c777-46bb-bc62-b2144e3c6ce1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.775821 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b111e6e3-ce23-41b6-9950-073a2c55e2f1-logs" (OuterVolumeSpecName: "logs") pod "b111e6e3-ce23-41b6-9950-073a2c55e2f1" (UID: "b111e6e3-ce23-41b6-9950-073a2c55e2f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.780746 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b111e6e3-ce23-41b6-9950-073a2c55e2f1-kube-api-access-vsldj" (OuterVolumeSpecName: "kube-api-access-vsldj") pod "b111e6e3-ce23-41b6-9950-073a2c55e2f1" (UID: "b111e6e3-ce23-41b6-9950-073a2c55e2f1"). InnerVolumeSpecName "kube-api-access-vsldj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.780947 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/807c9c64-c777-46bb-bc62-b2144e3c6ce1-kube-api-access-mkrgq" (OuterVolumeSpecName: "kube-api-access-mkrgq") pod "807c9c64-c777-46bb-bc62-b2144e3c6ce1" (UID: "807c9c64-c777-46bb-bc62-b2144e3c6ce1"). InnerVolumeSpecName "kube-api-access-mkrgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.800048 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/807c9c64-c777-46bb-bc62-b2144e3c6ce1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "807c9c64-c777-46bb-bc62-b2144e3c6ce1" (UID: "807c9c64-c777-46bb-bc62-b2144e3c6ce1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.801638 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b111e6e3-ce23-41b6-9950-073a2c55e2f1-config-data" (OuterVolumeSpecName: "config-data") pod "b111e6e3-ce23-41b6-9950-073a2c55e2f1" (UID: "b111e6e3-ce23-41b6-9950-073a2c55e2f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.805141 4945 generic.go:334] "Generic (PLEG): container finished" podID="807c9c64-c777-46bb-bc62-b2144e3c6ce1" containerID="01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a" exitCode=0 Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.805178 4945 generic.go:334] "Generic (PLEG): container finished" podID="807c9c64-c777-46bb-bc62-b2144e3c6ce1" containerID="f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245" exitCode=143 Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.805197 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"807c9c64-c777-46bb-bc62-b2144e3c6ce1","Type":"ContainerDied","Data":"01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a"} Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.805244 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"807c9c64-c777-46bb-bc62-b2144e3c6ce1","Type":"ContainerDied","Data":"f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245"} Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.805260 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"807c9c64-c777-46bb-bc62-b2144e3c6ce1","Type":"ContainerDied","Data":"2d45c0791703e06f1dd64dae0aa450e8b782f9acc7fdf14dd00a9de961082a40"} Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.805255 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.805294 4945 scope.go:117] "RemoveContainer" containerID="01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.807413 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0300748c-8b02-462f-8601-89536212a820","Type":"ContainerStarted","Data":"b6f7476700eb281ce9568d0797af82b3e126e90a10235862a89854829d11926f"} Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.807573 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.809427 4945 generic.go:334] "Generic (PLEG): container finished" podID="b111e6e3-ce23-41b6-9950-073a2c55e2f1" containerID="5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92" exitCode=0 Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.809461 4945 generic.go:334] "Generic (PLEG): container finished" podID="b111e6e3-ce23-41b6-9950-073a2c55e2f1" containerID="de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437" exitCode=143 Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.809491 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b111e6e3-ce23-41b6-9950-073a2c55e2f1","Type":"ContainerDied","Data":"5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92"} Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.809520 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b111e6e3-ce23-41b6-9950-073a2c55e2f1","Type":"ContainerDied","Data":"de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437"} Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.809535 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"b111e6e3-ce23-41b6-9950-073a2c55e2f1","Type":"ContainerDied","Data":"a69afe630d2b49fefdc20d8cd33604b73a5e8a7863d829f035406fd52b808247"} Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.809588 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.811826 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b111e6e3-ce23-41b6-9950-073a2c55e2f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b111e6e3-ce23-41b6-9950-073a2c55e2f1" (UID: "b111e6e3-ce23-41b6-9950-073a2c55e2f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.826227 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/807c9c64-c777-46bb-bc62-b2144e3c6ce1-config-data" (OuterVolumeSpecName: "config-data") pod "807c9c64-c777-46bb-bc62-b2144e3c6ce1" (UID: "807c9c64-c777-46bb-bc62-b2144e3c6ce1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.837616 4945 scope.go:117] "RemoveContainer" containerID="f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.837680 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.8376459929999998 podStartE2EDuration="2.837645993s" podCreationTimestamp="2026-01-09 00:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:55:40.830071357 +0000 UTC m=+6011.141230313" watchObservedRunningTime="2026-01-09 00:55:40.837645993 +0000 UTC m=+6011.148804949" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.855791 4945 scope.go:117] "RemoveContainer" containerID="01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a" Jan 09 00:55:40 crc kubenswrapper[4945]: E0109 00:55:40.856219 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a\": container with ID starting with 01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a not found: ID does not exist" containerID="01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.856271 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a"} err="failed to get container status \"01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a\": rpc error: code = NotFound desc = could not find container \"01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a\": container with ID starting with 01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a not found: ID does not exist" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.856303 4945 scope.go:117] "RemoveContainer" containerID="f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245" Jan 09 00:55:40 crc kubenswrapper[4945]: E0109 00:55:40.856619 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245\": container with ID starting with f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245 not found: ID does not exist" containerID="f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.856664 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245"} err="failed to get container status \"f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245\": rpc error: code = NotFound desc = could not find container \"f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245\": container with ID starting with f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245 not found: ID does not exist" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.856693 4945 scope.go:117] "RemoveContainer" containerID="01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.857040 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a"} err="failed to get container status \"01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a\": rpc error: code = NotFound desc = could not find container \"01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a\": container with ID starting with 01cb9a8b3ecb1d0e7002d3d3e22a6394a6d3f5a512d22c074a8a3a933491297a not found: ID does not exist" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.857065 4945 scope.go:117] "RemoveContainer" containerID="f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.857287 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245"} err="failed to get container status \"f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245\": rpc error: code = NotFound desc = could not find container \"f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245\": container with ID starting with f4b8f5c3f719f8014a229345688b345effed226cf9d7124bdb92b74981b8d245 not found: ID does not exist" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.857306 4945 scope.go:117] "RemoveContainer" containerID="5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.873862 4945 scope.go:117] "RemoveContainer" containerID="de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.878276 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkrgq\" (UniqueName: \"kubernetes.io/projected/807c9c64-c777-46bb-bc62-b2144e3c6ce1-kube-api-access-mkrgq\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.878349 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b111e6e3-ce23-41b6-9950-073a2c55e2f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.878367 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b111e6e3-ce23-41b6-9950-073a2c55e2f1-logs\") on node 
\"crc\" DevicePath \"\"" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.878376 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b111e6e3-ce23-41b6-9950-073a2c55e2f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.878386 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/807c9c64-c777-46bb-bc62-b2144e3c6ce1-logs\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.878394 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/807c9c64-c777-46bb-bc62-b2144e3c6ce1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.878402 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/807c9c64-c777-46bb-bc62-b2144e3c6ce1-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.878413 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsldj\" (UniqueName: \"kubernetes.io/projected/b111e6e3-ce23-41b6-9950-073a2c55e2f1-kube-api-access-vsldj\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.889371 4945 scope.go:117] "RemoveContainer" containerID="5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92" Jan 09 00:55:40 crc kubenswrapper[4945]: E0109 00:55:40.889726 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92\": container with ID starting with 5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92 not found: ID does not exist" containerID="5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.889767 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92"} err="failed to get container status \"5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92\": rpc error: code = NotFound desc = could not find container \"5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92\": container with ID starting with 5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92 not found: ID does not exist" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.889798 4945 scope.go:117] "RemoveContainer" containerID="de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437" Jan 09 00:55:40 crc kubenswrapper[4945]: E0109 00:55:40.890081 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437\": container with ID starting with de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437 not found: ID does not exist" containerID="de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.890120 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437"} err="failed to get container status 
\"de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437\": rpc error: code = NotFound desc = could not find container \"de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437\": container with ID starting with de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437 not found: ID does not exist" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.890149 4945 scope.go:117] "RemoveContainer" containerID="5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.890491 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92"} err="failed to get container status \"5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92\": rpc error: code = NotFound desc = could not find container \"5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92\": container with ID starting with 5ba53882d53ce74eb1d7fb75b57d5264654b719c8d62206a083770dd7ad77c92 not found: ID does not exist" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.890516 4945 scope.go:117] "RemoveContainer" containerID="de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437" Jan 09 00:55:40 crc kubenswrapper[4945]: I0109 00:55:40.890771 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437"} err="failed to get container status \"de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437\": rpc error: code = NotFound desc = could not find container \"de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437\": container with ID starting with de07103ac1026c3e156fd8316ce486076a05072eb3413ae352e2e4b2dbdcd437 not found: ID does not exist" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.143948 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.157452 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.170405 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:55:41 crc kubenswrapper[4945]: E0109 00:55:41.170955 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b111e6e3-ce23-41b6-9950-073a2c55e2f1" containerName="nova-api-api" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.170974 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b111e6e3-ce23-41b6-9950-073a2c55e2f1" containerName="nova-api-api" Jan 09 00:55:41 crc kubenswrapper[4945]: E0109 00:55:41.171010 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807c9c64-c777-46bb-bc62-b2144e3c6ce1" containerName="nova-metadata-metadata" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.171018 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="807c9c64-c777-46bb-bc62-b2144e3c6ce1" containerName="nova-metadata-metadata" Jan 09 00:55:41 crc kubenswrapper[4945]: E0109 00:55:41.171031 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807c9c64-c777-46bb-bc62-b2144e3c6ce1" containerName="nova-metadata-log" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.171038 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="807c9c64-c777-46bb-bc62-b2144e3c6ce1" containerName="nova-metadata-log" Jan 09 00:55:41 crc 
kubenswrapper[4945]: E0109 00:55:41.171063 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cd86543-8a26-480f-9ce7-f74a2d3da10c" containerName="nova-manage" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.171070 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cd86543-8a26-480f-9ce7-f74a2d3da10c" containerName="nova-manage" Jan 09 00:55:41 crc kubenswrapper[4945]: E0109 00:55:41.171091 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b111e6e3-ce23-41b6-9950-073a2c55e2f1" containerName="nova-api-log" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.171097 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b111e6e3-ce23-41b6-9950-073a2c55e2f1" containerName="nova-api-log" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.171294 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="b111e6e3-ce23-41b6-9950-073a2c55e2f1" containerName="nova-api-api" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.171309 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cd86543-8a26-480f-9ce7-f74a2d3da10c" containerName="nova-manage" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.171320 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="807c9c64-c777-46bb-bc62-b2144e3c6ce1" containerName="nova-metadata-log" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.171331 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="b111e6e3-ce23-41b6-9950-073a2c55e2f1" containerName="nova-api-log" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.171346 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="807c9c64-c777-46bb-bc62-b2144e3c6ce1" containerName="nova-metadata-metadata" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.172494 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.175745 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.187157 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.197623 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.207254 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.216764 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.218442 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.225118 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.229497 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.284080 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c57d9b20-c804-4a12-ac78-39c180760468-config-data\") pod \"nova-metadata-0\" (UID: \"c57d9b20-c804-4a12-ac78-39c180760468\") " pod="openstack/nova-metadata-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.284165 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzknm\" (UniqueName: \"kubernetes.io/projected/c57d9b20-c804-4a12-ac78-39c180760468-kube-api-access-bzknm\") pod \"nova-metadata-0\" (UID: \"c57d9b20-c804-4a12-ac78-39c180760468\") " pod="openstack/nova-metadata-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.284326 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-config-data\") pod \"nova-api-0\" (UID: \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\") " pod="openstack/nova-api-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.284486 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c57d9b20-c804-4a12-ac78-39c180760468-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c57d9b20-c804-4a12-ac78-39c180760468\") " pod="openstack/nova-metadata-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.284529 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\") " pod="openstack/nova-api-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.284553 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c57d9b20-c804-4a12-ac78-39c180760468-logs\") pod \"nova-metadata-0\" (UID: \"c57d9b20-c804-4a12-ac78-39c180760468\") " pod="openstack/nova-metadata-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.284579 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-logs\") pod \"nova-api-0\" (UID: \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\") " pod="openstack/nova-api-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.284621 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94pd6\" (UniqueName: \"kubernetes.io/projected/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-kube-api-access-94pd6\") pod \"nova-api-0\" (UID: \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\") " pod="openstack/nova-api-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.386360 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94pd6\" (UniqueName: 
\"kubernetes.io/projected/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-kube-api-access-94pd6\") pod \"nova-api-0\" (UID: \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\") " pod="openstack/nova-api-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.386486 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c57d9b20-c804-4a12-ac78-39c180760468-config-data\") pod \"nova-metadata-0\" (UID: \"c57d9b20-c804-4a12-ac78-39c180760468\") " pod="openstack/nova-metadata-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.386550 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzknm\" (UniqueName: \"kubernetes.io/projected/c57d9b20-c804-4a12-ac78-39c180760468-kube-api-access-bzknm\") pod \"nova-metadata-0\" (UID: \"c57d9b20-c804-4a12-ac78-39c180760468\") " pod="openstack/nova-metadata-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.386628 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-config-data\") pod \"nova-api-0\" (UID: \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\") " pod="openstack/nova-api-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.386702 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c57d9b20-c804-4a12-ac78-39c180760468-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c57d9b20-c804-4a12-ac78-39c180760468\") " pod="openstack/nova-metadata-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.386729 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\") " pod="openstack/nova-api-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.386766 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c57d9b20-c804-4a12-ac78-39c180760468-logs\") pod \"nova-metadata-0\" (UID: \"c57d9b20-c804-4a12-ac78-39c180760468\") " pod="openstack/nova-metadata-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.386787 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-logs\") pod \"nova-api-0\" (UID: \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\") " pod="openstack/nova-api-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.387344 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c57d9b20-c804-4a12-ac78-39c180760468-logs\") pod \"nova-metadata-0\" (UID: \"c57d9b20-c804-4a12-ac78-39c180760468\") " pod="openstack/nova-metadata-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.387357 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-logs\") pod \"nova-api-0\" (UID: \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\") " pod="openstack/nova-api-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.391173 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c57d9b20-c804-4a12-ac78-39c180760468-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c57d9b20-c804-4a12-ac78-39c180760468\") " pod="openstack/nova-metadata-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.391322 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c57d9b20-c804-4a12-ac78-39c180760468-config-data\") pod \"nova-metadata-0\" (UID: \"c57d9b20-c804-4a12-ac78-39c180760468\") " pod="openstack/nova-metadata-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.392059 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-config-data\") pod \"nova-api-0\" (UID: \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\") " pod="openstack/nova-api-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.403477 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\") " pod="openstack/nova-api-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.406609 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94pd6\" (UniqueName: \"kubernetes.io/projected/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-kube-api-access-94pd6\") pod \"nova-api-0\" (UID: \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\") " pod="openstack/nova-api-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.408608 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzknm\" (UniqueName: \"kubernetes.io/projected/c57d9b20-c804-4a12-ac78-39c180760468-kube-api-access-bzknm\") pod \"nova-metadata-0\" (UID: \"c57d9b20-c804-4a12-ac78-39c180760468\") " pod="openstack/nova-metadata-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.499757 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.601984 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 00:55:41 crc kubenswrapper[4945]: W0109 00:55:41.946345 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc57d9b20_c804_4a12_ac78_39c180760468.slice/crio-b6c82df9c8747b18a10be4d6ca2e40ae18a5f5e3f4ebea32d8c905ff866b1b0c WatchSource:0}: Error finding container b6c82df9c8747b18a10be4d6ca2e40ae18a5f5e3f4ebea32d8c905ff866b1b0c: Status 404 returned error can't find the container with id b6c82df9c8747b18a10be4d6ca2e40ae18a5f5e3f4ebea32d8c905ff866b1b0c Jan 09 00:55:41 crc kubenswrapper[4945]: I0109 00:55:41.951362 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.011622 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="807c9c64-c777-46bb-bc62-b2144e3c6ce1" path="/var/lib/kubelet/pods/807c9c64-c777-46bb-bc62-b2144e3c6ce1/volumes" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.012240 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b111e6e3-ce23-41b6-9950-073a2c55e2f1" path="/var/lib/kubelet/pods/b111e6e3-ce23-41b6-9950-073a2c55e2f1/volumes" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.052756 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.140193 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-788568d9c9-5mvgb" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.147534 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.164610 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.222250 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67ff6bf8fc-shzz7"] Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.222788 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" podUID="ef6f3ace-5b81-49d4-9995-a61e35927da2" containerName="dnsmasq-dns" containerID="cri-o://52ebc4ae63f7a76a85de5d103531d87962202f825516043a2747de7af6762985" gracePeriod=10 Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.649382 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.714619 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-ovsdbserver-nb\") pod \"ef6f3ace-5b81-49d4-9995-a61e35927da2\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.714768 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-dns-svc\") pod \"ef6f3ace-5b81-49d4-9995-a61e35927da2\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.714820 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-config\") pod \"ef6f3ace-5b81-49d4-9995-a61e35927da2\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.714851 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-ovsdbserver-sb\") pod \"ef6f3ace-5b81-49d4-9995-a61e35927da2\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.715046 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc8wh\" (UniqueName: \"kubernetes.io/projected/ef6f3ace-5b81-49d4-9995-a61e35927da2-kube-api-access-qc8wh\") pod \"ef6f3ace-5b81-49d4-9995-a61e35927da2\" (UID: \"ef6f3ace-5b81-49d4-9995-a61e35927da2\") " Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.719933 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef6f3ace-5b81-49d4-9995-a61e35927da2-kube-api-access-qc8wh" (OuterVolumeSpecName: "kube-api-access-qc8wh") pod "ef6f3ace-5b81-49d4-9995-a61e35927da2" (UID: "ef6f3ace-5b81-49d4-9995-a61e35927da2"). InnerVolumeSpecName "kube-api-access-qc8wh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.764875 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-config" (OuterVolumeSpecName: "config") pod "ef6f3ace-5b81-49d4-9995-a61e35927da2" (UID: "ef6f3ace-5b81-49d4-9995-a61e35927da2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.770362 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef6f3ace-5b81-49d4-9995-a61e35927da2" (UID: "ef6f3ace-5b81-49d4-9995-a61e35927da2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.775613 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ef6f3ace-5b81-49d4-9995-a61e35927da2" (UID: "ef6f3ace-5b81-49d4-9995-a61e35927da2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.780514 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef6f3ace-5b81-49d4-9995-a61e35927da2" (UID: "ef6f3ace-5b81-49d4-9995-a61e35927da2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.817613 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc8wh\" (UniqueName: \"kubernetes.io/projected/ef6f3ace-5b81-49d4-9995-a61e35927da2-kube-api-access-qc8wh\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.817654 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.817664 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.817675 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-config\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.817683 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef6f3ace-5b81-49d4-9995-a61e35927da2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.830536 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62","Type":"ContainerStarted","Data":"949c9649241d85fb89e68457812b6c3da080fe7e8480132627fe634cc629e1f5"} Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.830592 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62","Type":"ContainerStarted","Data":"9b9eb2a8a9c95e1a0192898a7e8ec8f5429a301f4ca4ed64a0858c03cffcaa2e"} Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.830608 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62","Type":"ContainerStarted","Data":"5619c724a5917cb487f0d9101c54eeccfdea2f6310114882c2556f7a5c7477b8"} Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.833500 4945 generic.go:334] "Generic (PLEG): container finished" podID="ef6f3ace-5b81-49d4-9995-a61e35927da2" containerID="52ebc4ae63f7a76a85de5d103531d87962202f825516043a2747de7af6762985" exitCode=0 Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.833566 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" event={"ID":"ef6f3ace-5b81-49d4-9995-a61e35927da2","Type":"ContainerDied","Data":"52ebc4ae63f7a76a85de5d103531d87962202f825516043a2747de7af6762985"} Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.833596 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" 
event={"ID":"ef6f3ace-5b81-49d4-9995-a61e35927da2","Type":"ContainerDied","Data":"878142654f398fd82efd7ed42275aec4d35ad91519b479f6c9caebc86891bc75"} Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.833620 4945 scope.go:117] "RemoveContainer" containerID="52ebc4ae63f7a76a85de5d103531d87962202f825516043a2747de7af6762985" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.833754 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67ff6bf8fc-shzz7" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.840115 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c57d9b20-c804-4a12-ac78-39c180760468","Type":"ContainerStarted","Data":"efa486421d50c6e62e3555a73016b2df73f96896dd3edd648249282d85f742dd"} Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.840238 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c57d9b20-c804-4a12-ac78-39c180760468","Type":"ContainerStarted","Data":"f8f81c4c2acf4edd33f87c9f4d0fee0145b5efaa4e3b7f674a9ece4f8baa522e"} Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.840313 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c57d9b20-c804-4a12-ac78-39c180760468","Type":"ContainerStarted","Data":"b6c82df9c8747b18a10be4d6ca2e40ae18a5f5e3f4ebea32d8c905ff866b1b0c"} Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.852571 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.852547095 podStartE2EDuration="1.852547095s" podCreationTimestamp="2026-01-09 00:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:55:42.851410017 +0000 UTC m=+6013.162569063" watchObservedRunningTime="2026-01-09 00:55:42.852547095 +0000 UTC m=+6013.163706051" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.859974 4945 scope.go:117] "RemoveContainer" containerID="1e34d3f9474c1b7b8cc485221c88f25f471dcb86478b577a95bee228dbafff91" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.861069 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.883946 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.883925118 podStartE2EDuration="1.883925118s" podCreationTimestamp="2026-01-09 00:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:55:42.877865878 +0000 UTC m=+6013.189024824" watchObservedRunningTime="2026-01-09 00:55:42.883925118 +0000 UTC m=+6013.195084064" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.901124 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67ff6bf8fc-shzz7"] Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.903172 4945 scope.go:117] "RemoveContainer" containerID="52ebc4ae63f7a76a85de5d103531d87962202f825516043a2747de7af6762985" Jan 09 00:55:42 crc kubenswrapper[4945]: E0109 00:55:42.903683 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52ebc4ae63f7a76a85de5d103531d87962202f825516043a2747de7af6762985\": container with ID starting with 
52ebc4ae63f7a76a85de5d103531d87962202f825516043a2747de7af6762985 not found: ID does not exist" containerID="52ebc4ae63f7a76a85de5d103531d87962202f825516043a2747de7af6762985" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.903716 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52ebc4ae63f7a76a85de5d103531d87962202f825516043a2747de7af6762985"} err="failed to get container status \"52ebc4ae63f7a76a85de5d103531d87962202f825516043a2747de7af6762985\": rpc error: code = NotFound desc = could not find container \"52ebc4ae63f7a76a85de5d103531d87962202f825516043a2747de7af6762985\": container with ID starting with 52ebc4ae63f7a76a85de5d103531d87962202f825516043a2747de7af6762985 not found: ID does not exist" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.903738 4945 scope.go:117] "RemoveContainer" containerID="1e34d3f9474c1b7b8cc485221c88f25f471dcb86478b577a95bee228dbafff91" Jan 09 00:55:42 crc kubenswrapper[4945]: E0109 00:55:42.904022 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e34d3f9474c1b7b8cc485221c88f25f471dcb86478b577a95bee228dbafff91\": container with ID starting with 1e34d3f9474c1b7b8cc485221c88f25f471dcb86478b577a95bee228dbafff91 not found: ID does not exist" containerID="1e34d3f9474c1b7b8cc485221c88f25f471dcb86478b577a95bee228dbafff91" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.904054 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e34d3f9474c1b7b8cc485221c88f25f471dcb86478b577a95bee228dbafff91"} err="failed to get container status \"1e34d3f9474c1b7b8cc485221c88f25f471dcb86478b577a95bee228dbafff91\": rpc error: code = NotFound desc = could not find container \"1e34d3f9474c1b7b8cc485221c88f25f471dcb86478b577a95bee228dbafff91\": container with ID starting with 1e34d3f9474c1b7b8cc485221c88f25f471dcb86478b577a95bee228dbafff91 not found: ID does not exist" Jan 09 00:55:42 crc kubenswrapper[4945]: I0109 00:55:42.908788 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67ff6bf8fc-shzz7"] Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.023353 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef6f3ace-5b81-49d4-9995-a61e35927da2" path="/var/lib/kubelet/pods/ef6f3ace-5b81-49d4-9995-a61e35927da2/volumes" Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.304764 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.771905 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.898787 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17a7de34-259d-4d7d-805c-ea9cf140a3e2-config-data\") pod \"17a7de34-259d-4d7d-805c-ea9cf140a3e2\" (UID: \"17a7de34-259d-4d7d-805c-ea9cf140a3e2\") " Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.899378 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a7de34-259d-4d7d-805c-ea9cf140a3e2-combined-ca-bundle\") pod \"17a7de34-259d-4d7d-805c-ea9cf140a3e2\" (UID: \"17a7de34-259d-4d7d-805c-ea9cf140a3e2\") " Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.899548 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-268tf\" (UniqueName: \"kubernetes.io/projected/17a7de34-259d-4d7d-805c-ea9cf140a3e2-kube-api-access-268tf\") pod \"17a7de34-259d-4d7d-805c-ea9cf140a3e2\" (UID: \"17a7de34-259d-4d7d-805c-ea9cf140a3e2\") " Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.912941 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17a7de34-259d-4d7d-805c-ea9cf140a3e2-kube-api-access-268tf" (OuterVolumeSpecName: "kube-api-access-268tf") pod "17a7de34-259d-4d7d-805c-ea9cf140a3e2" (UID: "17a7de34-259d-4d7d-805c-ea9cf140a3e2"). InnerVolumeSpecName "kube-api-access-268tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.919909 4945 generic.go:334] "Generic (PLEG): container finished" podID="17a7de34-259d-4d7d-805c-ea9cf140a3e2" containerID="7813844c04eb406e41276ec9bef451585f80aadcaef3d714e0bdceac31c574ec" exitCode=0 Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.919959 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"17a7de34-259d-4d7d-805c-ea9cf140a3e2","Type":"ContainerDied","Data":"7813844c04eb406e41276ec9bef451585f80aadcaef3d714e0bdceac31c574ec"} Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.920097 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"17a7de34-259d-4d7d-805c-ea9cf140a3e2","Type":"ContainerDied","Data":"5f6f5eb116dfa708f9351bda4971d72adc580cd70a9724e890b6e75370b46a4d"} Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.920119 4945 scope.go:117] "RemoveContainer" containerID="7813844c04eb406e41276ec9bef451585f80aadcaef3d714e0bdceac31c574ec" Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.920127 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.935782 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-mjfmf"] Jan 09 00:55:44 crc kubenswrapper[4945]: E0109 00:55:44.936241 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a7de34-259d-4d7d-805c-ea9cf140a3e2" containerName="nova-scheduler-scheduler" Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.936409 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a7de34-259d-4d7d-805c-ea9cf140a3e2" containerName="nova-scheduler-scheduler" Jan 09 00:55:44 crc kubenswrapper[4945]: E0109 00:55:44.936449 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6f3ace-5b81-49d4-9995-a61e35927da2" containerName="init" Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.936459 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6f3ace-5b81-49d4-9995-a61e35927da2" containerName="init" Jan 09 00:55:44 crc kubenswrapper[4945]: E0109 00:55:44.936546 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6f3ace-5b81-49d4-9995-a61e35927da2" containerName="dnsmasq-dns" Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.936562 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6f3ace-5b81-49d4-9995-a61e35927da2" containerName="dnsmasq-dns" Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.936801 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a7de34-259d-4d7d-805c-ea9cf140a3e2" containerName="nova-scheduler-scheduler" Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.936838 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6f3ace-5b81-49d4-9995-a61e35927da2" containerName="dnsmasq-dns" Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.939227 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mjfmf" Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.945594 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.946020 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.951377 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mjfmf"] Jan 09 00:55:44 crc kubenswrapper[4945]: I0109 00:55:44.956665 4945 scope.go:117] "RemoveContainer" containerID="7813844c04eb406e41276ec9bef451585f80aadcaef3d714e0bdceac31c574ec" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.003303 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-268tf\" (UniqueName: \"kubernetes.io/projected/17a7de34-259d-4d7d-805c-ea9cf140a3e2-kube-api-access-268tf\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.004330 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a7de34-259d-4d7d-805c-ea9cf140a3e2-config-data" (OuterVolumeSpecName: "config-data") pod "17a7de34-259d-4d7d-805c-ea9cf140a3e2" (UID: "17a7de34-259d-4d7d-805c-ea9cf140a3e2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:45 crc kubenswrapper[4945]: E0109 00:55:45.004352 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7813844c04eb406e41276ec9bef451585f80aadcaef3d714e0bdceac31c574ec\": container with ID starting with 7813844c04eb406e41276ec9bef451585f80aadcaef3d714e0bdceac31c574ec not found: ID does not exist" containerID="7813844c04eb406e41276ec9bef451585f80aadcaef3d714e0bdceac31c574ec" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.004389 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7813844c04eb406e41276ec9bef451585f80aadcaef3d714e0bdceac31c574ec"} err="failed to get container status \"7813844c04eb406e41276ec9bef451585f80aadcaef3d714e0bdceac31c574ec\": rpc error: code = NotFound desc = could not find container \"7813844c04eb406e41276ec9bef451585f80aadcaef3d714e0bdceac31c574ec\": container with ID starting with 7813844c04eb406e41276ec9bef451585f80aadcaef3d714e0bdceac31c574ec not found: ID does not exist" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.016525 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a7de34-259d-4d7d-805c-ea9cf140a3e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17a7de34-259d-4d7d-805c-ea9cf140a3e2" (UID: "17a7de34-259d-4d7d-805c-ea9cf140a3e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.106501 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-config-data\") pod \"nova-cell1-cell-mapping-mjfmf\" (UID: \"fc970906-6f53-4c9c-931f-ba1bb6758411\") " pod="openstack/nova-cell1-cell-mapping-mjfmf" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.106551 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkl7p\" (UniqueName: \"kubernetes.io/projected/fc970906-6f53-4c9c-931f-ba1bb6758411-kube-api-access-jkl7p\") pod \"nova-cell1-cell-mapping-mjfmf\" (UID: \"fc970906-6f53-4c9c-931f-ba1bb6758411\") " pod="openstack/nova-cell1-cell-mapping-mjfmf" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.106598 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mjfmf\" (UID: \"fc970906-6f53-4c9c-931f-ba1bb6758411\") " pod="openstack/nova-cell1-cell-mapping-mjfmf" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.106720 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-scripts\") pod \"nova-cell1-cell-mapping-mjfmf\" (UID: \"fc970906-6f53-4c9c-931f-ba1bb6758411\") " pod="openstack/nova-cell1-cell-mapping-mjfmf" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.106780 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a7de34-259d-4d7d-805c-ea9cf140a3e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.106790 4945 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17a7de34-259d-4d7d-805c-ea9cf140a3e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.208380 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-config-data\") pod \"nova-cell1-cell-mapping-mjfmf\" (UID: \"fc970906-6f53-4c9c-931f-ba1bb6758411\") " pod="openstack/nova-cell1-cell-mapping-mjfmf" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.208447 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkl7p\" (UniqueName: \"kubernetes.io/projected/fc970906-6f53-4c9c-931f-ba1bb6758411-kube-api-access-jkl7p\") pod \"nova-cell1-cell-mapping-mjfmf\" (UID: \"fc970906-6f53-4c9c-931f-ba1bb6758411\") " pod="openstack/nova-cell1-cell-mapping-mjfmf" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.208508 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mjfmf\" (UID: \"fc970906-6f53-4c9c-931f-ba1bb6758411\") " pod="openstack/nova-cell1-cell-mapping-mjfmf" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.209419 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-scripts\") pod \"nova-cell1-cell-mapping-mjfmf\" (UID: \"fc970906-6f53-4c9c-931f-ba1bb6758411\") " pod="openstack/nova-cell1-cell-mapping-mjfmf" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.218819 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-scripts\") pod \"nova-cell1-cell-mapping-mjfmf\" (UID: \"fc970906-6f53-4c9c-931f-ba1bb6758411\") " pod="openstack/nova-cell1-cell-mapping-mjfmf" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.219390 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mjfmf\" (UID: \"fc970906-6f53-4c9c-931f-ba1bb6758411\") " pod="openstack/nova-cell1-cell-mapping-mjfmf" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.220629 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-config-data\") pod \"nova-cell1-cell-mapping-mjfmf\" (UID: \"fc970906-6f53-4c9c-931f-ba1bb6758411\") " pod="openstack/nova-cell1-cell-mapping-mjfmf" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.230668 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkl7p\" (UniqueName: \"kubernetes.io/projected/fc970906-6f53-4c9c-931f-ba1bb6758411-kube-api-access-jkl7p\") pod \"nova-cell1-cell-mapping-mjfmf\" (UID: \"fc970906-6f53-4c9c-931f-ba1bb6758411\") " pod="openstack/nova-cell1-cell-mapping-mjfmf" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.255961 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.265156 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-scheduler-0"] Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.286762 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.296570 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.296690 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.299293 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.366945 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mjfmf" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.421522 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln9xt\" (UniqueName: \"kubernetes.io/projected/639d2ba2-21bc-41cd-9227-090f4fc150ab-kube-api-access-ln9xt\") pod \"nova-scheduler-0\" (UID: \"639d2ba2-21bc-41cd-9227-090f4fc150ab\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.421636 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/639d2ba2-21bc-41cd-9227-090f4fc150ab-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"639d2ba2-21bc-41cd-9227-090f4fc150ab\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.421675 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/639d2ba2-21bc-41cd-9227-090f4fc150ab-config-data\") pod \"nova-scheduler-0\" (UID: \"639d2ba2-21bc-41cd-9227-090f4fc150ab\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.523215 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln9xt\" (UniqueName: \"kubernetes.io/projected/639d2ba2-21bc-41cd-9227-090f4fc150ab-kube-api-access-ln9xt\") pod \"nova-scheduler-0\" (UID: \"639d2ba2-21bc-41cd-9227-090f4fc150ab\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.523359 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/639d2ba2-21bc-41cd-9227-090f4fc150ab-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"639d2ba2-21bc-41cd-9227-090f4fc150ab\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.523414 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/639d2ba2-21bc-41cd-9227-090f4fc150ab-config-data\") pod \"nova-scheduler-0\" (UID: \"639d2ba2-21bc-41cd-9227-090f4fc150ab\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.531684 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/639d2ba2-21bc-41cd-9227-090f4fc150ab-config-data\") pod \"nova-scheduler-0\" (UID: \"639d2ba2-21bc-41cd-9227-090f4fc150ab\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.544765 4945 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/639d2ba2-21bc-41cd-9227-090f4fc150ab-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"639d2ba2-21bc-41cd-9227-090f4fc150ab\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.547524 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln9xt\" (UniqueName: \"kubernetes.io/projected/639d2ba2-21bc-41cd-9227-090f4fc150ab-kube-api-access-ln9xt\") pod \"nova-scheduler-0\" (UID: \"639d2ba2-21bc-41cd-9227-090f4fc150ab\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:45 crc kubenswrapper[4945]: I0109 00:55:45.630439 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 00:55:46 crc kubenswrapper[4945]: I0109 00:55:46.011130 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17a7de34-259d-4d7d-805c-ea9cf140a3e2" path="/var/lib/kubelet/pods/17a7de34-259d-4d7d-805c-ea9cf140a3e2/volumes" Jan 09 00:55:46 crc kubenswrapper[4945]: I0109 00:55:46.103782 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mjfmf"] Jan 09 00:55:46 crc kubenswrapper[4945]: W0109 00:55:46.108312 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc970906_6f53_4c9c_931f_ba1bb6758411.slice/crio-b38df42e33f4715a3b79f03372d5d0857cbd0009988a0479b0f995d5d2300b65 WatchSource:0}: Error finding container b38df42e33f4715a3b79f03372d5d0857cbd0009988a0479b0f995d5d2300b65: Status 404 returned error can't find the container with id b38df42e33f4715a3b79f03372d5d0857cbd0009988a0479b0f995d5d2300b65 Jan 09 00:55:46 crc kubenswrapper[4945]: I0109 00:55:46.158855 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:55:46 crc kubenswrapper[4945]: W0109 00:55:46.161585 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod639d2ba2_21bc_41cd_9227_090f4fc150ab.slice/crio-50608d3f618abf0bf18acc96d53f828baf075ef2c6159441894c914de036eaf5 WatchSource:0}: Error finding container 50608d3f618abf0bf18acc96d53f828baf075ef2c6159441894c914de036eaf5: Status 404 returned error can't find the container with id 50608d3f618abf0bf18acc96d53f828baf075ef2c6159441894c914de036eaf5 Jan 09 00:55:46 crc kubenswrapper[4945]: I0109 00:55:46.500912 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 00:55:46 crc kubenswrapper[4945]: I0109 00:55:46.502063 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 00:55:46 crc kubenswrapper[4945]: I0109 00:55:46.942154 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"639d2ba2-21bc-41cd-9227-090f4fc150ab","Type":"ContainerStarted","Data":"f1facbc1dd965d1d2cf800b7ec75dcd27508784246b4c306ebfce200f91926b2"} Jan 09 00:55:46 crc kubenswrapper[4945]: I0109 00:55:46.942522 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"639d2ba2-21bc-41cd-9227-090f4fc150ab","Type":"ContainerStarted","Data":"50608d3f618abf0bf18acc96d53f828baf075ef2c6159441894c914de036eaf5"} Jan 09 00:55:46 crc kubenswrapper[4945]: I0109 00:55:46.946360 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-mjfmf" event={"ID":"fc970906-6f53-4c9c-931f-ba1bb6758411","Type":"ContainerStarted","Data":"184dd71414cf8eb1b5b7e931902b7a58b81159888ff1fcdd658fe98f555dd2f0"} Jan 09 00:55:46 crc kubenswrapper[4945]: I0109 00:55:46.946400 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mjfmf" event={"ID":"fc970906-6f53-4c9c-931f-ba1bb6758411","Type":"ContainerStarted","Data":"b38df42e33f4715a3b79f03372d5d0857cbd0009988a0479b0f995d5d2300b65"} Jan 09 00:55:46 crc kubenswrapper[4945]: I0109 00:55:46.965246 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.965223588 podStartE2EDuration="1.965223588s" podCreationTimestamp="2026-01-09 00:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:55:46.959260072 +0000 UTC m=+6017.270419018" watchObservedRunningTime="2026-01-09 00:55:46.965223588 +0000 UTC m=+6017.276382534" Jan 09 00:55:46 crc kubenswrapper[4945]: I0109 00:55:46.985379 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-mjfmf" podStartSLOduration=2.985353054 podStartE2EDuration="2.985353054s" podCreationTimestamp="2026-01-09 00:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:55:46.981669323 +0000 UTC m=+6017.292828289" watchObservedRunningTime="2026-01-09 00:55:46.985353054 +0000 UTC m=+6017.296512010" Jan 09 00:55:50 crc kubenswrapper[4945]: I0109 00:55:50.631540 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 09 00:55:51 crc kubenswrapper[4945]: I0109 00:55:50.997545 4945 generic.go:334] "Generic (PLEG): container finished" podID="fc970906-6f53-4c9c-931f-ba1bb6758411" containerID="184dd71414cf8eb1b5b7e931902b7a58b81159888ff1fcdd658fe98f555dd2f0" exitCode=0 Jan 09 00:55:51 crc kubenswrapper[4945]: I0109 00:55:50.997589 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mjfmf" event={"ID":"fc970906-6f53-4c9c-931f-ba1bb6758411","Type":"ContainerDied","Data":"184dd71414cf8eb1b5b7e931902b7a58b81159888ff1fcdd658fe98f555dd2f0"} Jan 09 00:55:51 crc kubenswrapper[4945]: I0109 00:55:51.500654 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 00:55:51 crc kubenswrapper[4945]: I0109 00:55:51.500723 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 00:55:51 crc kubenswrapper[4945]: I0109 00:55:51.603330 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 00:55:51 crc kubenswrapper[4945]: I0109 00:55:51.603390 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 00:55:52 crc kubenswrapper[4945]: I0109 00:55:52.412291 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mjfmf" Jan 09 00:55:52 crc kubenswrapper[4945]: I0109 00:55:52.544625 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c57d9b20-c804-4a12-ac78-39c180760468" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.72:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 00:55:52 crc kubenswrapper[4945]: I0109 00:55:52.544626 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c57d9b20-c804-4a12-ac78-39c180760468" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.72:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 00:55:52 crc kubenswrapper[4945]: I0109 00:55:52.563953 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-scripts\") pod \"fc970906-6f53-4c9c-931f-ba1bb6758411\" (UID: \"fc970906-6f53-4c9c-931f-ba1bb6758411\") " Jan 09 00:55:52 crc kubenswrapper[4945]: I0109 00:55:52.564080 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-config-data\") pod \"fc970906-6f53-4c9c-931f-ba1bb6758411\" (UID: \"fc970906-6f53-4c9c-931f-ba1bb6758411\") " Jan 09 00:55:52 crc kubenswrapper[4945]: I0109 00:55:52.564227 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-combined-ca-bundle\") pod \"fc970906-6f53-4c9c-931f-ba1bb6758411\" (UID: \"fc970906-6f53-4c9c-931f-ba1bb6758411\") " Jan 09 00:55:52 crc kubenswrapper[4945]: I0109 00:55:52.564262 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkl7p\" (UniqueName: \"kubernetes.io/projected/fc970906-6f53-4c9c-931f-ba1bb6758411-kube-api-access-jkl7p\") pod \"fc970906-6f53-4c9c-931f-ba1bb6758411\" (UID: \"fc970906-6f53-4c9c-931f-ba1bb6758411\") " Jan 09 00:55:52 crc kubenswrapper[4945]: I0109 00:55:52.573801 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc970906-6f53-4c9c-931f-ba1bb6758411-kube-api-access-jkl7p" (OuterVolumeSpecName: "kube-api-access-jkl7p") pod "fc970906-6f53-4c9c-931f-ba1bb6758411" (UID: "fc970906-6f53-4c9c-931f-ba1bb6758411"). InnerVolumeSpecName "kube-api-access-jkl7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:55:52 crc kubenswrapper[4945]: I0109 00:55:52.591973 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-scripts" (OuterVolumeSpecName: "scripts") pod "fc970906-6f53-4c9c-931f-ba1bb6758411" (UID: "fc970906-6f53-4c9c-931f-ba1bb6758411"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:52 crc kubenswrapper[4945]: I0109 00:55:52.598118 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-config-data" (OuterVolumeSpecName: "config-data") pod "fc970906-6f53-4c9c-931f-ba1bb6758411" (UID: "fc970906-6f53-4c9c-931f-ba1bb6758411"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:52 crc kubenswrapper[4945]: I0109 00:55:52.603146 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc970906-6f53-4c9c-931f-ba1bb6758411" (UID: "fc970906-6f53-4c9c-931f-ba1bb6758411"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:52 crc kubenswrapper[4945]: I0109 00:55:52.666147 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkl7p\" (UniqueName: \"kubernetes.io/projected/fc970906-6f53-4c9c-931f-ba1bb6758411-kube-api-access-jkl7p\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:52 crc kubenswrapper[4945]: I0109 00:55:52.666185 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:52 crc kubenswrapper[4945]: I0109 00:55:52.666194 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:52 crc kubenswrapper[4945]: I0109 00:55:52.666205 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc970906-6f53-4c9c-931f-ba1bb6758411-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:52 crc kubenswrapper[4945]: I0109 00:55:52.688231 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="019c18f5-ddc0-4a2c-a320-5dbe2de6fa62" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.73:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 00:55:52 crc kubenswrapper[4945]: I0109 00:55:52.688247 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="019c18f5-ddc0-4a2c-a320-5dbe2de6fa62" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.73:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 00:55:53 crc kubenswrapper[4945]: I0109 00:55:53.022068 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mjfmf" event={"ID":"fc970906-6f53-4c9c-931f-ba1bb6758411","Type":"ContainerDied","Data":"b38df42e33f4715a3b79f03372d5d0857cbd0009988a0479b0f995d5d2300b65"} Jan 09 00:55:53 crc kubenswrapper[4945]: I0109 00:55:53.022802 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b38df42e33f4715a3b79f03372d5d0857cbd0009988a0479b0f995d5d2300b65" Jan 09 00:55:53 crc kubenswrapper[4945]: I0109 00:55:53.022545 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mjfmf" Jan 09 00:55:53 crc kubenswrapper[4945]: I0109 00:55:53.217799 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 00:55:53 crc kubenswrapper[4945]: I0109 00:55:53.219226 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="019c18f5-ddc0-4a2c-a320-5dbe2de6fa62" containerName="nova-api-log" containerID="cri-o://9b9eb2a8a9c95e1a0192898a7e8ec8f5429a301f4ca4ed64a0858c03cffcaa2e" gracePeriod=30 Jan 09 00:55:53 crc kubenswrapper[4945]: I0109 00:55:53.219329 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="019c18f5-ddc0-4a2c-a320-5dbe2de6fa62" containerName="nova-api-api" containerID="cri-o://949c9649241d85fb89e68457812b6c3da080fe7e8480132627fe634cc629e1f5" gracePeriod=30 Jan 09 00:55:53 crc kubenswrapper[4945]: I0109 00:55:53.248707 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:55:53 crc kubenswrapper[4945]: I0109 00:55:53.249148 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="639d2ba2-21bc-41cd-9227-090f4fc150ab" containerName="nova-scheduler-scheduler" containerID="cri-o://f1facbc1dd965d1d2cf800b7ec75dcd27508784246b4c306ebfce200f91926b2" gracePeriod=30 Jan 09 00:55:53 crc kubenswrapper[4945]: I0109 00:55:53.269717 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:55:53 crc kubenswrapper[4945]: I0109 00:55:53.270205 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c57d9b20-c804-4a12-ac78-39c180760468" containerName="nova-metadata-log" containerID="cri-o://f8f81c4c2acf4edd33f87c9f4d0fee0145b5efaa4e3b7f674a9ece4f8baa522e" gracePeriod=30 Jan 09 00:55:53 crc kubenswrapper[4945]: I0109 00:55:53.270555 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c57d9b20-c804-4a12-ac78-39c180760468" containerName="nova-metadata-metadata" containerID="cri-o://efa486421d50c6e62e3555a73016b2df73f96896dd3edd648249282d85f742dd" gracePeriod=30 Jan 09 00:55:54 crc kubenswrapper[4945]: I0109 00:55:54.002275 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb" Jan 09 00:55:54 crc kubenswrapper[4945]: I0109 00:55:54.046312 4945 generic.go:334] "Generic (PLEG): container finished" podID="c57d9b20-c804-4a12-ac78-39c180760468" containerID="f8f81c4c2acf4edd33f87c9f4d0fee0145b5efaa4e3b7f674a9ece4f8baa522e" exitCode=143 Jan 09 00:55:54 crc kubenswrapper[4945]: I0109 00:55:54.046417 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c57d9b20-c804-4a12-ac78-39c180760468","Type":"ContainerDied","Data":"f8f81c4c2acf4edd33f87c9f4d0fee0145b5efaa4e3b7f674a9ece4f8baa522e"} Jan 09 00:55:54 crc kubenswrapper[4945]: I0109 00:55:54.048973 4945 generic.go:334] "Generic (PLEG): container finished" podID="019c18f5-ddc0-4a2c-a320-5dbe2de6fa62" containerID="9b9eb2a8a9c95e1a0192898a7e8ec8f5429a301f4ca4ed64a0858c03cffcaa2e" exitCode=143 Jan 09 00:55:54 crc kubenswrapper[4945]: I0109 00:55:54.049046 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62","Type":"ContainerDied","Data":"9b9eb2a8a9c95e1a0192898a7e8ec8f5429a301f4ca4ed64a0858c03cffcaa2e"} Jan 09 00:55:55 crc kubenswrapper[4945]: I0109 00:55:55.061574 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"eccb70fe4fbdfd4fa48564790297f305ce79c1abeb00e0431dd2d34f92ff2a95"} Jan 09 00:55:57 crc kubenswrapper[4945]: I0109 00:55:57.925890 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 00:55:57 crc kubenswrapper[4945]: I0109 00:55:57.985505 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.062076 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.099861 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c57d9b20-c804-4a12-ac78-39c180760468-config-data\") pod \"c57d9b20-c804-4a12-ac78-39c180760468\" (UID: \"c57d9b20-c804-4a12-ac78-39c180760468\") " Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.099940 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/639d2ba2-21bc-41cd-9227-090f4fc150ab-combined-ca-bundle\") pod \"639d2ba2-21bc-41cd-9227-090f4fc150ab\" (UID: \"639d2ba2-21bc-41cd-9227-090f4fc150ab\") " Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.100078 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln9xt\" (UniqueName: \"kubernetes.io/projected/639d2ba2-21bc-41cd-9227-090f4fc150ab-kube-api-access-ln9xt\") pod \"639d2ba2-21bc-41cd-9227-090f4fc150ab\" (UID: \"639d2ba2-21bc-41cd-9227-090f4fc150ab\") " Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.100190 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzknm\" (UniqueName: \"kubernetes.io/projected/c57d9b20-c804-4a12-ac78-39c180760468-kube-api-access-bzknm\") pod \"c57d9b20-c804-4a12-ac78-39c180760468\" (UID: \"c57d9b20-c804-4a12-ac78-39c180760468\") " Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.100254 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/639d2ba2-21bc-41cd-9227-090f4fc150ab-config-data\") pod \"639d2ba2-21bc-41cd-9227-090f4fc150ab\" (UID: \"639d2ba2-21bc-41cd-9227-090f4fc150ab\") " Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.100298 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c57d9b20-c804-4a12-ac78-39c180760468-logs\") pod \"c57d9b20-c804-4a12-ac78-39c180760468\" (UID: \"c57d9b20-c804-4a12-ac78-39c180760468\") " Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.100366 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c57d9b20-c804-4a12-ac78-39c180760468-combined-ca-bundle\") pod \"c57d9b20-c804-4a12-ac78-39c180760468\" (UID: \"c57d9b20-c804-4a12-ac78-39c180760468\") " Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.101457 
4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c57d9b20-c804-4a12-ac78-39c180760468-logs" (OuterVolumeSpecName: "logs") pod "c57d9b20-c804-4a12-ac78-39c180760468" (UID: "c57d9b20-c804-4a12-ac78-39c180760468"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.105806 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c57d9b20-c804-4a12-ac78-39c180760468-kube-api-access-bzknm" (OuterVolumeSpecName: "kube-api-access-bzknm") pod "c57d9b20-c804-4a12-ac78-39c180760468" (UID: "c57d9b20-c804-4a12-ac78-39c180760468"). InnerVolumeSpecName "kube-api-access-bzknm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.106127 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/639d2ba2-21bc-41cd-9227-090f4fc150ab-kube-api-access-ln9xt" (OuterVolumeSpecName: "kube-api-access-ln9xt") pod "639d2ba2-21bc-41cd-9227-090f4fc150ab" (UID: "639d2ba2-21bc-41cd-9227-090f4fc150ab"). InnerVolumeSpecName "kube-api-access-ln9xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.118971 4945 generic.go:334] "Generic (PLEG): container finished" podID="639d2ba2-21bc-41cd-9227-090f4fc150ab" containerID="f1facbc1dd965d1d2cf800b7ec75dcd27508784246b4c306ebfce200f91926b2" exitCode=0 Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.119107 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.119116 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"639d2ba2-21bc-41cd-9227-090f4fc150ab","Type":"ContainerDied","Data":"f1facbc1dd965d1d2cf800b7ec75dcd27508784246b4c306ebfce200f91926b2"} Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.119185 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"639d2ba2-21bc-41cd-9227-090f4fc150ab","Type":"ContainerDied","Data":"50608d3f618abf0bf18acc96d53f828baf075ef2c6159441894c914de036eaf5"} Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.119209 4945 scope.go:117] "RemoveContainer" containerID="f1facbc1dd965d1d2cf800b7ec75dcd27508784246b4c306ebfce200f91926b2" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.121980 4945 generic.go:334] "Generic (PLEG): container finished" podID="019c18f5-ddc0-4a2c-a320-5dbe2de6fa62" containerID="949c9649241d85fb89e68457812b6c3da080fe7e8480132627fe634cc629e1f5" exitCode=0 Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.122134 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62","Type":"ContainerDied","Data":"949c9649241d85fb89e68457812b6c3da080fe7e8480132627fe634cc629e1f5"} Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.122156 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62","Type":"ContainerDied","Data":"5619c724a5917cb487f0d9101c54eeccfdea2f6310114882c2556f7a5c7477b8"} Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.122222 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.124715 4945 generic.go:334] "Generic (PLEG): container finished" podID="c57d9b20-c804-4a12-ac78-39c180760468" containerID="efa486421d50c6e62e3555a73016b2df73f96896dd3edd648249282d85f742dd" exitCode=0 Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.124761 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.124761 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c57d9b20-c804-4a12-ac78-39c180760468","Type":"ContainerDied","Data":"efa486421d50c6e62e3555a73016b2df73f96896dd3edd648249282d85f742dd"} Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.124797 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c57d9b20-c804-4a12-ac78-39c180760468","Type":"ContainerDied","Data":"b6c82df9c8747b18a10be4d6ca2e40ae18a5f5e3f4ebea32d8c905ff866b1b0c"} Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.136428 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c57d9b20-c804-4a12-ac78-39c180760468-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c57d9b20-c804-4a12-ac78-39c180760468" (UID: "c57d9b20-c804-4a12-ac78-39c180760468"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.136428 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/639d2ba2-21bc-41cd-9227-090f4fc150ab-config-data" (OuterVolumeSpecName: "config-data") pod "639d2ba2-21bc-41cd-9227-090f4fc150ab" (UID: "639d2ba2-21bc-41cd-9227-090f4fc150ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.136492 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/639d2ba2-21bc-41cd-9227-090f4fc150ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "639d2ba2-21bc-41cd-9227-090f4fc150ab" (UID: "639d2ba2-21bc-41cd-9227-090f4fc150ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.140011 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c57d9b20-c804-4a12-ac78-39c180760468-config-data" (OuterVolumeSpecName: "config-data") pod "c57d9b20-c804-4a12-ac78-39c180760468" (UID: "c57d9b20-c804-4a12-ac78-39c180760468"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.143268 4945 scope.go:117] "RemoveContainer" containerID="f1facbc1dd965d1d2cf800b7ec75dcd27508784246b4c306ebfce200f91926b2" Jan 09 00:55:58 crc kubenswrapper[4945]: E0109 00:55:58.144075 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1facbc1dd965d1d2cf800b7ec75dcd27508784246b4c306ebfce200f91926b2\": container with ID starting with f1facbc1dd965d1d2cf800b7ec75dcd27508784246b4c306ebfce200f91926b2 not found: ID does not exist" containerID="f1facbc1dd965d1d2cf800b7ec75dcd27508784246b4c306ebfce200f91926b2" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.144105 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1facbc1dd965d1d2cf800b7ec75dcd27508784246b4c306ebfce200f91926b2"} err="failed to get container status \"f1facbc1dd965d1d2cf800b7ec75dcd27508784246b4c306ebfce200f91926b2\": rpc error: code = NotFound desc = could not find container \"f1facbc1dd965d1d2cf800b7ec75dcd27508784246b4c306ebfce200f91926b2\": container with ID starting with f1facbc1dd965d1d2cf800b7ec75dcd27508784246b4c306ebfce200f91926b2 not found: ID does not exist" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.144125 4945 scope.go:117] "RemoveContainer" containerID="949c9649241d85fb89e68457812b6c3da080fe7e8480132627fe634cc629e1f5" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.166513 4945 scope.go:117] "RemoveContainer" containerID="9b9eb2a8a9c95e1a0192898a7e8ec8f5429a301f4ca4ed64a0858c03cffcaa2e" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.187915 4945 scope.go:117] "RemoveContainer" containerID="949c9649241d85fb89e68457812b6c3da080fe7e8480132627fe634cc629e1f5" Jan 09 00:55:58 crc kubenswrapper[4945]: E0109 00:55:58.188427 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"949c9649241d85fb89e68457812b6c3da080fe7e8480132627fe634cc629e1f5\": container with ID starting with 949c9649241d85fb89e68457812b6c3da080fe7e8480132627fe634cc629e1f5 not found: ID does not exist" containerID="949c9649241d85fb89e68457812b6c3da080fe7e8480132627fe634cc629e1f5" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.188462 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"949c9649241d85fb89e68457812b6c3da080fe7e8480132627fe634cc629e1f5"} err="failed to get container status \"949c9649241d85fb89e68457812b6c3da080fe7e8480132627fe634cc629e1f5\": rpc error: code = NotFound desc = could not find container \"949c9649241d85fb89e68457812b6c3da080fe7e8480132627fe634cc629e1f5\": container with ID starting with 949c9649241d85fb89e68457812b6c3da080fe7e8480132627fe634cc629e1f5 not found: ID does not exist" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.188486 4945 scope.go:117] "RemoveContainer" containerID="9b9eb2a8a9c95e1a0192898a7e8ec8f5429a301f4ca4ed64a0858c03cffcaa2e" Jan 09 00:55:58 crc kubenswrapper[4945]: E0109 00:55:58.188819 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b9eb2a8a9c95e1a0192898a7e8ec8f5429a301f4ca4ed64a0858c03cffcaa2e\": container with ID starting with 9b9eb2a8a9c95e1a0192898a7e8ec8f5429a301f4ca4ed64a0858c03cffcaa2e not found: ID does not exist" containerID="9b9eb2a8a9c95e1a0192898a7e8ec8f5429a301f4ca4ed64a0858c03cffcaa2e" Jan 09 00:55:58 crc 
kubenswrapper[4945]: I0109 00:55:58.188883 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b9eb2a8a9c95e1a0192898a7e8ec8f5429a301f4ca4ed64a0858c03cffcaa2e"} err="failed to get container status \"9b9eb2a8a9c95e1a0192898a7e8ec8f5429a301f4ca4ed64a0858c03cffcaa2e\": rpc error: code = NotFound desc = could not find container \"9b9eb2a8a9c95e1a0192898a7e8ec8f5429a301f4ca4ed64a0858c03cffcaa2e\": container with ID starting with 9b9eb2a8a9c95e1a0192898a7e8ec8f5429a301f4ca4ed64a0858c03cffcaa2e not found: ID does not exist" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.189139 4945 scope.go:117] "RemoveContainer" containerID="efa486421d50c6e62e3555a73016b2df73f96896dd3edd648249282d85f742dd" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.201349 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-config-data\") pod \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\" (UID: \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\") " Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.201452 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94pd6\" (UniqueName: \"kubernetes.io/projected/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-kube-api-access-94pd6\") pod \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\" (UID: \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\") " Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.201524 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-logs\") pod \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\" (UID: \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\") " Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.201547 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-combined-ca-bundle\") pod \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\" (UID: \"019c18f5-ddc0-4a2c-a320-5dbe2de6fa62\") " Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.201979 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-logs" (OuterVolumeSpecName: "logs") pod "019c18f5-ddc0-4a2c-a320-5dbe2de6fa62" (UID: "019c18f5-ddc0-4a2c-a320-5dbe2de6fa62"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.202029 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln9xt\" (UniqueName: \"kubernetes.io/projected/639d2ba2-21bc-41cd-9227-090f4fc150ab-kube-api-access-ln9xt\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.202045 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzknm\" (UniqueName: \"kubernetes.io/projected/c57d9b20-c804-4a12-ac78-39c180760468-kube-api-access-bzknm\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.202057 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/639d2ba2-21bc-41cd-9227-090f4fc150ab-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.202066 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c57d9b20-c804-4a12-ac78-39c180760468-logs\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.202075 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c57d9b20-c804-4a12-ac78-39c180760468-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.202085 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c57d9b20-c804-4a12-ac78-39c180760468-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.202097 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/639d2ba2-21bc-41cd-9227-090f4fc150ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.204947 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-kube-api-access-94pd6" (OuterVolumeSpecName: "kube-api-access-94pd6") pod "019c18f5-ddc0-4a2c-a320-5dbe2de6fa62" (UID: "019c18f5-ddc0-4a2c-a320-5dbe2de6fa62"). InnerVolumeSpecName "kube-api-access-94pd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.206051 4945 scope.go:117] "RemoveContainer" containerID="f8f81c4c2acf4edd33f87c9f4d0fee0145b5efaa4e3b7f674a9ece4f8baa522e" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.224664 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-config-data" (OuterVolumeSpecName: "config-data") pod "019c18f5-ddc0-4a2c-a320-5dbe2de6fa62" (UID: "019c18f5-ddc0-4a2c-a320-5dbe2de6fa62"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.228240 4945 scope.go:117] "RemoveContainer" containerID="efa486421d50c6e62e3555a73016b2df73f96896dd3edd648249282d85f742dd" Jan 09 00:55:58 crc kubenswrapper[4945]: E0109 00:55:58.228651 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efa486421d50c6e62e3555a73016b2df73f96896dd3edd648249282d85f742dd\": container with ID starting with efa486421d50c6e62e3555a73016b2df73f96896dd3edd648249282d85f742dd not found: ID does not exist" containerID="efa486421d50c6e62e3555a73016b2df73f96896dd3edd648249282d85f742dd" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.228681 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa486421d50c6e62e3555a73016b2df73f96896dd3edd648249282d85f742dd"} err="failed to get container status \"efa486421d50c6e62e3555a73016b2df73f96896dd3edd648249282d85f742dd\": rpc error: code = NotFound desc = could not find container \"efa486421d50c6e62e3555a73016b2df73f96896dd3edd648249282d85f742dd\": container with ID starting with efa486421d50c6e62e3555a73016b2df73f96896dd3edd648249282d85f742dd not found: ID does not exist" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.228704 4945 scope.go:117] "RemoveContainer" containerID="f8f81c4c2acf4edd33f87c9f4d0fee0145b5efaa4e3b7f674a9ece4f8baa522e" Jan 09 00:55:58 crc kubenswrapper[4945]: E0109 00:55:58.229208 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8f81c4c2acf4edd33f87c9f4d0fee0145b5efaa4e3b7f674a9ece4f8baa522e\": container with ID starting with f8f81c4c2acf4edd33f87c9f4d0fee0145b5efaa4e3b7f674a9ece4f8baa522e not found: ID does not exist" containerID="f8f81c4c2acf4edd33f87c9f4d0fee0145b5efaa4e3b7f674a9ece4f8baa522e" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.229265 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8f81c4c2acf4edd33f87c9f4d0fee0145b5efaa4e3b7f674a9ece4f8baa522e"} err="failed to get container status \"f8f81c4c2acf4edd33f87c9f4d0fee0145b5efaa4e3b7f674a9ece4f8baa522e\": rpc error: code = NotFound desc = could not find container \"f8f81c4c2acf4edd33f87c9f4d0fee0145b5efaa4e3b7f674a9ece4f8baa522e\": container with ID starting with f8f81c4c2acf4edd33f87c9f4d0fee0145b5efaa4e3b7f674a9ece4f8baa522e not found: ID does not exist" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.229679 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "019c18f5-ddc0-4a2c-a320-5dbe2de6fa62" (UID: "019c18f5-ddc0-4a2c-a320-5dbe2de6fa62"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.304647 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.304684 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94pd6\" (UniqueName: \"kubernetes.io/projected/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-kube-api-access-94pd6\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.304694 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-logs\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.304702 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.465072 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.479816 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.500322 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.513418 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.528054 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:55:58 crc kubenswrapper[4945]: E0109 00:55:58.528526 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57d9b20-c804-4a12-ac78-39c180760468" containerName="nova-metadata-log" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.528584 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57d9b20-c804-4a12-ac78-39c180760468" containerName="nova-metadata-log" Jan 09 00:55:58 crc kubenswrapper[4945]: E0109 00:55:58.528613 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc970906-6f53-4c9c-931f-ba1bb6758411" containerName="nova-manage" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.528621 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc970906-6f53-4c9c-931f-ba1bb6758411" containerName="nova-manage" Jan 09 00:55:58 crc kubenswrapper[4945]: E0109 00:55:58.528640 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="019c18f5-ddc0-4a2c-a320-5dbe2de6fa62" containerName="nova-api-log" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.528650 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="019c18f5-ddc0-4a2c-a320-5dbe2de6fa62" containerName="nova-api-log" Jan 09 00:55:58 crc kubenswrapper[4945]: E0109 00:55:58.528684 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57d9b20-c804-4a12-ac78-39c180760468" containerName="nova-metadata-metadata" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.528692 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57d9b20-c804-4a12-ac78-39c180760468" containerName="nova-metadata-metadata" Jan 09 00:55:58 crc kubenswrapper[4945]: E0109 00:55:58.528703 4945 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="019c18f5-ddc0-4a2c-a320-5dbe2de6fa62" containerName="nova-api-api" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.528709 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="019c18f5-ddc0-4a2c-a320-5dbe2de6fa62" containerName="nova-api-api" Jan 09 00:55:58 crc kubenswrapper[4945]: E0109 00:55:58.528720 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="639d2ba2-21bc-41cd-9227-090f4fc150ab" containerName="nova-scheduler-scheduler" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.528727 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="639d2ba2-21bc-41cd-9227-090f4fc150ab" containerName="nova-scheduler-scheduler" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.528946 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="639d2ba2-21bc-41cd-9227-090f4fc150ab" containerName="nova-scheduler-scheduler" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.528966 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="c57d9b20-c804-4a12-ac78-39c180760468" containerName="nova-metadata-log" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.528982 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc970906-6f53-4c9c-931f-ba1bb6758411" containerName="nova-manage" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.529012 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="c57d9b20-c804-4a12-ac78-39c180760468" containerName="nova-metadata-metadata" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.529026 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="019c18f5-ddc0-4a2c-a320-5dbe2de6fa62" containerName="nova-api-api" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.529037 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="019c18f5-ddc0-4a2c-a320-5dbe2de6fa62" containerName="nova-api-log" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.529655 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.532231 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.537359 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.551445 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.564156 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.587852 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.589768 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.593899 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.605326 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.607141 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.610252 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.611423 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86b1035d-f717-456a-9a37-a7930dfe5a22-config-data\") pod \"nova-scheduler-0\" (UID: \"86b1035d-f717-456a-9a37-a7930dfe5a22\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.611503 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptgck\" (UniqueName: \"kubernetes.io/projected/86b1035d-f717-456a-9a37-a7930dfe5a22-kube-api-access-ptgck\") pod \"nova-scheduler-0\" (UID: \"86b1035d-f717-456a-9a37-a7930dfe5a22\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.611588 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b1035d-f717-456a-9a37-a7930dfe5a22-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"86b1035d-f717-456a-9a37-a7930dfe5a22\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.613766 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.632310 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.713852 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3572dc12-51d9-4923-ae10-b803485aa49b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3572dc12-51d9-4923-ae10-b803485aa49b\") " pod="openstack/nova-metadata-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.713962 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b1035d-f717-456a-9a37-a7930dfe5a22-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"86b1035d-f717-456a-9a37-a7930dfe5a22\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.714031 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3572dc12-51d9-4923-ae10-b803485aa49b-logs\") pod \"nova-metadata-0\" (UID: \"3572dc12-51d9-4923-ae10-b803485aa49b\") " pod="openstack/nova-metadata-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.714063 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\") " pod="openstack/nova-api-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.714101 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-logs\") pod \"nova-api-0\" (UID: \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\") " pod="openstack/nova-api-0" Jan 09 00:55:58 crc 
kubenswrapper[4945]: I0109 00:55:58.714141 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqbzm\" (UniqueName: \"kubernetes.io/projected/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-kube-api-access-sqbzm\") pod \"nova-api-0\" (UID: \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\") " pod="openstack/nova-api-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.714437 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86b1035d-f717-456a-9a37-a7930dfe5a22-config-data\") pod \"nova-scheduler-0\" (UID: \"86b1035d-f717-456a-9a37-a7930dfe5a22\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.714529 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rtqj\" (UniqueName: \"kubernetes.io/projected/3572dc12-51d9-4923-ae10-b803485aa49b-kube-api-access-8rtqj\") pod \"nova-metadata-0\" (UID: \"3572dc12-51d9-4923-ae10-b803485aa49b\") " pod="openstack/nova-metadata-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.714596 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3572dc12-51d9-4923-ae10-b803485aa49b-config-data\") pod \"nova-metadata-0\" (UID: \"3572dc12-51d9-4923-ae10-b803485aa49b\") " pod="openstack/nova-metadata-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.714627 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-config-data\") pod \"nova-api-0\" (UID: \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\") " pod="openstack/nova-api-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.714659 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptgck\" (UniqueName: \"kubernetes.io/projected/86b1035d-f717-456a-9a37-a7930dfe5a22-kube-api-access-ptgck\") pod \"nova-scheduler-0\" (UID: \"86b1035d-f717-456a-9a37-a7930dfe5a22\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.720718 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86b1035d-f717-456a-9a37-a7930dfe5a22-config-data\") pod \"nova-scheduler-0\" (UID: \"86b1035d-f717-456a-9a37-a7930dfe5a22\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.723938 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b1035d-f717-456a-9a37-a7930dfe5a22-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"86b1035d-f717-456a-9a37-a7930dfe5a22\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.732544 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptgck\" (UniqueName: \"kubernetes.io/projected/86b1035d-f717-456a-9a37-a7930dfe5a22-kube-api-access-ptgck\") pod \"nova-scheduler-0\" (UID: \"86b1035d-f717-456a-9a37-a7930dfe5a22\") " pod="openstack/nova-scheduler-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.816285 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-logs\") pod 
\"nova-api-0\" (UID: \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\") " pod="openstack/nova-api-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.816369 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqbzm\" (UniqueName: \"kubernetes.io/projected/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-kube-api-access-sqbzm\") pod \"nova-api-0\" (UID: \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\") " pod="openstack/nova-api-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.816480 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rtqj\" (UniqueName: \"kubernetes.io/projected/3572dc12-51d9-4923-ae10-b803485aa49b-kube-api-access-8rtqj\") pod \"nova-metadata-0\" (UID: \"3572dc12-51d9-4923-ae10-b803485aa49b\") " pod="openstack/nova-metadata-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.816510 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3572dc12-51d9-4923-ae10-b803485aa49b-config-data\") pod \"nova-metadata-0\" (UID: \"3572dc12-51d9-4923-ae10-b803485aa49b\") " pod="openstack/nova-metadata-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.816528 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-config-data\") pod \"nova-api-0\" (UID: \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\") " pod="openstack/nova-api-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.816561 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3572dc12-51d9-4923-ae10-b803485aa49b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3572dc12-51d9-4923-ae10-b803485aa49b\") " pod="openstack/nova-metadata-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.816596 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3572dc12-51d9-4923-ae10-b803485aa49b-logs\") pod \"nova-metadata-0\" (UID: \"3572dc12-51d9-4923-ae10-b803485aa49b\") " pod="openstack/nova-metadata-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.816612 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\") " pod="openstack/nova-api-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.816806 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-logs\") pod \"nova-api-0\" (UID: \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\") " pod="openstack/nova-api-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.818103 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3572dc12-51d9-4923-ae10-b803485aa49b-logs\") pod \"nova-metadata-0\" (UID: \"3572dc12-51d9-4923-ae10-b803485aa49b\") " pod="openstack/nova-metadata-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.820352 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-config-data\") pod \"nova-api-0\" (UID: 
\"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\") " pod="openstack/nova-api-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.842476 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\") " pod="openstack/nova-api-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.850161 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rtqj\" (UniqueName: \"kubernetes.io/projected/3572dc12-51d9-4923-ae10-b803485aa49b-kube-api-access-8rtqj\") pod \"nova-metadata-0\" (UID: \"3572dc12-51d9-4923-ae10-b803485aa49b\") " pod="openstack/nova-metadata-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.852776 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3572dc12-51d9-4923-ae10-b803485aa49b-config-data\") pod \"nova-metadata-0\" (UID: \"3572dc12-51d9-4923-ae10-b803485aa49b\") " pod="openstack/nova-metadata-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.857751 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.867144 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3572dc12-51d9-4923-ae10-b803485aa49b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3572dc12-51d9-4923-ae10-b803485aa49b\") " pod="openstack/nova-metadata-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.889780 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqbzm\" (UniqueName: \"kubernetes.io/projected/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-kube-api-access-sqbzm\") pod \"nova-api-0\" (UID: \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\") " pod="openstack/nova-api-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.913826 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 00:55:58 crc kubenswrapper[4945]: I0109 00:55:58.932543 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 00:55:59 crc kubenswrapper[4945]: I0109 00:55:59.491053 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:55:59 crc kubenswrapper[4945]: I0109 00:55:59.593607 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:55:59 crc kubenswrapper[4945]: I0109 00:55:59.604816 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 00:56:00 crc kubenswrapper[4945]: I0109 00:56:00.015231 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="019c18f5-ddc0-4a2c-a320-5dbe2de6fa62" path="/var/lib/kubelet/pods/019c18f5-ddc0-4a2c-a320-5dbe2de6fa62/volumes" Jan 09 00:56:00 crc kubenswrapper[4945]: I0109 00:56:00.016645 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="639d2ba2-21bc-41cd-9227-090f4fc150ab" path="/var/lib/kubelet/pods/639d2ba2-21bc-41cd-9227-090f4fc150ab/volumes" Jan 09 00:56:00 crc kubenswrapper[4945]: I0109 00:56:00.017240 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c57d9b20-c804-4a12-ac78-39c180760468" path="/var/lib/kubelet/pods/c57d9b20-c804-4a12-ac78-39c180760468/volumes" Jan 09 00:56:00 crc kubenswrapper[4945]: I0109 00:56:00.158939 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3572dc12-51d9-4923-ae10-b803485aa49b","Type":"ContainerStarted","Data":"f46b50520b36bfe2e5dcec7684cee9c20b6ac2bcba9178c7a48cbe1ab967285a"} Jan 09 00:56:00 crc kubenswrapper[4945]: I0109 00:56:00.159001 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3572dc12-51d9-4923-ae10-b803485aa49b","Type":"ContainerStarted","Data":"566b4942bf60106a263cd33197255f513717019b4725b00b9bf9222cc90505a6"} Jan 09 00:56:00 crc kubenswrapper[4945]: I0109 00:56:00.159012 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3572dc12-51d9-4923-ae10-b803485aa49b","Type":"ContainerStarted","Data":"bc4da872d73fdad8b63bc0ed9e953f41bf6b69d4aaa039d548027d2f6769cba6"} Jan 09 00:56:00 crc kubenswrapper[4945]: I0109 00:56:00.160697 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7","Type":"ContainerStarted","Data":"664b755a44727ae00b8e44a22f3782602e7fd09dc5e16f41b6e133a3b1d13dff"} Jan 09 00:56:00 crc kubenswrapper[4945]: I0109 00:56:00.160732 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7","Type":"ContainerStarted","Data":"8e2e016877059bd5020fc6726e36fce6c91890aa2564c54f6d99324288d9fae7"} Jan 09 00:56:00 crc kubenswrapper[4945]: I0109 00:56:00.160748 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7","Type":"ContainerStarted","Data":"3cf87b5009311829bc836a0838ad205afea4628e055233c1cff867eac9602ec8"} Jan 09 00:56:00 crc kubenswrapper[4945]: I0109 00:56:00.163797 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"86b1035d-f717-456a-9a37-a7930dfe5a22","Type":"ContainerStarted","Data":"4448b4046d667955af74c666640c7610912a1dede5cd19d47d15b93e141a2253"} Jan 09 00:56:00 crc kubenswrapper[4945]: I0109 00:56:00.163854 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"86b1035d-f717-456a-9a37-a7930dfe5a22","Type":"ContainerStarted","Data":"5d582121f2ffbf223ac52b83e4646528487eb5b57d9055e13336f6b87e752683"} Jan 09 00:56:00 crc kubenswrapper[4945]: I0109 00:56:00.182546 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.182522074 podStartE2EDuration="2.182522074s" podCreationTimestamp="2026-01-09 00:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:56:00.175739007 +0000 UTC m=+6030.486897963" watchObservedRunningTime="2026-01-09 00:56:00.182522074 +0000 UTC m=+6030.493681020" Jan 09 00:56:00 crc kubenswrapper[4945]: I0109 00:56:00.210380 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.210354019 podStartE2EDuration="2.210354019s" podCreationTimestamp="2026-01-09 00:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:56:00.202316571 +0000 UTC m=+6030.513475507" watchObservedRunningTime="2026-01-09 00:56:00.210354019 +0000 UTC m=+6030.521512965" Jan 09 00:56:00 crc kubenswrapper[4945]: I0109 00:56:00.224720 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.224697922 podStartE2EDuration="2.224697922s" podCreationTimestamp="2026-01-09 00:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:56:00.224323003 +0000 UTC m=+6030.535481949" watchObservedRunningTime="2026-01-09 00:56:00.224697922 +0000 UTC m=+6030.535856878" Jan 09 00:56:03 crc kubenswrapper[4945]: I0109 00:56:03.858283 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 09 00:56:03 crc kubenswrapper[4945]: I0109 00:56:03.933304 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 00:56:03 crc kubenswrapper[4945]: I0109 00:56:03.933347 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 00:56:08 crc kubenswrapper[4945]: I0109 00:56:08.858111 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 09 00:56:08 crc kubenswrapper[4945]: I0109 00:56:08.889869 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 09 00:56:08 crc kubenswrapper[4945]: I0109 00:56:08.915074 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 00:56:08 crc kubenswrapper[4945]: I0109 00:56:08.915136 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 00:56:08 crc kubenswrapper[4945]: I0109 00:56:08.933917 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 00:56:08 crc kubenswrapper[4945]: I0109 00:56:08.933976 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 00:56:09 crc kubenswrapper[4945]: I0109 00:56:09.284676 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 09 00:56:10 crc kubenswrapper[4945]: I0109 
00:56:10.091216 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3572dc12-51d9-4923-ae10-b803485aa49b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.78:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 00:56:10 crc kubenswrapper[4945]: I0109 00:56:10.091232 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.77:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 00:56:10 crc kubenswrapper[4945]: I0109 00:56:10.091260 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3572dc12-51d9-4923-ae10-b803485aa49b" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.78:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 00:56:10 crc kubenswrapper[4945]: I0109 00:56:10.091216 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.77:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 00:56:15 crc kubenswrapper[4945]: I0109 00:56:15.012647 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qmqgr"] Jan 09 00:56:15 crc kubenswrapper[4945]: I0109 00:56:15.016224 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmqgr" Jan 09 00:56:15 crc kubenswrapper[4945]: I0109 00:56:15.032568 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qmqgr"] Jan 09 00:56:15 crc kubenswrapper[4945]: I0109 00:56:15.136123 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-utilities\") pod \"redhat-operators-qmqgr\" (UID: \"9d15e8af-638c-4ac3-a03b-8f4072b8d55c\") " pod="openshift-marketplace/redhat-operators-qmqgr" Jan 09 00:56:15 crc kubenswrapper[4945]: I0109 00:56:15.136255 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fpgk\" (UniqueName: \"kubernetes.io/projected/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-kube-api-access-6fpgk\") pod \"redhat-operators-qmqgr\" (UID: \"9d15e8af-638c-4ac3-a03b-8f4072b8d55c\") " pod="openshift-marketplace/redhat-operators-qmqgr" Jan 09 00:56:15 crc kubenswrapper[4945]: I0109 00:56:15.136288 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-catalog-content\") pod \"redhat-operators-qmqgr\" (UID: \"9d15e8af-638c-4ac3-a03b-8f4072b8d55c\") " pod="openshift-marketplace/redhat-operators-qmqgr" Jan 09 00:56:15 crc kubenswrapper[4945]: I0109 00:56:15.238643 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-catalog-content\") pod \"redhat-operators-qmqgr\" (UID: \"9d15e8af-638c-4ac3-a03b-8f4072b8d55c\") " 
pod="openshift-marketplace/redhat-operators-qmqgr" Jan 09 00:56:15 crc kubenswrapper[4945]: I0109 00:56:15.238789 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-utilities\") pod \"redhat-operators-qmqgr\" (UID: \"9d15e8af-638c-4ac3-a03b-8f4072b8d55c\") " pod="openshift-marketplace/redhat-operators-qmqgr" Jan 09 00:56:15 crc kubenswrapper[4945]: I0109 00:56:15.238898 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fpgk\" (UniqueName: \"kubernetes.io/projected/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-kube-api-access-6fpgk\") pod \"redhat-operators-qmqgr\" (UID: \"9d15e8af-638c-4ac3-a03b-8f4072b8d55c\") " pod="openshift-marketplace/redhat-operators-qmqgr" Jan 09 00:56:15 crc kubenswrapper[4945]: I0109 00:56:15.239190 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-catalog-content\") pod \"redhat-operators-qmqgr\" (UID: \"9d15e8af-638c-4ac3-a03b-8f4072b8d55c\") " pod="openshift-marketplace/redhat-operators-qmqgr" Jan 09 00:56:15 crc kubenswrapper[4945]: I0109 00:56:15.239585 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-utilities\") pod \"redhat-operators-qmqgr\" (UID: \"9d15e8af-638c-4ac3-a03b-8f4072b8d55c\") " pod="openshift-marketplace/redhat-operators-qmqgr" Jan 09 00:56:15 crc kubenswrapper[4945]: I0109 00:56:15.263267 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fpgk\" (UniqueName: \"kubernetes.io/projected/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-kube-api-access-6fpgk\") pod \"redhat-operators-qmqgr\" (UID: \"9d15e8af-638c-4ac3-a03b-8f4072b8d55c\") " pod="openshift-marketplace/redhat-operators-qmqgr" Jan 09 00:56:15 crc kubenswrapper[4945]: I0109 00:56:15.407159 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qmqgr" Jan 09 00:56:15 crc kubenswrapper[4945]: I0109 00:56:15.865620 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qmqgr"] Jan 09 00:56:15 crc kubenswrapper[4945]: W0109 00:56:15.866208 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d15e8af_638c_4ac3_a03b_8f4072b8d55c.slice/crio-63dab5ca028dbe1cbeccedf2a516b3a664b9c6dcc44634c1a49236b42304ee16 WatchSource:0}: Error finding container 63dab5ca028dbe1cbeccedf2a516b3a664b9c6dcc44634c1a49236b42304ee16: Status 404 returned error can't find the container with id 63dab5ca028dbe1cbeccedf2a516b3a664b9c6dcc44634c1a49236b42304ee16 Jan 09 00:56:16 crc kubenswrapper[4945]: I0109 00:56:16.310424 4945 generic.go:334] "Generic (PLEG): container finished" podID="9d15e8af-638c-4ac3-a03b-8f4072b8d55c" containerID="13c2d3f564773b901ed721a25428c69c28eecf8e86bf4d0c88b4554296a77dee" exitCode=0 Jan 09 00:56:16 crc kubenswrapper[4945]: I0109 00:56:16.310498 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmqgr" event={"ID":"9d15e8af-638c-4ac3-a03b-8f4072b8d55c","Type":"ContainerDied","Data":"13c2d3f564773b901ed721a25428c69c28eecf8e86bf4d0c88b4554296a77dee"} Jan 09 00:56:16 crc kubenswrapper[4945]: I0109 00:56:16.310532 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmqgr" event={"ID":"9d15e8af-638c-4ac3-a03b-8f4072b8d55c","Type":"ContainerStarted","Data":"63dab5ca028dbe1cbeccedf2a516b3a664b9c6dcc44634c1a49236b42304ee16"} Jan 09 00:56:16 crc kubenswrapper[4945]: I0109 00:56:16.313546 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 00:56:17 crc kubenswrapper[4945]: I0109 00:56:17.321307 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmqgr" event={"ID":"9d15e8af-638c-4ac3-a03b-8f4072b8d55c","Type":"ContainerStarted","Data":"eed9b856f3d25117c70f42021d574b7abb5d41ff3978a64e4b4b403b5230b084"} Jan 09 00:56:18 crc kubenswrapper[4945]: I0109 00:56:18.331493 4945 generic.go:334] "Generic (PLEG): container finished" podID="9d15e8af-638c-4ac3-a03b-8f4072b8d55c" containerID="eed9b856f3d25117c70f42021d574b7abb5d41ff3978a64e4b4b403b5230b084" exitCode=0 Jan 09 00:56:18 crc kubenswrapper[4945]: I0109 00:56:18.331582 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmqgr" event={"ID":"9d15e8af-638c-4ac3-a03b-8f4072b8d55c","Type":"ContainerDied","Data":"eed9b856f3d25117c70f42021d574b7abb5d41ff3978a64e4b4b403b5230b084"} Jan 09 00:56:18 crc kubenswrapper[4945]: I0109 00:56:18.941521 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 09 00:56:18 crc kubenswrapper[4945]: I0109 00:56:18.942443 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 09 00:56:18 crc kubenswrapper[4945]: I0109 00:56:18.943255 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 09 00:56:18 crc kubenswrapper[4945]: I0109 00:56:18.944165 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 09 00:56:18 crc kubenswrapper[4945]: I0109 00:56:18.944523 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-api-0" Jan 09 00:56:18 crc kubenswrapper[4945]: I0109 00:56:18.946486 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 09 00:56:18 crc kubenswrapper[4945]: I0109 00:56:18.948505 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.343479 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmqgr" event={"ID":"9d15e8af-638c-4ac3-a03b-8f4072b8d55c","Type":"ContainerStarted","Data":"2c3bfa0841e1968680f57b119752b2b4850fbd594648e78c45c898dfe40ef448"} Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.344282 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.376423 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.384335 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.400120 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qmqgr" podStartSLOduration=2.663446489 podStartE2EDuration="5.400095392s" podCreationTimestamp="2026-01-09 00:56:14 +0000 UTC" firstStartedPulling="2026-01-09 00:56:16.313289333 +0000 UTC m=+6046.624448279" lastFinishedPulling="2026-01-09 00:56:19.049938236 +0000 UTC m=+6049.361097182" observedRunningTime="2026-01-09 00:56:19.390332152 +0000 UTC m=+6049.701491118" watchObservedRunningTime="2026-01-09 00:56:19.400095392 +0000 UTC m=+6049.711254338" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.680156 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74bcb569c7-spwnr"] Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.682118 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.697502 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74bcb569c7-spwnr"] Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.825481 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-578q5\" (UniqueName: \"kubernetes.io/projected/1f585148-b171-42e4-8278-be3ac570c103-kube-api-access-578q5\") pod \"dnsmasq-dns-74bcb569c7-spwnr\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.825561 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-dns-svc\") pod \"dnsmasq-dns-74bcb569c7-spwnr\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.825589 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-ovsdbserver-nb\") pod \"dnsmasq-dns-74bcb569c7-spwnr\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.825877 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-config\") pod \"dnsmasq-dns-74bcb569c7-spwnr\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.825918 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-ovsdbserver-sb\") pod \"dnsmasq-dns-74bcb569c7-spwnr\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.928093 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-config\") pod \"dnsmasq-dns-74bcb569c7-spwnr\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.928151 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-ovsdbserver-sb\") pod \"dnsmasq-dns-74bcb569c7-spwnr\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.928217 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-578q5\" (UniqueName: \"kubernetes.io/projected/1f585148-b171-42e4-8278-be3ac570c103-kube-api-access-578q5\") pod \"dnsmasq-dns-74bcb569c7-spwnr\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.928261 4945 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-dns-svc\") pod \"dnsmasq-dns-74bcb569c7-spwnr\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.928282 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-ovsdbserver-nb\") pod \"dnsmasq-dns-74bcb569c7-spwnr\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.929136 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-config\") pod \"dnsmasq-dns-74bcb569c7-spwnr\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.929303 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-ovsdbserver-sb\") pod \"dnsmasq-dns-74bcb569c7-spwnr\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.929396 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-ovsdbserver-nb\") pod \"dnsmasq-dns-74bcb569c7-spwnr\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.929423 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-dns-svc\") pod \"dnsmasq-dns-74bcb569c7-spwnr\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:19 crc kubenswrapper[4945]: I0109 00:56:19.952054 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-578q5\" (UniqueName: \"kubernetes.io/projected/1f585148-b171-42e4-8278-be3ac570c103-kube-api-access-578q5\") pod \"dnsmasq-dns-74bcb569c7-spwnr\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:20 crc kubenswrapper[4945]: I0109 00:56:20.008638 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:20 crc kubenswrapper[4945]: I0109 00:56:20.571151 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74bcb569c7-spwnr"] Jan 09 00:56:21 crc kubenswrapper[4945]: I0109 00:56:21.361832 4945 generic.go:334] "Generic (PLEG): container finished" podID="1f585148-b171-42e4-8278-be3ac570c103" containerID="10a63d09f7d28b40d533e2383e6b0fd72ddb646aab421c23d8771ec24ab4de82" exitCode=0 Jan 09 00:56:21 crc kubenswrapper[4945]: I0109 00:56:21.362033 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" event={"ID":"1f585148-b171-42e4-8278-be3ac570c103","Type":"ContainerDied","Data":"10a63d09f7d28b40d533e2383e6b0fd72ddb646aab421c23d8771ec24ab4de82"} Jan 09 00:56:21 crc kubenswrapper[4945]: I0109 00:56:21.362239 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" event={"ID":"1f585148-b171-42e4-8278-be3ac570c103","Type":"ContainerStarted","Data":"1fbfe6c5223a9bcdb29383663ac06dc0f8cb8034512ead148cc4ffeb001a98d7"} Jan 09 00:56:22 crc kubenswrapper[4945]: I0109 00:56:22.373282 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" event={"ID":"1f585148-b171-42e4-8278-be3ac570c103","Type":"ContainerStarted","Data":"bfae634e9f7d8fc78c1edbdf59883ef87324a85024168a7d03fd8ff519eab291"} Jan 09 00:56:22 crc kubenswrapper[4945]: I0109 00:56:22.373648 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:22 crc kubenswrapper[4945]: I0109 00:56:22.406668 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" podStartSLOduration=3.406644206 podStartE2EDuration="3.406644206s" podCreationTimestamp="2026-01-09 00:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:56:22.398770152 +0000 UTC m=+6052.709929128" watchObservedRunningTime="2026-01-09 00:56:22.406644206 +0000 UTC m=+6052.717803152" Jan 09 00:56:25 crc kubenswrapper[4945]: I0109 00:56:25.407863 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qmqgr" Jan 09 00:56:25 crc kubenswrapper[4945]: I0109 00:56:25.408672 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qmqgr" Jan 09 00:56:25 crc kubenswrapper[4945]: I0109 00:56:25.468063 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qmqgr" Jan 09 00:56:26 crc kubenswrapper[4945]: I0109 00:56:26.461948 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qmqgr" Jan 09 00:56:26 crc kubenswrapper[4945]: I0109 00:56:26.532274 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qmqgr"] Jan 09 00:56:28 crc kubenswrapper[4945]: I0109 00:56:28.433127 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qmqgr" podUID="9d15e8af-638c-4ac3-a03b-8f4072b8d55c" containerName="registry-server" containerID="cri-o://2c3bfa0841e1968680f57b119752b2b4850fbd594648e78c45c898dfe40ef448" gracePeriod=2 Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.018601 4945 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.109281 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-788568d9c9-5mvgb"] Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.109792 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-788568d9c9-5mvgb" podUID="ca45b03d-1d0b-4833-92b9-e799a1aa9bdb" containerName="dnsmasq-dns" containerID="cri-o://51477c22c40eb0ba4caf981578102f1606287875f59d43633600fa643039c446" gracePeriod=10 Jan 09 00:56:30 crc kubenswrapper[4945]: E0109 00:56:30.425521 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca45b03d_1d0b_4833_92b9_e799a1aa9bdb.slice/crio-conmon-51477c22c40eb0ba4caf981578102f1606287875f59d43633600fa643039c446.scope\": RecentStats: unable to find data in memory cache]" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.468191 4945 generic.go:334] "Generic (PLEG): container finished" podID="ca45b03d-1d0b-4833-92b9-e799a1aa9bdb" containerID="51477c22c40eb0ba4caf981578102f1606287875f59d43633600fa643039c446" exitCode=0 Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.468265 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-788568d9c9-5mvgb" event={"ID":"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb","Type":"ContainerDied","Data":"51477c22c40eb0ba4caf981578102f1606287875f59d43633600fa643039c446"} Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.473348 4945 generic.go:334] "Generic (PLEG): container finished" podID="9d15e8af-638c-4ac3-a03b-8f4072b8d55c" containerID="2c3bfa0841e1968680f57b119752b2b4850fbd594648e78c45c898dfe40ef448" exitCode=0 Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.473393 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmqgr" event={"ID":"9d15e8af-638c-4ac3-a03b-8f4072b8d55c","Type":"ContainerDied","Data":"2c3bfa0841e1968680f57b119752b2b4850fbd594648e78c45c898dfe40ef448"} Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.643024 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-788568d9c9-5mvgb" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.727752 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-config\") pod \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.727966 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-ovsdbserver-nb\") pod \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.728055 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzbxj\" (UniqueName: \"kubernetes.io/projected/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-kube-api-access-jzbxj\") pod \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.728077 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-ovsdbserver-sb\") pod \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.728183 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-dns-svc\") pod \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\" (UID: \"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb\") " Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.733322 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-kube-api-access-jzbxj" (OuterVolumeSpecName: "kube-api-access-jzbxj") pod "ca45b03d-1d0b-4833-92b9-e799a1aa9bdb" (UID: "ca45b03d-1d0b-4833-92b9-e799a1aa9bdb"). InnerVolumeSpecName "kube-api-access-jzbxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.775096 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ca45b03d-1d0b-4833-92b9-e799a1aa9bdb" (UID: "ca45b03d-1d0b-4833-92b9-e799a1aa9bdb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.777110 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-config" (OuterVolumeSpecName: "config") pod "ca45b03d-1d0b-4833-92b9-e799a1aa9bdb" (UID: "ca45b03d-1d0b-4833-92b9-e799a1aa9bdb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.777706 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ca45b03d-1d0b-4833-92b9-e799a1aa9bdb" (UID: "ca45b03d-1d0b-4833-92b9-e799a1aa9bdb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.779734 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ca45b03d-1d0b-4833-92b9-e799a1aa9bdb" (UID: "ca45b03d-1d0b-4833-92b9-e799a1aa9bdb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.809169 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmqgr" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.834230 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fpgk\" (UniqueName: \"kubernetes.io/projected/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-kube-api-access-6fpgk\") pod \"9d15e8af-638c-4ac3-a03b-8f4072b8d55c\" (UID: \"9d15e8af-638c-4ac3-a03b-8f4072b8d55c\") " Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.834389 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-utilities\") pod \"9d15e8af-638c-4ac3-a03b-8f4072b8d55c\" (UID: \"9d15e8af-638c-4ac3-a03b-8f4072b8d55c\") " Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.834457 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-catalog-content\") pod \"9d15e8af-638c-4ac3-a03b-8f4072b8d55c\" (UID: \"9d15e8af-638c-4ac3-a03b-8f4072b8d55c\") " Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.835203 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.835216 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-config\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.835226 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.835238 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzbxj\" (UniqueName: \"kubernetes.io/projected/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-kube-api-access-jzbxj\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.835251 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.836341 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-utilities" (OuterVolumeSpecName: "utilities") pod "9d15e8af-638c-4ac3-a03b-8f4072b8d55c" (UID: "9d15e8af-638c-4ac3-a03b-8f4072b8d55c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.838495 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-kube-api-access-6fpgk" (OuterVolumeSpecName: "kube-api-access-6fpgk") pod "9d15e8af-638c-4ac3-a03b-8f4072b8d55c" (UID: "9d15e8af-638c-4ac3-a03b-8f4072b8d55c"). InnerVolumeSpecName "kube-api-access-6fpgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.937568 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fpgk\" (UniqueName: \"kubernetes.io/projected/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-kube-api-access-6fpgk\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.937604 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:30 crc kubenswrapper[4945]: I0109 00:56:30.958695 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d15e8af-638c-4ac3-a03b-8f4072b8d55c" (UID: "9d15e8af-638c-4ac3-a03b-8f4072b8d55c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:56:31 crc kubenswrapper[4945]: I0109 00:56:31.040060 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d15e8af-638c-4ac3-a03b-8f4072b8d55c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:31 crc kubenswrapper[4945]: I0109 00:56:31.484639 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qmqgr" event={"ID":"9d15e8af-638c-4ac3-a03b-8f4072b8d55c","Type":"ContainerDied","Data":"63dab5ca028dbe1cbeccedf2a516b3a664b9c6dcc44634c1a49236b42304ee16"} Jan 09 00:56:31 crc kubenswrapper[4945]: I0109 00:56:31.484969 4945 scope.go:117] "RemoveContainer" containerID="2c3bfa0841e1968680f57b119752b2b4850fbd594648e78c45c898dfe40ef448" Jan 09 00:56:31 crc kubenswrapper[4945]: I0109 00:56:31.485159 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qmqgr" Jan 09 00:56:31 crc kubenswrapper[4945]: I0109 00:56:31.489673 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-788568d9c9-5mvgb" event={"ID":"ca45b03d-1d0b-4833-92b9-e799a1aa9bdb","Type":"ContainerDied","Data":"a0536201742a508f6ba46789c9ae1c6cfff7724f78e928bfd57dcba5557546ce"} Jan 09 00:56:31 crc kubenswrapper[4945]: I0109 00:56:31.489779 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-788568d9c9-5mvgb" Jan 09 00:56:31 crc kubenswrapper[4945]: I0109 00:56:31.509006 4945 scope.go:117] "RemoveContainer" containerID="eed9b856f3d25117c70f42021d574b7abb5d41ff3978a64e4b4b403b5230b084" Jan 09 00:56:31 crc kubenswrapper[4945]: I0109 00:56:31.522054 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qmqgr"] Jan 09 00:56:31 crc kubenswrapper[4945]: I0109 00:56:31.532017 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qmqgr"] Jan 09 00:56:31 crc kubenswrapper[4945]: I0109 00:56:31.549183 4945 scope.go:117] "RemoveContainer" containerID="13c2d3f564773b901ed721a25428c69c28eecf8e86bf4d0c88b4554296a77dee" Jan 09 00:56:31 crc kubenswrapper[4945]: I0109 00:56:31.551642 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-788568d9c9-5mvgb"] Jan 09 00:56:31 crc kubenswrapper[4945]: I0109 00:56:31.561447 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-788568d9c9-5mvgb"] Jan 09 00:56:31 crc kubenswrapper[4945]: I0109 00:56:31.592366 4945 scope.go:117] "RemoveContainer" containerID="51477c22c40eb0ba4caf981578102f1606287875f59d43633600fa643039c446" Jan 09 00:56:31 crc kubenswrapper[4945]: I0109 00:56:31.616493 4945 scope.go:117] "RemoveContainer" containerID="282b51394323560df1485118b76d9905e2c3cbc373cba54712693d75f15ce33a" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.010693 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d15e8af-638c-4ac3-a03b-8f4072b8d55c" path="/var/lib/kubelet/pods/9d15e8af-638c-4ac3-a03b-8f4072b8d55c/volumes" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.011699 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca45b03d-1d0b-4833-92b9-e799a1aa9bdb" path="/var/lib/kubelet/pods/ca45b03d-1d0b-4833-92b9-e799a1aa9bdb/volumes" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.410706 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-swcpn"] Jan 09 00:56:32 crc kubenswrapper[4945]: E0109 00:56:32.416127 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca45b03d-1d0b-4833-92b9-e799a1aa9bdb" containerName="dnsmasq-dns" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.416160 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca45b03d-1d0b-4833-92b9-e799a1aa9bdb" containerName="dnsmasq-dns" Jan 09 00:56:32 crc kubenswrapper[4945]: E0109 00:56:32.416170 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca45b03d-1d0b-4833-92b9-e799a1aa9bdb" containerName="init" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.416177 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca45b03d-1d0b-4833-92b9-e799a1aa9bdb" containerName="init" Jan 09 00:56:32 crc kubenswrapper[4945]: E0109 00:56:32.416202 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d15e8af-638c-4ac3-a03b-8f4072b8d55c" containerName="extract-content" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.416210 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d15e8af-638c-4ac3-a03b-8f4072b8d55c" containerName="extract-content" Jan 09 00:56:32 crc kubenswrapper[4945]: E0109 00:56:32.416233 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d15e8af-638c-4ac3-a03b-8f4072b8d55c" containerName="registry-server" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.416240 4945 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9d15e8af-638c-4ac3-a03b-8f4072b8d55c" containerName="registry-server" Jan 09 00:56:32 crc kubenswrapper[4945]: E0109 00:56:32.416247 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d15e8af-638c-4ac3-a03b-8f4072b8d55c" containerName="extract-utilities" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.416254 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d15e8af-638c-4ac3-a03b-8f4072b8d55c" containerName="extract-utilities" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.416515 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca45b03d-1d0b-4833-92b9-e799a1aa9bdb" containerName="dnsmasq-dns" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.416532 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d15e8af-638c-4ac3-a03b-8f4072b8d55c" containerName="registry-server" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.417772 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-swcpn" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.424262 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-swcpn"] Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.465485 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp8t4\" (UniqueName: \"kubernetes.io/projected/16705031-b599-4f15-91f2-5258e613426e-kube-api-access-vp8t4\") pod \"cinder-db-create-swcpn\" (UID: \"16705031-b599-4f15-91f2-5258e613426e\") " pod="openstack/cinder-db-create-swcpn" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.465570 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16705031-b599-4f15-91f2-5258e613426e-operator-scripts\") pod \"cinder-db-create-swcpn\" (UID: \"16705031-b599-4f15-91f2-5258e613426e\") " pod="openstack/cinder-db-create-swcpn" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.509622 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-73d7-account-create-update-6r96b"] Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.511082 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-73d7-account-create-update-6r96b" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.516838 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.519838 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-73d7-account-create-update-6r96b"] Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.566585 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp8t4\" (UniqueName: \"kubernetes.io/projected/16705031-b599-4f15-91f2-5258e613426e-kube-api-access-vp8t4\") pod \"cinder-db-create-swcpn\" (UID: \"16705031-b599-4f15-91f2-5258e613426e\") " pod="openstack/cinder-db-create-swcpn" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.566685 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16705031-b599-4f15-91f2-5258e613426e-operator-scripts\") pod \"cinder-db-create-swcpn\" (UID: \"16705031-b599-4f15-91f2-5258e613426e\") " pod="openstack/cinder-db-create-swcpn" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.566756 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bffef94-e4a5-4fec-9fed-5199d1eb52e3-operator-scripts\") pod \"cinder-73d7-account-create-update-6r96b\" (UID: \"1bffef94-e4a5-4fec-9fed-5199d1eb52e3\") " pod="openstack/cinder-73d7-account-create-update-6r96b" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.566794 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjnzc\" (UniqueName: \"kubernetes.io/projected/1bffef94-e4a5-4fec-9fed-5199d1eb52e3-kube-api-access-mjnzc\") pod \"cinder-73d7-account-create-update-6r96b\" (UID: \"1bffef94-e4a5-4fec-9fed-5199d1eb52e3\") " pod="openstack/cinder-73d7-account-create-update-6r96b" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.567892 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16705031-b599-4f15-91f2-5258e613426e-operator-scripts\") pod \"cinder-db-create-swcpn\" (UID: \"16705031-b599-4f15-91f2-5258e613426e\") " pod="openstack/cinder-db-create-swcpn" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.591257 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp8t4\" (UniqueName: \"kubernetes.io/projected/16705031-b599-4f15-91f2-5258e613426e-kube-api-access-vp8t4\") pod \"cinder-db-create-swcpn\" (UID: \"16705031-b599-4f15-91f2-5258e613426e\") " pod="openstack/cinder-db-create-swcpn" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.668842 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bffef94-e4a5-4fec-9fed-5199d1eb52e3-operator-scripts\") pod \"cinder-73d7-account-create-update-6r96b\" (UID: \"1bffef94-e4a5-4fec-9fed-5199d1eb52e3\") " pod="openstack/cinder-73d7-account-create-update-6r96b" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.669191 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjnzc\" (UniqueName: \"kubernetes.io/projected/1bffef94-e4a5-4fec-9fed-5199d1eb52e3-kube-api-access-mjnzc\") pod 
\"cinder-73d7-account-create-update-6r96b\" (UID: \"1bffef94-e4a5-4fec-9fed-5199d1eb52e3\") " pod="openstack/cinder-73d7-account-create-update-6r96b" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.669685 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bffef94-e4a5-4fec-9fed-5199d1eb52e3-operator-scripts\") pod \"cinder-73d7-account-create-update-6r96b\" (UID: \"1bffef94-e4a5-4fec-9fed-5199d1eb52e3\") " pod="openstack/cinder-73d7-account-create-update-6r96b" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.691070 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjnzc\" (UniqueName: \"kubernetes.io/projected/1bffef94-e4a5-4fec-9fed-5199d1eb52e3-kube-api-access-mjnzc\") pod \"cinder-73d7-account-create-update-6r96b\" (UID: \"1bffef94-e4a5-4fec-9fed-5199d1eb52e3\") " pod="openstack/cinder-73d7-account-create-update-6r96b" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.760837 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-swcpn" Jan 09 00:56:32 crc kubenswrapper[4945]: I0109 00:56:32.825876 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-73d7-account-create-update-6r96b" Jan 09 00:56:33 crc kubenswrapper[4945]: I0109 00:56:33.198037 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-swcpn"] Jan 09 00:56:33 crc kubenswrapper[4945]: W0109 00:56:33.203253 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16705031_b599_4f15_91f2_5258e613426e.slice/crio-18f1be2b032acaa145c449b08819da0308bc872c0a77312131ee73313f610623 WatchSource:0}: Error finding container 18f1be2b032acaa145c449b08819da0308bc872c0a77312131ee73313f610623: Status 404 returned error can't find the container with id 18f1be2b032acaa145c449b08819da0308bc872c0a77312131ee73313f610623 Jan 09 00:56:33 crc kubenswrapper[4945]: W0109 00:56:33.325664 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bffef94_e4a5_4fec_9fed_5199d1eb52e3.slice/crio-959eaecde6e31f02094b23cd0528bca598efc5ed00065b837471c07e4fea8134 WatchSource:0}: Error finding container 959eaecde6e31f02094b23cd0528bca598efc5ed00065b837471c07e4fea8134: Status 404 returned error can't find the container with id 959eaecde6e31f02094b23cd0528bca598efc5ed00065b837471c07e4fea8134 Jan 09 00:56:33 crc kubenswrapper[4945]: I0109 00:56:33.323208 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-73d7-account-create-update-6r96b"] Jan 09 00:56:33 crc kubenswrapper[4945]: I0109 00:56:33.514531 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-73d7-account-create-update-6r96b" event={"ID":"1bffef94-e4a5-4fec-9fed-5199d1eb52e3","Type":"ContainerStarted","Data":"c168a479d6a02854dfde8d1ce4fdb7fe1e298b6b14e6c602536f2626a9483f39"} Jan 09 00:56:33 crc kubenswrapper[4945]: I0109 00:56:33.514587 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-73d7-account-create-update-6r96b" event={"ID":"1bffef94-e4a5-4fec-9fed-5199d1eb52e3","Type":"ContainerStarted","Data":"959eaecde6e31f02094b23cd0528bca598efc5ed00065b837471c07e4fea8134"} Jan 09 00:56:33 crc kubenswrapper[4945]: I0109 00:56:33.517496 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-create-swcpn" event={"ID":"16705031-b599-4f15-91f2-5258e613426e","Type":"ContainerStarted","Data":"91eafc14becc9e8efd3e606ce5a4a22cefdddf74ae934e452d5be8c5ba811203"} Jan 09 00:56:33 crc kubenswrapper[4945]: I0109 00:56:33.517580 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-swcpn" event={"ID":"16705031-b599-4f15-91f2-5258e613426e","Type":"ContainerStarted","Data":"18f1be2b032acaa145c449b08819da0308bc872c0a77312131ee73313f610623"} Jan 09 00:56:33 crc kubenswrapper[4945]: I0109 00:56:33.537101 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-73d7-account-create-update-6r96b" podStartSLOduration=1.5370815690000001 podStartE2EDuration="1.537081569s" podCreationTimestamp="2026-01-09 00:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:56:33.535439798 +0000 UTC m=+6063.846598754" watchObservedRunningTime="2026-01-09 00:56:33.537081569 +0000 UTC m=+6063.848240515" Jan 09 00:56:33 crc kubenswrapper[4945]: I0109 00:56:33.558678 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-swcpn" podStartSLOduration=1.55866013 podStartE2EDuration="1.55866013s" podCreationTimestamp="2026-01-09 00:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:56:33.55417733 +0000 UTC m=+6063.865336276" watchObservedRunningTime="2026-01-09 00:56:33.55866013 +0000 UTC m=+6063.869819076" Jan 09 00:56:34 crc kubenswrapper[4945]: I0109 00:56:34.528647 4945 generic.go:334] "Generic (PLEG): container finished" podID="16705031-b599-4f15-91f2-5258e613426e" containerID="91eafc14becc9e8efd3e606ce5a4a22cefdddf74ae934e452d5be8c5ba811203" exitCode=0 Jan 09 00:56:34 crc kubenswrapper[4945]: I0109 00:56:34.528767 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-swcpn" event={"ID":"16705031-b599-4f15-91f2-5258e613426e","Type":"ContainerDied","Data":"91eafc14becc9e8efd3e606ce5a4a22cefdddf74ae934e452d5be8c5ba811203"} Jan 09 00:56:34 crc kubenswrapper[4945]: I0109 00:56:34.531689 4945 generic.go:334] "Generic (PLEG): container finished" podID="1bffef94-e4a5-4fec-9fed-5199d1eb52e3" containerID="c168a479d6a02854dfde8d1ce4fdb7fe1e298b6b14e6c602536f2626a9483f39" exitCode=0 Jan 09 00:56:34 crc kubenswrapper[4945]: I0109 00:56:34.531720 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-73d7-account-create-update-6r96b" event={"ID":"1bffef94-e4a5-4fec-9fed-5199d1eb52e3","Type":"ContainerDied","Data":"c168a479d6a02854dfde8d1ce4fdb7fe1e298b6b14e6c602536f2626a9483f39"} Jan 09 00:56:35 crc kubenswrapper[4945]: I0109 00:56:35.921580 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-73d7-account-create-update-6r96b" Jan 09 00:56:35 crc kubenswrapper[4945]: I0109 00:56:35.932837 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-swcpn" Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.042886 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16705031-b599-4f15-91f2-5258e613426e-operator-scripts\") pod \"16705031-b599-4f15-91f2-5258e613426e\" (UID: \"16705031-b599-4f15-91f2-5258e613426e\") " Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.042953 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp8t4\" (UniqueName: \"kubernetes.io/projected/16705031-b599-4f15-91f2-5258e613426e-kube-api-access-vp8t4\") pod \"16705031-b599-4f15-91f2-5258e613426e\" (UID: \"16705031-b599-4f15-91f2-5258e613426e\") " Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.043147 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bffef94-e4a5-4fec-9fed-5199d1eb52e3-operator-scripts\") pod \"1bffef94-e4a5-4fec-9fed-5199d1eb52e3\" (UID: \"1bffef94-e4a5-4fec-9fed-5199d1eb52e3\") " Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.043176 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjnzc\" (UniqueName: \"kubernetes.io/projected/1bffef94-e4a5-4fec-9fed-5199d1eb52e3-kube-api-access-mjnzc\") pod \"1bffef94-e4a5-4fec-9fed-5199d1eb52e3\" (UID: \"1bffef94-e4a5-4fec-9fed-5199d1eb52e3\") " Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.043933 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16705031-b599-4f15-91f2-5258e613426e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "16705031-b599-4f15-91f2-5258e613426e" (UID: "16705031-b599-4f15-91f2-5258e613426e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.044413 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bffef94-e4a5-4fec-9fed-5199d1eb52e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1bffef94-e4a5-4fec-9fed-5199d1eb52e3" (UID: "1bffef94-e4a5-4fec-9fed-5199d1eb52e3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.050233 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16705031-b599-4f15-91f2-5258e613426e-kube-api-access-vp8t4" (OuterVolumeSpecName: "kube-api-access-vp8t4") pod "16705031-b599-4f15-91f2-5258e613426e" (UID: "16705031-b599-4f15-91f2-5258e613426e"). InnerVolumeSpecName "kube-api-access-vp8t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.050944 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bffef94-e4a5-4fec-9fed-5199d1eb52e3-kube-api-access-mjnzc" (OuterVolumeSpecName: "kube-api-access-mjnzc") pod "1bffef94-e4a5-4fec-9fed-5199d1eb52e3" (UID: "1bffef94-e4a5-4fec-9fed-5199d1eb52e3"). InnerVolumeSpecName "kube-api-access-mjnzc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.145842 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16705031-b599-4f15-91f2-5258e613426e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.146404 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp8t4\" (UniqueName: \"kubernetes.io/projected/16705031-b599-4f15-91f2-5258e613426e-kube-api-access-vp8t4\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.146425 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bffef94-e4a5-4fec-9fed-5199d1eb52e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.146437 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjnzc\" (UniqueName: \"kubernetes.io/projected/1bffef94-e4a5-4fec-9fed-5199d1eb52e3-kube-api-access-mjnzc\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.548579 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-73d7-account-create-update-6r96b" event={"ID":"1bffef94-e4a5-4fec-9fed-5199d1eb52e3","Type":"ContainerDied","Data":"959eaecde6e31f02094b23cd0528bca598efc5ed00065b837471c07e4fea8134"} Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.548629 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="959eaecde6e31f02094b23cd0528bca598efc5ed00065b837471c07e4fea8134" Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.548598 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-73d7-account-create-update-6r96b" Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.549961 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-swcpn" event={"ID":"16705031-b599-4f15-91f2-5258e613426e","Type":"ContainerDied","Data":"18f1be2b032acaa145c449b08819da0308bc872c0a77312131ee73313f610623"} Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.549984 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18f1be2b032acaa145c449b08819da0308bc872c0a77312131ee73313f610623" Jan 09 00:56:36 crc kubenswrapper[4945]: I0109 00:56:36.550045 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-swcpn" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.817413 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-lbcpp"] Jan 09 00:56:37 crc kubenswrapper[4945]: E0109 00:56:37.817956 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bffef94-e4a5-4fec-9fed-5199d1eb52e3" containerName="mariadb-account-create-update" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.817976 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bffef94-e4a5-4fec-9fed-5199d1eb52e3" containerName="mariadb-account-create-update" Jan 09 00:56:37 crc kubenswrapper[4945]: E0109 00:56:37.818024 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16705031-b599-4f15-91f2-5258e613426e" containerName="mariadb-database-create" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.818036 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="16705031-b599-4f15-91f2-5258e613426e" containerName="mariadb-database-create" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.818240 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bffef94-e4a5-4fec-9fed-5199d1eb52e3" containerName="mariadb-account-create-update" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.818260 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="16705031-b599-4f15-91f2-5258e613426e" containerName="mariadb-database-create" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.819125 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.821612 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.821747 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qmwlp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.821747 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.830020 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-lbcpp"] Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.876442 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c6d1165-979d-43be-8b0b-76917ab91e5e-etc-machine-id\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.876515 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-config-data\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.876542 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-combined-ca-bundle\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.876592 4945 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4ssz\" (UniqueName: \"kubernetes.io/projected/6c6d1165-979d-43be-8b0b-76917ab91e5e-kube-api-access-r4ssz\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.876789 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-scripts\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.876849 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-db-sync-config-data\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.978730 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-scripts\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.979077 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-db-sync-config-data\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.979126 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c6d1165-979d-43be-8b0b-76917ab91e5e-etc-machine-id\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.979166 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-config-data\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.979182 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-combined-ca-bundle\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.979219 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4ssz\" (UniqueName: \"kubernetes.io/projected/6c6d1165-979d-43be-8b0b-76917ab91e5e-kube-api-access-r4ssz\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.979568 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/6c6d1165-979d-43be-8b0b-76917ab91e5e-etc-machine-id\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.983580 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-scripts\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.983645 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-combined-ca-bundle\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.984198 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-config-data\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:37 crc kubenswrapper[4945]: I0109 00:56:37.990594 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-db-sync-config-data\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:38 crc kubenswrapper[4945]: I0109 00:56:38.001595 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4ssz\" (UniqueName: \"kubernetes.io/projected/6c6d1165-979d-43be-8b0b-76917ab91e5e-kube-api-access-r4ssz\") pod \"cinder-db-sync-lbcpp\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:38 crc kubenswrapper[4945]: I0109 00:56:38.140824 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:38 crc kubenswrapper[4945]: I0109 00:56:38.588543 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-lbcpp"] Jan 09 00:56:38 crc kubenswrapper[4945]: W0109 00:56:38.592270 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c6d1165_979d_43be_8b0b_76917ab91e5e.slice/crio-0c140bc4c8e318876a42893224462a36daf8b9b9c8fe1c47d813ea1e94d07f5f WatchSource:0}: Error finding container 0c140bc4c8e318876a42893224462a36daf8b9b9c8fe1c47d813ea1e94d07f5f: Status 404 returned error can't find the container with id 0c140bc4c8e318876a42893224462a36daf8b9b9c8fe1c47d813ea1e94d07f5f Jan 09 00:56:39 crc kubenswrapper[4945]: I0109 00:56:39.574304 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lbcpp" event={"ID":"6c6d1165-979d-43be-8b0b-76917ab91e5e","Type":"ContainerStarted","Data":"1e6e25672e2df6fb6afd22e3fd4a7309944afdd24f8282f65ad2646df5b506e9"} Jan 09 00:56:39 crc kubenswrapper[4945]: I0109 00:56:39.574568 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lbcpp" event={"ID":"6c6d1165-979d-43be-8b0b-76917ab91e5e","Type":"ContainerStarted","Data":"0c140bc4c8e318876a42893224462a36daf8b9b9c8fe1c47d813ea1e94d07f5f"} Jan 09 00:56:39 crc kubenswrapper[4945]: I0109 00:56:39.597157 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-lbcpp" podStartSLOduration=2.597128701 podStartE2EDuration="2.597128701s" podCreationTimestamp="2026-01-09 00:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:56:39.588720875 +0000 UTC m=+6069.899879821" watchObservedRunningTime="2026-01-09 00:56:39.597128701 +0000 UTC m=+6069.908287647" Jan 09 00:56:42 crc kubenswrapper[4945]: I0109 00:56:42.607097 4945 generic.go:334] "Generic (PLEG): container finished" podID="6c6d1165-979d-43be-8b0b-76917ab91e5e" containerID="1e6e25672e2df6fb6afd22e3fd4a7309944afdd24f8282f65ad2646df5b506e9" exitCode=0 Jan 09 00:56:42 crc kubenswrapper[4945]: I0109 00:56:42.607191 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lbcpp" event={"ID":"6c6d1165-979d-43be-8b0b-76917ab91e5e","Type":"ContainerDied","Data":"1e6e25672e2df6fb6afd22e3fd4a7309944afdd24f8282f65ad2646df5b506e9"} Jan 09 00:56:43 crc kubenswrapper[4945]: I0109 00:56:43.970502 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:43 crc kubenswrapper[4945]: I0109 00:56:43.997022 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c6d1165-979d-43be-8b0b-76917ab91e5e-etc-machine-id\") pod \"6c6d1165-979d-43be-8b0b-76917ab91e5e\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " Jan 09 00:56:43 crc kubenswrapper[4945]: I0109 00:56:43.997107 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4ssz\" (UniqueName: \"kubernetes.io/projected/6c6d1165-979d-43be-8b0b-76917ab91e5e-kube-api-access-r4ssz\") pod \"6c6d1165-979d-43be-8b0b-76917ab91e5e\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " Jan 09 00:56:43 crc kubenswrapper[4945]: I0109 00:56:43.997128 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c6d1165-979d-43be-8b0b-76917ab91e5e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6c6d1165-979d-43be-8b0b-76917ab91e5e" (UID: "6c6d1165-979d-43be-8b0b-76917ab91e5e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 00:56:43 crc kubenswrapper[4945]: I0109 00:56:43.997268 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-scripts\") pod \"6c6d1165-979d-43be-8b0b-76917ab91e5e\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " Jan 09 00:56:43 crc kubenswrapper[4945]: I0109 00:56:43.997311 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-combined-ca-bundle\") pod \"6c6d1165-979d-43be-8b0b-76917ab91e5e\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " Jan 09 00:56:43 crc kubenswrapper[4945]: I0109 00:56:43.997354 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-db-sync-config-data\") pod \"6c6d1165-979d-43be-8b0b-76917ab91e5e\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " Jan 09 00:56:43 crc kubenswrapper[4945]: I0109 00:56:43.997380 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-config-data\") pod \"6c6d1165-979d-43be-8b0b-76917ab91e5e\" (UID: \"6c6d1165-979d-43be-8b0b-76917ab91e5e\") " Jan 09 00:56:43 crc kubenswrapper[4945]: I0109 00:56:43.997744 4945 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6c6d1165-979d-43be-8b0b-76917ab91e5e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:44 crc kubenswrapper[4945]: I0109 00:56:44.004629 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6c6d1165-979d-43be-8b0b-76917ab91e5e" (UID: "6c6d1165-979d-43be-8b0b-76917ab91e5e"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:56:44 crc kubenswrapper[4945]: I0109 00:56:44.010954 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-scripts" (OuterVolumeSpecName: "scripts") pod "6c6d1165-979d-43be-8b0b-76917ab91e5e" (UID: "6c6d1165-979d-43be-8b0b-76917ab91e5e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:56:44 crc kubenswrapper[4945]: I0109 00:56:44.015836 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c6d1165-979d-43be-8b0b-76917ab91e5e-kube-api-access-r4ssz" (OuterVolumeSpecName: "kube-api-access-r4ssz") pod "6c6d1165-979d-43be-8b0b-76917ab91e5e" (UID: "6c6d1165-979d-43be-8b0b-76917ab91e5e"). InnerVolumeSpecName "kube-api-access-r4ssz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:56:44 crc kubenswrapper[4945]: I0109 00:56:44.028412 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c6d1165-979d-43be-8b0b-76917ab91e5e" (UID: "6c6d1165-979d-43be-8b0b-76917ab91e5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:56:44 crc kubenswrapper[4945]: I0109 00:56:44.060804 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-config-data" (OuterVolumeSpecName: "config-data") pod "6c6d1165-979d-43be-8b0b-76917ab91e5e" (UID: "6c6d1165-979d-43be-8b0b-76917ab91e5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:56:44 crc kubenswrapper[4945]: I0109 00:56:44.110965 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:44 crc kubenswrapper[4945]: I0109 00:56:44.111015 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:44 crc kubenswrapper[4945]: I0109 00:56:44.111030 4945 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:44 crc kubenswrapper[4945]: I0109 00:56:44.111045 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c6d1165-979d-43be-8b0b-76917ab91e5e-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:44 crc kubenswrapper[4945]: I0109 00:56:44.111057 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4ssz\" (UniqueName: \"kubernetes.io/projected/6c6d1165-979d-43be-8b0b-76917ab91e5e-kube-api-access-r4ssz\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:44 crc kubenswrapper[4945]: I0109 00:56:44.625546 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-lbcpp" event={"ID":"6c6d1165-979d-43be-8b0b-76917ab91e5e","Type":"ContainerDied","Data":"0c140bc4c8e318876a42893224462a36daf8b9b9c8fe1c47d813ea1e94d07f5f"} Jan 09 00:56:44 crc kubenswrapper[4945]: I0109 00:56:44.626220 4945 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="0c140bc4c8e318876a42893224462a36daf8b9b9c8fe1c47d813ea1e94d07f5f" Jan 09 00:56:44 crc kubenswrapper[4945]: I0109 00:56:44.625624 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-lbcpp" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.011247 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65dbcdd779-zl7sv"] Jan 09 00:56:45 crc kubenswrapper[4945]: E0109 00:56:45.011747 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c6d1165-979d-43be-8b0b-76917ab91e5e" containerName="cinder-db-sync" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.011769 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c6d1165-979d-43be-8b0b-76917ab91e5e" containerName="cinder-db-sync" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.012037 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c6d1165-979d-43be-8b0b-76917ab91e5e" containerName="cinder-db-sync" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.013363 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.032335 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65dbcdd779-zl7sv"] Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.140665 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-ovsdbserver-nb\") pod \"dnsmasq-dns-65dbcdd779-zl7sv\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") " pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.140826 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw8tn\" (UniqueName: \"kubernetes.io/projected/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-kube-api-access-rw8tn\") pod \"dnsmasq-dns-65dbcdd779-zl7sv\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") " pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.140874 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-dns-svc\") pod \"dnsmasq-dns-65dbcdd779-zl7sv\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") " pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.140893 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-ovsdbserver-sb\") pod \"dnsmasq-dns-65dbcdd779-zl7sv\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") " pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.140938 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-config\") pod \"dnsmasq-dns-65dbcdd779-zl7sv\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") " pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.168918 4945 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-api-0"] Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.171020 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.178333 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.178697 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.182188 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qmwlp" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.186223 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.187508 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.244423 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-config-data\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.244502 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw8tn\" (UniqueName: \"kubernetes.io/projected/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-kube-api-access-rw8tn\") pod \"dnsmasq-dns-65dbcdd779-zl7sv\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") " pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.244555 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-dns-svc\") pod \"dnsmasq-dns-65dbcdd779-zl7sv\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") " pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.244574 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-ovsdbserver-sb\") pod \"dnsmasq-dns-65dbcdd779-zl7sv\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") " pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.244610 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-config-data-custom\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.244636 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.244661 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-config\") pod 
\"dnsmasq-dns-65dbcdd779-zl7sv\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") " pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.244690 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-scripts\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.244726 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-ovsdbserver-nb\") pod \"dnsmasq-dns-65dbcdd779-zl7sv\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") " pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.244767 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47g8k\" (UniqueName: \"kubernetes.io/projected/e12d04f7-b525-4969-9683-f2dc3e7a6466-kube-api-access-47g8k\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.244793 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e12d04f7-b525-4969-9683-f2dc3e7a6466-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.244814 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e12d04f7-b525-4969-9683-f2dc3e7a6466-logs\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.247268 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-dns-svc\") pod \"dnsmasq-dns-65dbcdd779-zl7sv\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") " pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.247925 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-config\") pod \"dnsmasq-dns-65dbcdd779-zl7sv\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") " pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.248229 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-ovsdbserver-nb\") pod \"dnsmasq-dns-65dbcdd779-zl7sv\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") " pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.248530 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-ovsdbserver-sb\") pod \"dnsmasq-dns-65dbcdd779-zl7sv\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") " pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.265522 
4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw8tn\" (UniqueName: \"kubernetes.io/projected/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-kube-api-access-rw8tn\") pod \"dnsmasq-dns-65dbcdd779-zl7sv\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") " pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.346016 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e12d04f7-b525-4969-9683-f2dc3e7a6466-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.346074 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e12d04f7-b525-4969-9683-f2dc3e7a6466-logs\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.346115 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-config-data\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.346184 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-config-data-custom\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.346210 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.346202 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e12d04f7-b525-4969-9683-f2dc3e7a6466-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.346240 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-scripts\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.346429 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47g8k\" (UniqueName: \"kubernetes.io/projected/e12d04f7-b525-4969-9683-f2dc3e7a6466-kube-api-access-47g8k\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.347216 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e12d04f7-b525-4969-9683-f2dc3e7a6466-logs\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.350852 4945 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-config-data-custom\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.351040 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-scripts\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.351137 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.352212 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-config-data\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.370480 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.373917 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47g8k\" (UniqueName: \"kubernetes.io/projected/e12d04f7-b525-4969-9683-f2dc3e7a6466-kube-api-access-47g8k\") pod \"cinder-api-0\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.502923 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 09 00:56:45 crc kubenswrapper[4945]: I0109 00:56:45.698331 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65dbcdd779-zl7sv"] Jan 09 00:56:46 crc kubenswrapper[4945]: I0109 00:56:46.070802 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 09 00:56:46 crc kubenswrapper[4945]: W0109 00:56:46.074816 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode12d04f7_b525_4969_9683_f2dc3e7a6466.slice/crio-3485d7d1cc9cf463fb0131f66ead9864de7438bdf739268eb503361407fc1abf WatchSource:0}: Error finding container 3485d7d1cc9cf463fb0131f66ead9864de7438bdf739268eb503361407fc1abf: Status 404 returned error can't find the container with id 3485d7d1cc9cf463fb0131f66ead9864de7438bdf739268eb503361407fc1abf Jan 09 00:56:46 crc kubenswrapper[4945]: I0109 00:56:46.662032 4945 generic.go:334] "Generic (PLEG): container finished" podID="ab6b5adb-52f7-4a02-bf71-07b1f8ae0902" containerID="c8960da6e46e713a1bd01ef5c271429f3e230cfff53463715c38cd883b320261" exitCode=0 Jan 09 00:56:46 crc kubenswrapper[4945]: I0109 00:56:46.662147 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" event={"ID":"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902","Type":"ContainerDied","Data":"c8960da6e46e713a1bd01ef5c271429f3e230cfff53463715c38cd883b320261"} Jan 09 00:56:46 crc kubenswrapper[4945]: I0109 00:56:46.662356 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" event={"ID":"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902","Type":"ContainerStarted","Data":"2e6bad86ce051ae0953b402d87ee8a93c4f01cc45d4fd119148b440e52c06682"} Jan 09 00:56:46 crc kubenswrapper[4945]: I0109 00:56:46.676962 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e12d04f7-b525-4969-9683-f2dc3e7a6466","Type":"ContainerStarted","Data":"3485d7d1cc9cf463fb0131f66ead9864de7438bdf739268eb503361407fc1abf"} Jan 09 00:56:47 crc kubenswrapper[4945]: I0109 00:56:47.689180 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e12d04f7-b525-4969-9683-f2dc3e7a6466","Type":"ContainerStarted","Data":"4aded64ae4c74c78824f3d096f921eb50e9d817b9d4cf65bc38cfc8b25016b02"} Jan 09 00:56:47 crc kubenswrapper[4945]: I0109 00:56:47.689855 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e12d04f7-b525-4969-9683-f2dc3e7a6466","Type":"ContainerStarted","Data":"3314717e5d65ec3fbb9f3a33862e6398e630a6a50957e0e3c029a0f20dc03902"} Jan 09 00:56:47 crc kubenswrapper[4945]: I0109 00:56:47.689880 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 09 00:56:47 crc kubenswrapper[4945]: I0109 00:56:47.693482 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" event={"ID":"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902","Type":"ContainerStarted","Data":"bf18348951b20e24bf96c447bbabd7f41d7ebb9467abfef5283d14b4f128edd4"} Jan 09 00:56:47 crc kubenswrapper[4945]: I0109 00:56:47.694359 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:47 crc kubenswrapper[4945]: I0109 00:56:47.716604 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.716572062 
podStartE2EDuration="2.716572062s" podCreationTimestamp="2026-01-09 00:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:56:47.713376353 +0000 UTC m=+6078.024535309" watchObservedRunningTime="2026-01-09 00:56:47.716572062 +0000 UTC m=+6078.027731008" Jan 09 00:56:47 crc kubenswrapper[4945]: I0109 00:56:47.737200 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" podStartSLOduration=3.737173599 podStartE2EDuration="3.737173599s" podCreationTimestamp="2026-01-09 00:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:56:47.733257452 +0000 UTC m=+6078.044416418" watchObservedRunningTime="2026-01-09 00:56:47.737173599 +0000 UTC m=+6078.048332545" Jan 09 00:56:55 crc kubenswrapper[4945]: I0109 00:56:55.373339 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 00:56:55 crc kubenswrapper[4945]: I0109 00:56:55.440572 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74bcb569c7-spwnr"] Jan 09 00:56:55 crc kubenswrapper[4945]: I0109 00:56:55.440832 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" podUID="1f585148-b171-42e4-8278-be3ac570c103" containerName="dnsmasq-dns" containerID="cri-o://bfae634e9f7d8fc78c1edbdf59883ef87324a85024168a7d03fd8ff519eab291" gracePeriod=10 Jan 09 00:56:55 crc kubenswrapper[4945]: I0109 00:56:55.778823 4945 generic.go:334] "Generic (PLEG): container finished" podID="1f585148-b171-42e4-8278-be3ac570c103" containerID="bfae634e9f7d8fc78c1edbdf59883ef87324a85024168a7d03fd8ff519eab291" exitCode=0 Jan 09 00:56:55 crc kubenswrapper[4945]: I0109 00:56:55.778918 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" event={"ID":"1f585148-b171-42e4-8278-be3ac570c103","Type":"ContainerDied","Data":"bfae634e9f7d8fc78c1edbdf59883ef87324a85024168a7d03fd8ff519eab291"} Jan 09 00:56:55 crc kubenswrapper[4945]: I0109 00:56:55.928576 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.062966 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-ovsdbserver-nb\") pod \"1f585148-b171-42e4-8278-be3ac570c103\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.063225 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-config\") pod \"1f585148-b171-42e4-8278-be3ac570c103\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.063270 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-dns-svc\") pod \"1f585148-b171-42e4-8278-be3ac570c103\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.063368 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-ovsdbserver-sb\") pod \"1f585148-b171-42e4-8278-be3ac570c103\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.063423 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-578q5\" (UniqueName: \"kubernetes.io/projected/1f585148-b171-42e4-8278-be3ac570c103-kube-api-access-578q5\") pod \"1f585148-b171-42e4-8278-be3ac570c103\" (UID: \"1f585148-b171-42e4-8278-be3ac570c103\") " Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.077867 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f585148-b171-42e4-8278-be3ac570c103-kube-api-access-578q5" (OuterVolumeSpecName: "kube-api-access-578q5") pod "1f585148-b171-42e4-8278-be3ac570c103" (UID: "1f585148-b171-42e4-8278-be3ac570c103"). InnerVolumeSpecName "kube-api-access-578q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.133783 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1f585148-b171-42e4-8278-be3ac570c103" (UID: "1f585148-b171-42e4-8278-be3ac570c103"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.138099 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1f585148-b171-42e4-8278-be3ac570c103" (UID: "1f585148-b171-42e4-8278-be3ac570c103"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.141517 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-config" (OuterVolumeSpecName: "config") pod "1f585148-b171-42e4-8278-be3ac570c103" (UID: "1f585148-b171-42e4-8278-be3ac570c103"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.150822 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1f585148-b171-42e4-8278-be3ac570c103" (UID: "1f585148-b171-42e4-8278-be3ac570c103"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.165385 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.165419 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-578q5\" (UniqueName: \"kubernetes.io/projected/1f585148-b171-42e4-8278-be3ac570c103-kube-api-access-578q5\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.165428 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.165437 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-config\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.165446 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f585148-b171-42e4-8278-be3ac570c103-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.705597 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.705931 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="0300748c-8b02-462f-8601-89536212a820" containerName="nova-cell1-conductor-conductor" containerID="cri-o://b6f7476700eb281ce9568d0797af82b3e126e90a10235862a89854829d11926f" gracePeriod=30 Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.720180 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.720545 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3572dc12-51d9-4923-ae10-b803485aa49b" containerName="nova-metadata-log" containerID="cri-o://566b4942bf60106a263cd33197255f513717019b4725b00b9bf9222cc90505a6" gracePeriod=30 Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.720615 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3572dc12-51d9-4923-ae10-b803485aa49b" containerName="nova-metadata-metadata" containerID="cri-o://f46b50520b36bfe2e5dcec7684cee9c20b6ac2bcba9178c7a48cbe1ab967285a" gracePeriod=30 Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.732853 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.733405 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" containerName="nova-api-log" containerID="cri-o://8e2e016877059bd5020fc6726e36fce6c91890aa2564c54f6d99324288d9fae7" gracePeriod=30 Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.733463 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" containerName="nova-api-api" containerID="cri-o://664b755a44727ae00b8e44a22f3782602e7fd09dc5e16f41b6e133a3b1d13dff" gracePeriod=30 Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.750446 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.750752 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="86b1035d-f717-456a-9a37-a7930dfe5a22" containerName="nova-scheduler-scheduler" containerID="cri-o://4448b4046d667955af74c666640c7610912a1dede5cd19d47d15b93e141a2253" gracePeriod=30 Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.763919 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.764239 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="32828769-35bc-4ff6-b715-62d67c23e6e2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://76f7fdbd951fc66f07efae62e5159d2fa74bf0d198e656a2be59b695229c2b53" gracePeriod=30 Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.790497 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" event={"ID":"1f585148-b171-42e4-8278-be3ac570c103","Type":"ContainerDied","Data":"1fbfe6c5223a9bcdb29383663ac06dc0f8cb8034512ead148cc4ffeb001a98d7"} Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.790551 4945 scope.go:117] "RemoveContainer" containerID="bfae634e9f7d8fc78c1edbdf59883ef87324a85024168a7d03fd8ff519eab291" Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.790683 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74bcb569c7-spwnr" Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.815118 4945 scope.go:117] "RemoveContainer" containerID="10a63d09f7d28b40d533e2383e6b0fd72ddb646aab421c23d8771ec24ab4de82" Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.854235 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74bcb569c7-spwnr"] Jan 09 00:56:56 crc kubenswrapper[4945]: I0109 00:56:56.875457 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74bcb569c7-spwnr"] Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.148162 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="32828769-35bc-4ff6-b715-62d67c23e6e2" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"http://10.217.1.69:6080/vnc_lite.html\": dial tcp 10.217.1.69:6080: connect: connection refused" Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.644530 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.654538 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.793210 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfqbb\" (UniqueName: \"kubernetes.io/projected/32828769-35bc-4ff6-b715-62d67c23e6e2-kube-api-access-hfqbb\") pod \"32828769-35bc-4ff6-b715-62d67c23e6e2\" (UID: \"32828769-35bc-4ff6-b715-62d67c23e6e2\") " Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.793282 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32828769-35bc-4ff6-b715-62d67c23e6e2-config-data\") pod \"32828769-35bc-4ff6-b715-62d67c23e6e2\" (UID: \"32828769-35bc-4ff6-b715-62d67c23e6e2\") " Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.793340 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32828769-35bc-4ff6-b715-62d67c23e6e2-combined-ca-bundle\") pod \"32828769-35bc-4ff6-b715-62d67c23e6e2\" (UID: \"32828769-35bc-4ff6-b715-62d67c23e6e2\") " Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.802825 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32828769-35bc-4ff6-b715-62d67c23e6e2-kube-api-access-hfqbb" (OuterVolumeSpecName: "kube-api-access-hfqbb") pod "32828769-35bc-4ff6-b715-62d67c23e6e2" (UID: "32828769-35bc-4ff6-b715-62d67c23e6e2"). InnerVolumeSpecName "kube-api-access-hfqbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.814806 4945 generic.go:334] "Generic (PLEG): container finished" podID="32828769-35bc-4ff6-b715-62d67c23e6e2" containerID="76f7fdbd951fc66f07efae62e5159d2fa74bf0d198e656a2be59b695229c2b53" exitCode=0 Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.814919 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"32828769-35bc-4ff6-b715-62d67c23e6e2","Type":"ContainerDied","Data":"76f7fdbd951fc66f07efae62e5159d2fa74bf0d198e656a2be59b695229c2b53"} Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.814950 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"32828769-35bc-4ff6-b715-62d67c23e6e2","Type":"ContainerDied","Data":"366589bf7bb48fc4f5f22e30c16b272436bf85fce53bd98a831b39428ae83ec4"} Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.814967 4945 scope.go:117] "RemoveContainer" containerID="76f7fdbd951fc66f07efae62e5159d2fa74bf0d198e656a2be59b695229c2b53" Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.815115 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.827243 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32828769-35bc-4ff6-b715-62d67c23e6e2-config-data" (OuterVolumeSpecName: "config-data") pod "32828769-35bc-4ff6-b715-62d67c23e6e2" (UID: "32828769-35bc-4ff6-b715-62d67c23e6e2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.827254 4945 generic.go:334] "Generic (PLEG): container finished" podID="86b1035d-f717-456a-9a37-a7930dfe5a22" containerID="4448b4046d667955af74c666640c7610912a1dede5cd19d47d15b93e141a2253" exitCode=0 Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.827330 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"86b1035d-f717-456a-9a37-a7930dfe5a22","Type":"ContainerDied","Data":"4448b4046d667955af74c666640c7610912a1dede5cd19d47d15b93e141a2253"} Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.833230 4945 generic.go:334] "Generic (PLEG): container finished" podID="3572dc12-51d9-4923-ae10-b803485aa49b" containerID="566b4942bf60106a263cd33197255f513717019b4725b00b9bf9222cc90505a6" exitCode=143 Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.833320 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3572dc12-51d9-4923-ae10-b803485aa49b","Type":"ContainerDied","Data":"566b4942bf60106a263cd33197255f513717019b4725b00b9bf9222cc90505a6"} Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.841388 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32828769-35bc-4ff6-b715-62d67c23e6e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32828769-35bc-4ff6-b715-62d67c23e6e2" (UID: "32828769-35bc-4ff6-b715-62d67c23e6e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.849352 4945 generic.go:334] "Generic (PLEG): container finished" podID="e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" containerID="8e2e016877059bd5020fc6726e36fce6c91890aa2564c54f6d99324288d9fae7" exitCode=143 Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.849403 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7","Type":"ContainerDied","Data":"8e2e016877059bd5020fc6726e36fce6c91890aa2564c54f6d99324288d9fae7"} Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.898655 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfqbb\" (UniqueName: \"kubernetes.io/projected/32828769-35bc-4ff6-b715-62d67c23e6e2-kube-api-access-hfqbb\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.898685 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32828769-35bc-4ff6-b715-62d67c23e6e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.898695 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32828769-35bc-4ff6-b715-62d67c23e6e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.908663 4945 scope.go:117] "RemoveContainer" containerID="76f7fdbd951fc66f07efae62e5159d2fa74bf0d198e656a2be59b695229c2b53" Jan 09 00:56:57 crc kubenswrapper[4945]: E0109 00:56:57.909073 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76f7fdbd951fc66f07efae62e5159d2fa74bf0d198e656a2be59b695229c2b53\": container with ID starting with 76f7fdbd951fc66f07efae62e5159d2fa74bf0d198e656a2be59b695229c2b53 not found: ID does not exist" 
containerID="76f7fdbd951fc66f07efae62e5159d2fa74bf0d198e656a2be59b695229c2b53" Jan 09 00:56:57 crc kubenswrapper[4945]: I0109 00:56:57.909133 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f7fdbd951fc66f07efae62e5159d2fa74bf0d198e656a2be59b695229c2b53"} err="failed to get container status \"76f7fdbd951fc66f07efae62e5159d2fa74bf0d198e656a2be59b695229c2b53\": rpc error: code = NotFound desc = could not find container \"76f7fdbd951fc66f07efae62e5159d2fa74bf0d198e656a2be59b695229c2b53\": container with ID starting with 76f7fdbd951fc66f07efae62e5159d2fa74bf0d198e656a2be59b695229c2b53 not found: ID does not exist" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.019854 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f585148-b171-42e4-8278-be3ac570c103" path="/var/lib/kubelet/pods/1f585148-b171-42e4-8278-be3ac570c103/volumes" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.145290 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.146130 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.155662 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.200874 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 00:56:58 crc kubenswrapper[4945]: E0109 00:56:58.201303 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f585148-b171-42e4-8278-be3ac570c103" containerName="dnsmasq-dns" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.201321 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f585148-b171-42e4-8278-be3ac570c103" containerName="dnsmasq-dns" Jan 09 00:56:58 crc kubenswrapper[4945]: E0109 00:56:58.201333 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f585148-b171-42e4-8278-be3ac570c103" containerName="init" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.201340 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f585148-b171-42e4-8278-be3ac570c103" containerName="init" Jan 09 00:56:58 crc kubenswrapper[4945]: E0109 00:56:58.201350 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32828769-35bc-4ff6-b715-62d67c23e6e2" containerName="nova-cell1-novncproxy-novncproxy" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.201356 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="32828769-35bc-4ff6-b715-62d67c23e6e2" containerName="nova-cell1-novncproxy-novncproxy" Jan 09 00:56:58 crc kubenswrapper[4945]: E0109 00:56:58.201378 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b1035d-f717-456a-9a37-a7930dfe5a22" containerName="nova-scheduler-scheduler" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.201384 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b1035d-f717-456a-9a37-a7930dfe5a22" containerName="nova-scheduler-scheduler" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.201556 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="86b1035d-f717-456a-9a37-a7930dfe5a22" containerName="nova-scheduler-scheduler" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.201571 4945 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="32828769-35bc-4ff6-b715-62d67c23e6e2" containerName="nova-cell1-novncproxy-novncproxy" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.201585 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f585148-b171-42e4-8278-be3ac570c103" containerName="dnsmasq-dns" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.203966 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.211710 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86b1035d-f717-456a-9a37-a7930dfe5a22-config-data\") pod \"86b1035d-f717-456a-9a37-a7930dfe5a22\" (UID: \"86b1035d-f717-456a-9a37-a7930dfe5a22\") " Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.211839 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b1035d-f717-456a-9a37-a7930dfe5a22-combined-ca-bundle\") pod \"86b1035d-f717-456a-9a37-a7930dfe5a22\" (UID: \"86b1035d-f717-456a-9a37-a7930dfe5a22\") " Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.211906 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptgck\" (UniqueName: \"kubernetes.io/projected/86b1035d-f717-456a-9a37-a7930dfe5a22-kube-api-access-ptgck\") pod \"86b1035d-f717-456a-9a37-a7930dfe5a22\" (UID: \"86b1035d-f717-456a-9a37-a7930dfe5a22\") " Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.214241 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.217416 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.223573 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b1035d-f717-456a-9a37-a7930dfe5a22-kube-api-access-ptgck" (OuterVolumeSpecName: "kube-api-access-ptgck") pod "86b1035d-f717-456a-9a37-a7930dfe5a22" (UID: "86b1035d-f717-456a-9a37-a7930dfe5a22"). InnerVolumeSpecName "kube-api-access-ptgck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.240850 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b1035d-f717-456a-9a37-a7930dfe5a22-config-data" (OuterVolumeSpecName: "config-data") pod "86b1035d-f717-456a-9a37-a7930dfe5a22" (UID: "86b1035d-f717-456a-9a37-a7930dfe5a22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.248825 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b1035d-f717-456a-9a37-a7930dfe5a22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86b1035d-f717-456a-9a37-a7930dfe5a22" (UID: "86b1035d-f717-456a-9a37-a7930dfe5a22"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.316459 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p5pf\" (UniqueName: \"kubernetes.io/projected/076c2b0b-d880-4de5-8e11-796837092802-kube-api-access-4p5pf\") pod \"nova-cell1-novncproxy-0\" (UID: \"076c2b0b-d880-4de5-8e11-796837092802\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.316530 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/076c2b0b-d880-4de5-8e11-796837092802-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"076c2b0b-d880-4de5-8e11-796837092802\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.316720 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/076c2b0b-d880-4de5-8e11-796837092802-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"076c2b0b-d880-4de5-8e11-796837092802\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.316937 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86b1035d-f717-456a-9a37-a7930dfe5a22-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.316959 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b1035d-f717-456a-9a37-a7930dfe5a22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.316970 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptgck\" (UniqueName: \"kubernetes.io/projected/86b1035d-f717-456a-9a37-a7930dfe5a22-kube-api-access-ptgck\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.424103 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p5pf\" (UniqueName: \"kubernetes.io/projected/076c2b0b-d880-4de5-8e11-796837092802-kube-api-access-4p5pf\") pod \"nova-cell1-novncproxy-0\" (UID: \"076c2b0b-d880-4de5-8e11-796837092802\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.424168 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/076c2b0b-d880-4de5-8e11-796837092802-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"076c2b0b-d880-4de5-8e11-796837092802\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.424253 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/076c2b0b-d880-4de5-8e11-796837092802-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"076c2b0b-d880-4de5-8e11-796837092802\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.428512 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/076c2b0b-d880-4de5-8e11-796837092802-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"076c2b0b-d880-4de5-8e11-796837092802\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.430828 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/076c2b0b-d880-4de5-8e11-796837092802-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"076c2b0b-d880-4de5-8e11-796837092802\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.443481 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p5pf\" (UniqueName: \"kubernetes.io/projected/076c2b0b-d880-4de5-8e11-796837092802-kube-api-access-4p5pf\") pod \"nova-cell1-novncproxy-0\" (UID: \"076c2b0b-d880-4de5-8e11-796837092802\") " pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.608334 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.727765 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.832217 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0300748c-8b02-462f-8601-89536212a820-config-data\") pod \"0300748c-8b02-462f-8601-89536212a820\" (UID: \"0300748c-8b02-462f-8601-89536212a820\") " Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.832401 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0300748c-8b02-462f-8601-89536212a820-combined-ca-bundle\") pod \"0300748c-8b02-462f-8601-89536212a820\" (UID: \"0300748c-8b02-462f-8601-89536212a820\") " Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.832436 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm25j\" (UniqueName: \"kubernetes.io/projected/0300748c-8b02-462f-8601-89536212a820-kube-api-access-vm25j\") pod \"0300748c-8b02-462f-8601-89536212a820\" (UID: \"0300748c-8b02-462f-8601-89536212a820\") " Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.844071 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0300748c-8b02-462f-8601-89536212a820-kube-api-access-vm25j" (OuterVolumeSpecName: "kube-api-access-vm25j") pod "0300748c-8b02-462f-8601-89536212a820" (UID: "0300748c-8b02-462f-8601-89536212a820"). InnerVolumeSpecName "kube-api-access-vm25j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.864577 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0300748c-8b02-462f-8601-89536212a820-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0300748c-8b02-462f-8601-89536212a820" (UID: "0300748c-8b02-462f-8601-89536212a820"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.864786 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.864781 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"86b1035d-f717-456a-9a37-a7930dfe5a22","Type":"ContainerDied","Data":"5d582121f2ffbf223ac52b83e4646528487eb5b57d9055e13336f6b87e752683"} Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.864924 4945 scope.go:117] "RemoveContainer" containerID="4448b4046d667955af74c666640c7610912a1dede5cd19d47d15b93e141a2253" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.868667 4945 generic.go:334] "Generic (PLEG): container finished" podID="0300748c-8b02-462f-8601-89536212a820" containerID="b6f7476700eb281ce9568d0797af82b3e126e90a10235862a89854829d11926f" exitCode=0 Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.868721 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0300748c-8b02-462f-8601-89536212a820","Type":"ContainerDied","Data":"b6f7476700eb281ce9568d0797af82b3e126e90a10235862a89854829d11926f"} Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.868761 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0300748c-8b02-462f-8601-89536212a820","Type":"ContainerDied","Data":"4ee90fd8ac1e4c8d9f4fef937d661a3c21f6b143858782344af4af4203ea0398"} Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.868821 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.878212 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0300748c-8b02-462f-8601-89536212a820-config-data" (OuterVolumeSpecName: "config-data") pod "0300748c-8b02-462f-8601-89536212a820" (UID: "0300748c-8b02-462f-8601-89536212a820"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.938346 4945 scope.go:117] "RemoveContainer" containerID="b6f7476700eb281ce9568d0797af82b3e126e90a10235862a89854829d11926f" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.943147 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0300748c-8b02-462f-8601-89536212a820-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.943302 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm25j\" (UniqueName: \"kubernetes.io/projected/0300748c-8b02-462f-8601-89536212a820-kube-api-access-vm25j\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.943317 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0300748c-8b02-462f-8601-89536212a820-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.953605 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.967059 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.968773 4945 scope.go:117] "RemoveContainer" containerID="b6f7476700eb281ce9568d0797af82b3e126e90a10235862a89854829d11926f" Jan 09 00:56:58 crc kubenswrapper[4945]: E0109 00:56:58.969382 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6f7476700eb281ce9568d0797af82b3e126e90a10235862a89854829d11926f\": container with ID starting with b6f7476700eb281ce9568d0797af82b3e126e90a10235862a89854829d11926f not found: ID does not exist" containerID="b6f7476700eb281ce9568d0797af82b3e126e90a10235862a89854829d11926f" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.969427 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6f7476700eb281ce9568d0797af82b3e126e90a10235862a89854829d11926f"} err="failed to get container status \"b6f7476700eb281ce9568d0797af82b3e126e90a10235862a89854829d11926f\": rpc error: code = NotFound desc = could not find container \"b6f7476700eb281ce9568d0797af82b3e126e90a10235862a89854829d11926f\": container with ID starting with b6f7476700eb281ce9568d0797af82b3e126e90a10235862a89854829d11926f not found: ID does not exist" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.976239 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:56:58 crc kubenswrapper[4945]: E0109 00:56:58.976739 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0300748c-8b02-462f-8601-89536212a820" containerName="nova-cell1-conductor-conductor" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.976755 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="0300748c-8b02-462f-8601-89536212a820" containerName="nova-cell1-conductor-conductor" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.976937 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="0300748c-8b02-462f-8601-89536212a820" containerName="nova-cell1-conductor-conductor" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.977679 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.980300 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 09 00:56:58 crc kubenswrapper[4945]: I0109 00:56:58.990564 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.122370 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.146456 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58fbh\" (UniqueName: \"kubernetes.io/projected/e8041767-cb7e-460e-b5c6-d5de80c5f244-kube-api-access-58fbh\") pod \"nova-scheduler-0\" (UID: \"e8041767-cb7e-460e-b5c6-d5de80c5f244\") " pod="openstack/nova-scheduler-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.146638 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8041767-cb7e-460e-b5c6-d5de80c5f244-config-data\") pod \"nova-scheduler-0\" (UID: \"e8041767-cb7e-460e-b5c6-d5de80c5f244\") " pod="openstack/nova-scheduler-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.146824 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8041767-cb7e-460e-b5c6-d5de80c5f244-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e8041767-cb7e-460e-b5c6-d5de80c5f244\") " pod="openstack/nova-scheduler-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.233122 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.243044 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.250942 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.252093 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.253115 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58fbh\" (UniqueName: \"kubernetes.io/projected/e8041767-cb7e-460e-b5c6-d5de80c5f244-kube-api-access-58fbh\") pod \"nova-scheduler-0\" (UID: \"e8041767-cb7e-460e-b5c6-d5de80c5f244\") " pod="openstack/nova-scheduler-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.253176 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8041767-cb7e-460e-b5c6-d5de80c5f244-config-data\") pod \"nova-scheduler-0\" (UID: \"e8041767-cb7e-460e-b5c6-d5de80c5f244\") " pod="openstack/nova-scheduler-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.253220 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8041767-cb7e-460e-b5c6-d5de80c5f244-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e8041767-cb7e-460e-b5c6-d5de80c5f244\") " pod="openstack/nova-scheduler-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.255175 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.258674 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8041767-cb7e-460e-b5c6-d5de80c5f244-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e8041767-cb7e-460e-b5c6-d5de80c5f244\") " pod="openstack/nova-scheduler-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.259053 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8041767-cb7e-460e-b5c6-d5de80c5f244-config-data\") pod \"nova-scheduler-0\" (UID: \"e8041767-cb7e-460e-b5c6-d5de80c5f244\") " pod="openstack/nova-scheduler-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.264206 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.275558 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58fbh\" (UniqueName: \"kubernetes.io/projected/e8041767-cb7e-460e-b5c6-d5de80c5f244-kube-api-access-58fbh\") pod \"nova-scheduler-0\" (UID: \"e8041767-cb7e-460e-b5c6-d5de80c5f244\") " pod="openstack/nova-scheduler-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.305266 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.355138 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35509ac-ad70-4941-9024-c4bbe22a7497-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a35509ac-ad70-4941-9024-c4bbe22a7497\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.355202 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pxtz\" (UniqueName: \"kubernetes.io/projected/a35509ac-ad70-4941-9024-c4bbe22a7497-kube-api-access-7pxtz\") pod \"nova-cell1-conductor-0\" (UID: \"a35509ac-ad70-4941-9024-c4bbe22a7497\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.355258 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35509ac-ad70-4941-9024-c4bbe22a7497-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a35509ac-ad70-4941-9024-c4bbe22a7497\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.457177 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35509ac-ad70-4941-9024-c4bbe22a7497-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a35509ac-ad70-4941-9024-c4bbe22a7497\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.457467 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pxtz\" (UniqueName: \"kubernetes.io/projected/a35509ac-ad70-4941-9024-c4bbe22a7497-kube-api-access-7pxtz\") pod \"nova-cell1-conductor-0\" (UID: \"a35509ac-ad70-4941-9024-c4bbe22a7497\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.457504 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35509ac-ad70-4941-9024-c4bbe22a7497-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a35509ac-ad70-4941-9024-c4bbe22a7497\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.463612 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35509ac-ad70-4941-9024-c4bbe22a7497-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a35509ac-ad70-4941-9024-c4bbe22a7497\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.463997 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35509ac-ad70-4941-9024-c4bbe22a7497-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a35509ac-ad70-4941-9024-c4bbe22a7497\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.487813 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pxtz\" (UniqueName: \"kubernetes.io/projected/a35509ac-ad70-4941-9024-c4bbe22a7497-kube-api-access-7pxtz\") pod \"nova-cell1-conductor-0\" (UID: \"a35509ac-ad70-4941-9024-c4bbe22a7497\") " pod="openstack/nova-cell1-conductor-0" Jan 09 00:56:59 crc kubenswrapper[4945]: 
I0109 00:56:59.576573 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.745347 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.869759 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3572dc12-51d9-4923-ae10-b803485aa49b" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.78:8775/\": read tcp 10.217.0.2:33618->10.217.1.78:8775: read: connection reset by peer" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.869975 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3572dc12-51d9-4923-ae10-b803485aa49b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.78:8775/\": read tcp 10.217.0.2:33602->10.217.1.78:8775: read: connection reset by peer" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.893895 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"076c2b0b-d880-4de5-8e11-796837092802","Type":"ContainerStarted","Data":"85b6bfb4fec42b8ffdb0b21f100272e175b554db5864467658af3217f01bb309"} Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.893947 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"076c2b0b-d880-4de5-8e11-796837092802","Type":"ContainerStarted","Data":"a608ae583d8e13871a98025dcbe9ac51b6dce0404717f5adda02a3dbc63a4ea8"} Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.895788 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e8041767-cb7e-460e-b5c6-d5de80c5f244","Type":"ContainerStarted","Data":"7a90d40eb2938e9d0c7c2f4adc7f9f2f1e4cf7495e66d1ea21b6629290ee56e9"} Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.904498 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.77:8774/\": read tcp 10.217.0.2:44434->10.217.1.77:8774: read: connection reset by peer" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.904755 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.77:8774/\": read tcp 10.217.0.2:44450->10.217.1.77:8774: read: connection reset by peer" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.927260 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.9272382179999998 podStartE2EDuration="1.927238218s" podCreationTimestamp="2026-01-09 00:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:56:59.91797536 +0000 UTC m=+6090.229134306" watchObservedRunningTime="2026-01-09 00:56:59.927238218 +0000 UTC m=+6090.238397164" Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.963423 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 00:56:59 crc kubenswrapper[4945]: I0109 00:56:59.963928 4945 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-cell0-conductor-0" podUID="7c618d8e-c04a-4e54-ac99-03c2f2e963d7" containerName="nova-cell0-conductor-conductor" containerID="cri-o://2b9f95246e2f9bd3db437a3ef1b40b3b9655bcb64ceb1c60024721e80ea1a741" gracePeriod=30 Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.066559 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0300748c-8b02-462f-8601-89536212a820" path="/var/lib/kubelet/pods/0300748c-8b02-462f-8601-89536212a820/volumes" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.067323 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32828769-35bc-4ff6-b715-62d67c23e6e2" path="/var/lib/kubelet/pods/32828769-35bc-4ff6-b715-62d67c23e6e2/volumes" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.068052 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86b1035d-f717-456a-9a37-a7930dfe5a22" path="/var/lib/kubelet/pods/86b1035d-f717-456a-9a37-a7930dfe5a22/volumes" Jan 09 00:57:00 crc kubenswrapper[4945]: W0109 00:57:00.113807 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda35509ac_ad70_4941_9024_c4bbe22a7497.slice/crio-b5a8555217da9d6885ea56ad982d193de57ae3037a296a7d1980b86bdc04d792 WatchSource:0}: Error finding container b5a8555217da9d6885ea56ad982d193de57ae3037a296a7d1980b86bdc04d792: Status 404 returned error can't find the container with id b5a8555217da9d6885ea56ad982d193de57ae3037a296a7d1980b86bdc04d792 Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.116952 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.420186 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.461106 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.584665 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3572dc12-51d9-4923-ae10-b803485aa49b-logs\") pod \"3572dc12-51d9-4923-ae10-b803485aa49b\" (UID: \"3572dc12-51d9-4923-ae10-b803485aa49b\") " Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.584719 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqbzm\" (UniqueName: \"kubernetes.io/projected/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-kube-api-access-sqbzm\") pod \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\" (UID: \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\") " Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.584790 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-config-data\") pod \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\" (UID: \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\") " Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.584835 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rtqj\" (UniqueName: \"kubernetes.io/projected/3572dc12-51d9-4923-ae10-b803485aa49b-kube-api-access-8rtqj\") pod \"3572dc12-51d9-4923-ae10-b803485aa49b\" (UID: \"3572dc12-51d9-4923-ae10-b803485aa49b\") " Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.584888 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3572dc12-51d9-4923-ae10-b803485aa49b-combined-ca-bundle\") pod \"3572dc12-51d9-4923-ae10-b803485aa49b\" (UID: \"3572dc12-51d9-4923-ae10-b803485aa49b\") " Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.584928 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-logs\") pod \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\" (UID: \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\") " Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.584948 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-combined-ca-bundle\") pod \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\" (UID: \"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7\") " Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.585049 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3572dc12-51d9-4923-ae10-b803485aa49b-config-data\") pod \"3572dc12-51d9-4923-ae10-b803485aa49b\" (UID: \"3572dc12-51d9-4923-ae10-b803485aa49b\") " Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.586196 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-logs" (OuterVolumeSpecName: "logs") pod "e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" (UID: "e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.586477 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3572dc12-51d9-4923-ae10-b803485aa49b-logs" (OuterVolumeSpecName: "logs") pod "3572dc12-51d9-4923-ae10-b803485aa49b" (UID: "3572dc12-51d9-4923-ae10-b803485aa49b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.602726 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3572dc12-51d9-4923-ae10-b803485aa49b-kube-api-access-8rtqj" (OuterVolumeSpecName: "kube-api-access-8rtqj") pod "3572dc12-51d9-4923-ae10-b803485aa49b" (UID: "3572dc12-51d9-4923-ae10-b803485aa49b"). InnerVolumeSpecName "kube-api-access-8rtqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.631269 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-kube-api-access-sqbzm" (OuterVolumeSpecName: "kube-api-access-sqbzm") pod "e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" (UID: "e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7"). InnerVolumeSpecName "kube-api-access-sqbzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.655972 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" (UID: "e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.665872 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-config-data" (OuterVolumeSpecName: "config-data") pod "e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" (UID: "e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.666113 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3572dc12-51d9-4923-ae10-b803485aa49b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3572dc12-51d9-4923-ae10-b803485aa49b" (UID: "3572dc12-51d9-4923-ae10-b803485aa49b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.667232 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3572dc12-51d9-4923-ae10-b803485aa49b-config-data" (OuterVolumeSpecName: "config-data") pod "3572dc12-51d9-4923-ae10-b803485aa49b" (UID: "3572dc12-51d9-4923-ae10-b803485aa49b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.688249 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3572dc12-51d9-4923-ae10-b803485aa49b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.688291 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-logs\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.688301 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.688309 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3572dc12-51d9-4923-ae10-b803485aa49b-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.688317 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3572dc12-51d9-4923-ae10-b803485aa49b-logs\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.688326 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqbzm\" (UniqueName: \"kubernetes.io/projected/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-kube-api-access-sqbzm\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.688336 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.688344 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rtqj\" (UniqueName: \"kubernetes.io/projected/3572dc12-51d9-4923-ae10-b803485aa49b-kube-api-access-8rtqj\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.906364 4945 generic.go:334] "Generic (PLEG): container finished" podID="3572dc12-51d9-4923-ae10-b803485aa49b" containerID="f46b50520b36bfe2e5dcec7684cee9c20b6ac2bcba9178c7a48cbe1ab967285a" exitCode=0 Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.906438 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3572dc12-51d9-4923-ae10-b803485aa49b","Type":"ContainerDied","Data":"f46b50520b36bfe2e5dcec7684cee9c20b6ac2bcba9178c7a48cbe1ab967285a"} Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.906448 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.906469 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3572dc12-51d9-4923-ae10-b803485aa49b","Type":"ContainerDied","Data":"bc4da872d73fdad8b63bc0ed9e953f41bf6b69d4aaa039d548027d2f6769cba6"} Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.906492 4945 scope.go:117] "RemoveContainer" containerID="f46b50520b36bfe2e5dcec7684cee9c20b6ac2bcba9178c7a48cbe1ab967285a" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.907713 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a35509ac-ad70-4941-9024-c4bbe22a7497","Type":"ContainerStarted","Data":"81008a15da69ec3abc212239894674df87309ee8e159beed7582615e3602b411"} Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.908478 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a35509ac-ad70-4941-9024-c4bbe22a7497","Type":"ContainerStarted","Data":"b5a8555217da9d6885ea56ad982d193de57ae3037a296a7d1980b86bdc04d792"} Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.908507 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.913681 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e8041767-cb7e-460e-b5c6-d5de80c5f244","Type":"ContainerStarted","Data":"b404a7c658e63aed5661cc730e833280aac4914ee8a78764d8031c4332132695"} Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.919327 4945 generic.go:334] "Generic (PLEG): container finished" podID="e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" containerID="664b755a44727ae00b8e44a22f3782602e7fd09dc5e16f41b6e133a3b1d13dff" exitCode=0 Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.919806 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.920200 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7","Type":"ContainerDied","Data":"664b755a44727ae00b8e44a22f3782602e7fd09dc5e16f41b6e133a3b1d13dff"} Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.920227 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7","Type":"ContainerDied","Data":"3cf87b5009311829bc836a0838ad205afea4628e055233c1cff867eac9602ec8"} Jan 09 00:57:00 crc kubenswrapper[4945]: E0109 00:57:00.940364 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2b9f95246e2f9bd3db437a3ef1b40b3b9655bcb64ceb1c60024721e80ea1a741" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.940549 4945 scope.go:117] "RemoveContainer" containerID="566b4942bf60106a263cd33197255f513717019b4725b00b9bf9222cc90505a6" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.943480 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.943466444 podStartE2EDuration="1.943466444s" podCreationTimestamp="2026-01-09 00:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:57:00.937309162 +0000 UTC m=+6091.248468108" watchObservedRunningTime="2026-01-09 00:57:00.943466444 +0000 UTC m=+6091.254625390" Jan 09 00:57:00 crc kubenswrapper[4945]: E0109 00:57:00.948891 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2b9f95246e2f9bd3db437a3ef1b40b3b9655bcb64ceb1c60024721e80ea1a741" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 09 00:57:00 crc kubenswrapper[4945]: E0109 00:57:00.961474 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2b9f95246e2f9bd3db437a3ef1b40b3b9655bcb64ceb1c60024721e80ea1a741" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 09 00:57:00 crc kubenswrapper[4945]: E0109 00:57:00.961578 4945 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="7c618d8e-c04a-4e54-ac99-03c2f2e963d7" containerName="nova-cell0-conductor-conductor" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.978349 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.979792 4945 scope.go:117] "RemoveContainer" containerID="f46b50520b36bfe2e5dcec7684cee9c20b6ac2bcba9178c7a48cbe1ab967285a" Jan 09 00:57:00 crc kubenswrapper[4945]: E0109 00:57:00.980480 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f46b50520b36bfe2e5dcec7684cee9c20b6ac2bcba9178c7a48cbe1ab967285a\": container with 
ID starting with f46b50520b36bfe2e5dcec7684cee9c20b6ac2bcba9178c7a48cbe1ab967285a not found: ID does not exist" containerID="f46b50520b36bfe2e5dcec7684cee9c20b6ac2bcba9178c7a48cbe1ab967285a" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.980509 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f46b50520b36bfe2e5dcec7684cee9c20b6ac2bcba9178c7a48cbe1ab967285a"} err="failed to get container status \"f46b50520b36bfe2e5dcec7684cee9c20b6ac2bcba9178c7a48cbe1ab967285a\": rpc error: code = NotFound desc = could not find container \"f46b50520b36bfe2e5dcec7684cee9c20b6ac2bcba9178c7a48cbe1ab967285a\": container with ID starting with f46b50520b36bfe2e5dcec7684cee9c20b6ac2bcba9178c7a48cbe1ab967285a not found: ID does not exist" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.980530 4945 scope.go:117] "RemoveContainer" containerID="566b4942bf60106a263cd33197255f513717019b4725b00b9bf9222cc90505a6" Jan 09 00:57:00 crc kubenswrapper[4945]: E0109 00:57:00.981035 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"566b4942bf60106a263cd33197255f513717019b4725b00b9bf9222cc90505a6\": container with ID starting with 566b4942bf60106a263cd33197255f513717019b4725b00b9bf9222cc90505a6 not found: ID does not exist" containerID="566b4942bf60106a263cd33197255f513717019b4725b00b9bf9222cc90505a6" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.981072 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"566b4942bf60106a263cd33197255f513717019b4725b00b9bf9222cc90505a6"} err="failed to get container status \"566b4942bf60106a263cd33197255f513717019b4725b00b9bf9222cc90505a6\": rpc error: code = NotFound desc = could not find container \"566b4942bf60106a263cd33197255f513717019b4725b00b9bf9222cc90505a6\": container with ID starting with 566b4942bf60106a263cd33197255f513717019b4725b00b9bf9222cc90505a6 not found: ID does not exist" Jan 09 00:57:00 crc kubenswrapper[4945]: I0109 00:57:00.981089 4945 scope.go:117] "RemoveContainer" containerID="664b755a44727ae00b8e44a22f3782602e7fd09dc5e16f41b6e133a3b1d13dff" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.029024 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.048747 4945 scope.go:117] "RemoveContainer" containerID="8e2e016877059bd5020fc6726e36fce6c91890aa2564c54f6d99324288d9fae7" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.063046 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:57:01 crc kubenswrapper[4945]: E0109 00:57:01.063455 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3572dc12-51d9-4923-ae10-b803485aa49b" containerName="nova-metadata-metadata" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.063469 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3572dc12-51d9-4923-ae10-b803485aa49b" containerName="nova-metadata-metadata" Jan 09 00:57:01 crc kubenswrapper[4945]: E0109 00:57:01.063500 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" containerName="nova-api-log" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.063508 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" containerName="nova-api-log" Jan 09 00:57:01 crc kubenswrapper[4945]: E0109 00:57:01.063519 4945 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" containerName="nova-api-api" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.063526 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" containerName="nova-api-api" Jan 09 00:57:01 crc kubenswrapper[4945]: E0109 00:57:01.063540 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3572dc12-51d9-4923-ae10-b803485aa49b" containerName="nova-metadata-log" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.063548 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3572dc12-51d9-4923-ae10-b803485aa49b" containerName="nova-metadata-log" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.063738 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" containerName="nova-api-log" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.063757 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" containerName="nova-api-api" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.063770 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="3572dc12-51d9-4923-ae10-b803485aa49b" containerName="nova-metadata-log" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.063780 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="3572dc12-51d9-4923-ae10-b803485aa49b" containerName="nova-metadata-metadata" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.064748 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.078926 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.088867 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.088842161 podStartE2EDuration="3.088842161s" podCreationTimestamp="2026-01-09 00:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:57:01.010233227 +0000 UTC m=+6091.321392173" watchObservedRunningTime="2026-01-09 00:57:01.088842161 +0000 UTC m=+6091.400001117" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.091065 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.107557 4945 scope.go:117] "RemoveContainer" containerID="664b755a44727ae00b8e44a22f3782602e7fd09dc5e16f41b6e133a3b1d13dff" Jan 09 00:57:01 crc kubenswrapper[4945]: E0109 00:57:01.116952 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"664b755a44727ae00b8e44a22f3782602e7fd09dc5e16f41b6e133a3b1d13dff\": container with ID starting with 664b755a44727ae00b8e44a22f3782602e7fd09dc5e16f41b6e133a3b1d13dff not found: ID does not exist" containerID="664b755a44727ae00b8e44a22f3782602e7fd09dc5e16f41b6e133a3b1d13dff" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.117018 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"664b755a44727ae00b8e44a22f3782602e7fd09dc5e16f41b6e133a3b1d13dff"} err="failed to get container status 
\"664b755a44727ae00b8e44a22f3782602e7fd09dc5e16f41b6e133a3b1d13dff\": rpc error: code = NotFound desc = could not find container \"664b755a44727ae00b8e44a22f3782602e7fd09dc5e16f41b6e133a3b1d13dff\": container with ID starting with 664b755a44727ae00b8e44a22f3782602e7fd09dc5e16f41b6e133a3b1d13dff not found: ID does not exist" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.117047 4945 scope.go:117] "RemoveContainer" containerID="8e2e016877059bd5020fc6726e36fce6c91890aa2564c54f6d99324288d9fae7" Jan 09 00:57:01 crc kubenswrapper[4945]: E0109 00:57:01.117900 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e2e016877059bd5020fc6726e36fce6c91890aa2564c54f6d99324288d9fae7\": container with ID starting with 8e2e016877059bd5020fc6726e36fce6c91890aa2564c54f6d99324288d9fae7 not found: ID does not exist" containerID="8e2e016877059bd5020fc6726e36fce6c91890aa2564c54f6d99324288d9fae7" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.117944 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e2e016877059bd5020fc6726e36fce6c91890aa2564c54f6d99324288d9fae7"} err="failed to get container status \"8e2e016877059bd5020fc6726e36fce6c91890aa2564c54f6d99324288d9fae7\": rpc error: code = NotFound desc = could not find container \"8e2e016877059bd5020fc6726e36fce6c91890aa2564c54f6d99324288d9fae7\": container with ID starting with 8e2e016877059bd5020fc6726e36fce6c91890aa2564c54f6d99324288d9fae7 not found: ID does not exist" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.122044 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.137085 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.143045 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.145363 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.153245 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.156868 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.197742 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl5kf\" (UniqueName: \"kubernetes.io/projected/bf0638cb-7d95-4120-9e0f-f14212f84368-kube-api-access-zl5kf\") pod \"nova-metadata-0\" (UID: \"bf0638cb-7d95-4120-9e0f-f14212f84368\") " pod="openstack/nova-metadata-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.197834 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0638cb-7d95-4120-9e0f-f14212f84368-logs\") pod \"nova-metadata-0\" (UID: \"bf0638cb-7d95-4120-9e0f-f14212f84368\") " pod="openstack/nova-metadata-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.197903 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0638cb-7d95-4120-9e0f-f14212f84368-config-data\") pod \"nova-metadata-0\" (UID: \"bf0638cb-7d95-4120-9e0f-f14212f84368\") " pod="openstack/nova-metadata-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.197930 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0638cb-7d95-4120-9e0f-f14212f84368-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf0638cb-7d95-4120-9e0f-f14212f84368\") " pod="openstack/nova-metadata-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.299233 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2594cf9d-f20a-4554-96c6-54fe285cc3a4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\") " pod="openstack/nova-api-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.299287 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2594cf9d-f20a-4554-96c6-54fe285cc3a4-config-data\") pod \"nova-api-0\" (UID: \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\") " pod="openstack/nova-api-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.299344 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0638cb-7d95-4120-9e0f-f14212f84368-config-data\") pod \"nova-metadata-0\" (UID: \"bf0638cb-7d95-4120-9e0f-f14212f84368\") " pod="openstack/nova-metadata-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.299392 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2594cf9d-f20a-4554-96c6-54fe285cc3a4-logs\") pod \"nova-api-0\" (UID: \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\") " pod="openstack/nova-api-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.299504 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bf0638cb-7d95-4120-9e0f-f14212f84368-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf0638cb-7d95-4120-9e0f-f14212f84368\") " pod="openstack/nova-metadata-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.299660 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5kf\" (UniqueName: \"kubernetes.io/projected/bf0638cb-7d95-4120-9e0f-f14212f84368-kube-api-access-zl5kf\") pod \"nova-metadata-0\" (UID: \"bf0638cb-7d95-4120-9e0f-f14212f84368\") " pod="openstack/nova-metadata-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.299862 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7tq8\" (UniqueName: \"kubernetes.io/projected/2594cf9d-f20a-4554-96c6-54fe285cc3a4-kube-api-access-w7tq8\") pod \"nova-api-0\" (UID: \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\") " pod="openstack/nova-api-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.300101 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0638cb-7d95-4120-9e0f-f14212f84368-logs\") pod \"nova-metadata-0\" (UID: \"bf0638cb-7d95-4120-9e0f-f14212f84368\") " pod="openstack/nova-metadata-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.300731 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0638cb-7d95-4120-9e0f-f14212f84368-logs\") pod \"nova-metadata-0\" (UID: \"bf0638cb-7d95-4120-9e0f-f14212f84368\") " pod="openstack/nova-metadata-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.305279 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0638cb-7d95-4120-9e0f-f14212f84368-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf0638cb-7d95-4120-9e0f-f14212f84368\") " pod="openstack/nova-metadata-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.314306 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0638cb-7d95-4120-9e0f-f14212f84368-config-data\") pod \"nova-metadata-0\" (UID: \"bf0638cb-7d95-4120-9e0f-f14212f84368\") " pod="openstack/nova-metadata-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.326517 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5kf\" (UniqueName: \"kubernetes.io/projected/bf0638cb-7d95-4120-9e0f-f14212f84368-kube-api-access-zl5kf\") pod \"nova-metadata-0\" (UID: \"bf0638cb-7d95-4120-9e0f-f14212f84368\") " pod="openstack/nova-metadata-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.412178 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2594cf9d-f20a-4554-96c6-54fe285cc3a4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\") " pod="openstack/nova-api-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.412481 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2594cf9d-f20a-4554-96c6-54fe285cc3a4-config-data\") pod \"nova-api-0\" (UID: \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\") " pod="openstack/nova-api-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.412521 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/2594cf9d-f20a-4554-96c6-54fe285cc3a4-logs\") pod \"nova-api-0\" (UID: \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\") " pod="openstack/nova-api-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.412634 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7tq8\" (UniqueName: \"kubernetes.io/projected/2594cf9d-f20a-4554-96c6-54fe285cc3a4-kube-api-access-w7tq8\") pod \"nova-api-0\" (UID: \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\") " pod="openstack/nova-api-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.413566 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2594cf9d-f20a-4554-96c6-54fe285cc3a4-logs\") pod \"nova-api-0\" (UID: \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\") " pod="openstack/nova-api-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.416661 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2594cf9d-f20a-4554-96c6-54fe285cc3a4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\") " pod="openstack/nova-api-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.418088 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.418666 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2594cf9d-f20a-4554-96c6-54fe285cc3a4-config-data\") pod \"nova-api-0\" (UID: \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\") " pod="openstack/nova-api-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.432841 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7tq8\" (UniqueName: \"kubernetes.io/projected/2594cf9d-f20a-4554-96c6-54fe285cc3a4-kube-api-access-w7tq8\") pod \"nova-api-0\" (UID: \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\") " pod="openstack/nova-api-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.473911 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.897862 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 00:57:01 crc kubenswrapper[4945]: W0109 00:57:01.901253 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf0638cb_7d95_4120_9e0f_f14212f84368.slice/crio-e9edc4cde644e22ec06e1336f03aa93933a5c799540ac1422cb97675f477b9d6 WatchSource:0}: Error finding container e9edc4cde644e22ec06e1336f03aa93933a5c799540ac1422cb97675f477b9d6: Status 404 returned error can't find the container with id e9edc4cde644e22ec06e1336f03aa93933a5c799540ac1422cb97675f477b9d6 Jan 09 00:57:01 crc kubenswrapper[4945]: I0109 00:57:01.939870 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf0638cb-7d95-4120-9e0f-f14212f84368","Type":"ContainerStarted","Data":"e9edc4cde644e22ec06e1336f03aa93933a5c799540ac1422cb97675f477b9d6"} Jan 09 00:57:02 crc kubenswrapper[4945]: I0109 00:57:02.021570 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3572dc12-51d9-4923-ae10-b803485aa49b" path="/var/lib/kubelet/pods/3572dc12-51d9-4923-ae10-b803485aa49b/volumes" Jan 09 00:57:02 crc kubenswrapper[4945]: I0109 00:57:02.025894 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7" path="/var/lib/kubelet/pods/e59854ee-c9e8-4dcb-a2e3-43c3adebe5a7/volumes" Jan 09 00:57:02 crc kubenswrapper[4945]: I0109 00:57:02.032696 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 00:57:02 crc kubenswrapper[4945]: I0109 00:57:02.973636 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2594cf9d-f20a-4554-96c6-54fe285cc3a4","Type":"ContainerStarted","Data":"092598a02dc4722cc8ed8429dfc50031f8ad429f4fc3e3be781a87ca319cdf8d"} Jan 09 00:57:02 crc kubenswrapper[4945]: I0109 00:57:02.973989 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2594cf9d-f20a-4554-96c6-54fe285cc3a4","Type":"ContainerStarted","Data":"a98790eb60522635b3ca0872529a216ea87ce416eccb56d689ace88fcfeb2536"} Jan 09 00:57:02 crc kubenswrapper[4945]: I0109 00:57:02.974026 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2594cf9d-f20a-4554-96c6-54fe285cc3a4","Type":"ContainerStarted","Data":"a74d0b3a1070fcd2f3ca4ac77c841c30d69933821fe3d7a7059cd9fffdc93bca"} Jan 09 00:57:02 crc kubenswrapper[4945]: I0109 00:57:02.976375 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf0638cb-7d95-4120-9e0f-f14212f84368","Type":"ContainerStarted","Data":"ca72050d06ae1ce62944579d59e1b921c32937f95c6feb5e13577e7821c0ecc4"} Jan 09 00:57:02 crc kubenswrapper[4945]: I0109 00:57:02.976398 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf0638cb-7d95-4120-9e0f-f14212f84368","Type":"ContainerStarted","Data":"3e926528bfa7e816d0cc08a2d3446da63a6f54aa16beeadc8359e06c5725f502"} Jan 09 00:57:03 crc kubenswrapper[4945]: I0109 00:57:03.000666 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.000642147 podStartE2EDuration="2.000642147s" podCreationTimestamp="2026-01-09 00:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:57:02.998458503 +0000 UTC m=+6093.309617459" watchObservedRunningTime="2026-01-09 00:57:03.000642147 +0000 UTC m=+6093.311801093" Jan 09 00:57:03 crc kubenswrapper[4945]: I0109 00:57:03.037340 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.037317169 podStartE2EDuration="3.037317169s" podCreationTimestamp="2026-01-09 00:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:57:03.033123436 +0000 UTC m=+6093.344282402" watchObservedRunningTime="2026-01-09 00:57:03.037317169 +0000 UTC m=+6093.348476115" Jan 09 00:57:03 crc kubenswrapper[4945]: I0109 00:57:03.608569 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:57:04 crc kubenswrapper[4945]: I0109 00:57:04.306660 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 09 00:57:04 crc kubenswrapper[4945]: I0109 00:57:04.969961 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.004511 4945 generic.go:334] "Generic (PLEG): container finished" podID="7c618d8e-c04a-4e54-ac99-03c2f2e963d7" containerID="2b9f95246e2f9bd3db437a3ef1b40b3b9655bcb64ceb1c60024721e80ea1a741" exitCode=0 Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.004636 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7c618d8e-c04a-4e54-ac99-03c2f2e963d7","Type":"ContainerDied","Data":"2b9f95246e2f9bd3db437a3ef1b40b3b9655bcb64ceb1c60024721e80ea1a741"} Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.004708 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7c618d8e-c04a-4e54-ac99-03c2f2e963d7","Type":"ContainerDied","Data":"f83858314f2de6214e993c72f12720572dad2889af9a62f7ad8c2bb455761711"} Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.004750 4945 scope.go:117] "RemoveContainer" containerID="2b9f95246e2f9bd3db437a3ef1b40b3b9655bcb64ceb1c60024721e80ea1a741" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.004927 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.029657 4945 scope.go:117] "RemoveContainer" containerID="2b9f95246e2f9bd3db437a3ef1b40b3b9655bcb64ceb1c60024721e80ea1a741" Jan 09 00:57:05 crc kubenswrapper[4945]: E0109 00:57:05.034767 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b9f95246e2f9bd3db437a3ef1b40b3b9655bcb64ceb1c60024721e80ea1a741\": container with ID starting with 2b9f95246e2f9bd3db437a3ef1b40b3b9655bcb64ceb1c60024721e80ea1a741 not found: ID does not exist" containerID="2b9f95246e2f9bd3db437a3ef1b40b3b9655bcb64ceb1c60024721e80ea1a741" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.034819 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b9f95246e2f9bd3db437a3ef1b40b3b9655bcb64ceb1c60024721e80ea1a741"} err="failed to get container status \"2b9f95246e2f9bd3db437a3ef1b40b3b9655bcb64ceb1c60024721e80ea1a741\": rpc error: code = NotFound desc = could not find container \"2b9f95246e2f9bd3db437a3ef1b40b3b9655bcb64ceb1c60024721e80ea1a741\": container with ID starting with 2b9f95246e2f9bd3db437a3ef1b40b3b9655bcb64ceb1c60024721e80ea1a741 not found: ID does not exist" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.086637 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-combined-ca-bundle\") pod \"7c618d8e-c04a-4e54-ac99-03c2f2e963d7\" (UID: \"7c618d8e-c04a-4e54-ac99-03c2f2e963d7\") " Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.086797 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-config-data\") pod \"7c618d8e-c04a-4e54-ac99-03c2f2e963d7\" (UID: \"7c618d8e-c04a-4e54-ac99-03c2f2e963d7\") " Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.086879 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r56hf\" (UniqueName: \"kubernetes.io/projected/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-kube-api-access-r56hf\") pod \"7c618d8e-c04a-4e54-ac99-03c2f2e963d7\" (UID: \"7c618d8e-c04a-4e54-ac99-03c2f2e963d7\") " Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.092524 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-kube-api-access-r56hf" (OuterVolumeSpecName: "kube-api-access-r56hf") pod "7c618d8e-c04a-4e54-ac99-03c2f2e963d7" (UID: "7c618d8e-c04a-4e54-ac99-03c2f2e963d7"). InnerVolumeSpecName "kube-api-access-r56hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.119333 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c618d8e-c04a-4e54-ac99-03c2f2e963d7" (UID: "7c618d8e-c04a-4e54-ac99-03c2f2e963d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.120366 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-config-data" (OuterVolumeSpecName: "config-data") pod "7c618d8e-c04a-4e54-ac99-03c2f2e963d7" (UID: "7c618d8e-c04a-4e54-ac99-03c2f2e963d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.189478 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.189505 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r56hf\" (UniqueName: \"kubernetes.io/projected/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-kube-api-access-r56hf\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.189515 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c618d8e-c04a-4e54-ac99-03c2f2e963d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.348895 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.353276 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.366847 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 00:57:05 crc kubenswrapper[4945]: E0109 00:57:05.367359 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c618d8e-c04a-4e54-ac99-03c2f2e963d7" containerName="nova-cell0-conductor-conductor" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.367417 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c618d8e-c04a-4e54-ac99-03c2f2e963d7" containerName="nova-cell0-conductor-conductor" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.367642 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c618d8e-c04a-4e54-ac99-03c2f2e963d7" containerName="nova-cell0-conductor-conductor" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.368298 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.372659 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.377308 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.494578 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1523f74d-4bdd-4d29-b779-1ff30d782fed-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1523f74d-4bdd-4d29-b779-1ff30d782fed\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.494719 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsclf\" (UniqueName: \"kubernetes.io/projected/1523f74d-4bdd-4d29-b779-1ff30d782fed-kube-api-access-wsclf\") pod \"nova-cell0-conductor-0\" (UID: \"1523f74d-4bdd-4d29-b779-1ff30d782fed\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.494776 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1523f74d-4bdd-4d29-b779-1ff30d782fed-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1523f74d-4bdd-4d29-b779-1ff30d782fed\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.597254 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsclf\" (UniqueName: \"kubernetes.io/projected/1523f74d-4bdd-4d29-b779-1ff30d782fed-kube-api-access-wsclf\") pod \"nova-cell0-conductor-0\" (UID: \"1523f74d-4bdd-4d29-b779-1ff30d782fed\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.597323 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1523f74d-4bdd-4d29-b779-1ff30d782fed-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1523f74d-4bdd-4d29-b779-1ff30d782fed\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.597399 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1523f74d-4bdd-4d29-b779-1ff30d782fed-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1523f74d-4bdd-4d29-b779-1ff30d782fed\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.601554 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1523f74d-4bdd-4d29-b779-1ff30d782fed-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1523f74d-4bdd-4d29-b779-1ff30d782fed\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.605688 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1523f74d-4bdd-4d29-b779-1ff30d782fed-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1523f74d-4bdd-4d29-b779-1ff30d782fed\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.618877 4945 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsclf\" (UniqueName: \"kubernetes.io/projected/1523f74d-4bdd-4d29-b779-1ff30d782fed-kube-api-access-wsclf\") pod \"nova-cell0-conductor-0\" (UID: \"1523f74d-4bdd-4d29-b779-1ff30d782fed\") " pod="openstack/nova-cell0-conductor-0" Jan 09 00:57:05 crc kubenswrapper[4945]: I0109 00:57:05.690013 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 09 00:57:06 crc kubenswrapper[4945]: I0109 00:57:06.019650 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c618d8e-c04a-4e54-ac99-03c2f2e963d7" path="/var/lib/kubelet/pods/7c618d8e-c04a-4e54-ac99-03c2f2e963d7/volumes" Jan 09 00:57:06 crc kubenswrapper[4945]: I0109 00:57:06.156490 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 00:57:06 crc kubenswrapper[4945]: W0109 00:57:06.161693 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1523f74d_4bdd_4d29_b779_1ff30d782fed.slice/crio-bae8b370027468bf3f78af62019edba079d7da09396355818f353269c8709994 WatchSource:0}: Error finding container bae8b370027468bf3f78af62019edba079d7da09396355818f353269c8709994: Status 404 returned error can't find the container with id bae8b370027468bf3f78af62019edba079d7da09396355818f353269c8709994 Jan 09 00:57:06 crc kubenswrapper[4945]: I0109 00:57:06.418584 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 00:57:06 crc kubenswrapper[4945]: I0109 00:57:06.418903 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 00:57:07 crc kubenswrapper[4945]: I0109 00:57:07.029560 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1523f74d-4bdd-4d29-b779-1ff30d782fed","Type":"ContainerStarted","Data":"b2c391687ba34c753d34322b097cbb7d928b3820599b33d5c27a50e567130203"} Jan 09 00:57:07 crc kubenswrapper[4945]: I0109 00:57:07.029611 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1523f74d-4bdd-4d29-b779-1ff30d782fed","Type":"ContainerStarted","Data":"bae8b370027468bf3f78af62019edba079d7da09396355818f353269c8709994"} Jan 09 00:57:07 crc kubenswrapper[4945]: I0109 00:57:07.029749 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 09 00:57:07 crc kubenswrapper[4945]: I0109 00:57:07.049895 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.049869108 podStartE2EDuration="2.049869108s" podCreationTimestamp="2026-01-09 00:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:57:07.046804072 +0000 UTC m=+6097.357963098" watchObservedRunningTime="2026-01-09 00:57:07.049869108 +0000 UTC m=+6097.361028054" Jan 09 00:57:08 crc kubenswrapper[4945]: I0109 00:57:08.609321 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:57:08 crc kubenswrapper[4945]: I0109 00:57:08.623533 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:57:09 crc kubenswrapper[4945]: I0109 00:57:09.059468 4945 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 09 00:57:09 crc kubenswrapper[4945]: I0109 00:57:09.305963 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 09 00:57:09 crc kubenswrapper[4945]: I0109 00:57:09.349063 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 09 00:57:09 crc kubenswrapper[4945]: I0109 00:57:09.620057 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 09 00:57:10 crc kubenswrapper[4945]: I0109 00:57:10.081061 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 09 00:57:11 crc kubenswrapper[4945]: I0109 00:57:11.419229 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 00:57:11 crc kubenswrapper[4945]: I0109 00:57:11.419317 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 00:57:11 crc kubenswrapper[4945]: I0109 00:57:11.474704 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 00:57:11 crc kubenswrapper[4945]: I0109 00:57:11.476069 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 00:57:12 crc kubenswrapper[4945]: I0109 00:57:12.501235 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bf0638cb-7d95-4120-9e0f-f14212f84368" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.89:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 00:57:12 crc kubenswrapper[4945]: I0109 00:57:12.501244 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="bf0638cb-7d95-4120-9e0f-f14212f84368" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.89:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 00:57:12 crc kubenswrapper[4945]: I0109 00:57:12.583328 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2594cf9d-f20a-4554-96c6-54fe285cc3a4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 00:57:12 crc kubenswrapper[4945]: I0109 00:57:12.583685 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2594cf9d-f20a-4554-96c6-54fe285cc3a4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.90:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 00:57:15 crc kubenswrapper[4945]: I0109 00:57:15.724458 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 09 00:57:15 crc kubenswrapper[4945]: I0109 00:57:15.748256 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 00:57:15 crc kubenswrapper[4945]: I0109 00:57:15.750825 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 00:57:15 crc kubenswrapper[4945]: I0109 00:57:15.754429 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 09 00:57:15 crc kubenswrapper[4945]: I0109 00:57:15.768526 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 00:57:15 crc kubenswrapper[4945]: I0109 00:57:15.899011 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:15 crc kubenswrapper[4945]: I0109 00:57:15.899061 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:15 crc kubenswrapper[4945]: I0109 00:57:15.899119 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-scripts\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:15 crc kubenswrapper[4945]: I0109 00:57:15.899507 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-config-data\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:15 crc kubenswrapper[4945]: I0109 00:57:15.899747 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:15 crc kubenswrapper[4945]: I0109 00:57:15.899824 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnj92\" (UniqueName: \"kubernetes.io/projected/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-kube-api-access-wnj92\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:16 crc kubenswrapper[4945]: I0109 00:57:16.009983 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-config-data\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:16 crc kubenswrapper[4945]: I0109 00:57:16.010070 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:16 crc kubenswrapper[4945]: I0109 00:57:16.010098 4945 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wnj92\" (UniqueName: \"kubernetes.io/projected/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-kube-api-access-wnj92\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:16 crc kubenswrapper[4945]: I0109 00:57:16.010129 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:16 crc kubenswrapper[4945]: I0109 00:57:16.010149 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:16 crc kubenswrapper[4945]: I0109 00:57:16.010517 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-scripts\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:16 crc kubenswrapper[4945]: I0109 00:57:16.010364 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:16 crc kubenswrapper[4945]: I0109 00:57:16.016772 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:16 crc kubenswrapper[4945]: I0109 00:57:16.018743 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-config-data\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:16 crc kubenswrapper[4945]: I0109 00:57:16.019556 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-scripts\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:16 crc kubenswrapper[4945]: I0109 00:57:16.035138 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:16 crc kubenswrapper[4945]: I0109 00:57:16.035511 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnj92\" (UniqueName: \"kubernetes.io/projected/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-kube-api-access-wnj92\") pod \"cinder-scheduler-0\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " pod="openstack/cinder-scheduler-0" Jan 09 
00:57:16 crc kubenswrapper[4945]: I0109 00:57:16.074880 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 00:57:16 crc kubenswrapper[4945]: I0109 00:57:16.542426 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 00:57:17 crc kubenswrapper[4945]: I0109 00:57:17.135217 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b86b0700-4264-4d8a-89ce-b8e82d0a2e19","Type":"ContainerStarted","Data":"c947b28a1570a19362047afb02adbf33a0ca7a98de28ea611aa9d94ff591e66f"} Jan 09 00:57:17 crc kubenswrapper[4945]: I0109 00:57:17.518898 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 09 00:57:17 crc kubenswrapper[4945]: I0109 00:57:17.519243 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e12d04f7-b525-4969-9683-f2dc3e7a6466" containerName="cinder-api-log" containerID="cri-o://3314717e5d65ec3fbb9f3a33862e6398e630a6a50957e0e3c029a0f20dc03902" gracePeriod=30 Jan 09 00:57:17 crc kubenswrapper[4945]: I0109 00:57:17.519351 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e12d04f7-b525-4969-9683-f2dc3e7a6466" containerName="cinder-api" containerID="cri-o://4aded64ae4c74c78824f3d096f921eb50e9d817b9d4cf65bc38cfc8b25016b02" gracePeriod=30 Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.152458 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b86b0700-4264-4d8a-89ce-b8e82d0a2e19","Type":"ContainerStarted","Data":"2fb16142759a5920f817d6ac12abf737701d005cf01cfa6f6a3b7156a998eb59"} Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.152827 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b86b0700-4264-4d8a-89ce-b8e82d0a2e19","Type":"ContainerStarted","Data":"00f13d88b90f8ec0d908c9b68727cf744c690e752806455cea730cb7ae2319e2"} Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.167325 4945 generic.go:334] "Generic (PLEG): container finished" podID="e12d04f7-b525-4969-9683-f2dc3e7a6466" containerID="3314717e5d65ec3fbb9f3a33862e6398e630a6a50957e0e3c029a0f20dc03902" exitCode=143 Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.167419 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e12d04f7-b525-4969-9683-f2dc3e7a6466","Type":"ContainerDied","Data":"3314717e5d65ec3fbb9f3a33862e6398e630a6a50957e0e3c029a0f20dc03902"} Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.184362 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.1843421100000002 podStartE2EDuration="3.18434211s" podCreationTimestamp="2026-01-09 00:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:57:18.17863223 +0000 UTC m=+6108.489791176" watchObservedRunningTime="2026-01-09 00:57:18.18434211 +0000 UTC m=+6108.495501056" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.256640 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.258271 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.261191 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.293102 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.358066 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-run\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.358428 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.358491 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.358534 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.358627 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.358659 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.358710 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.358738 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82vd2\" (UniqueName: \"kubernetes.io/projected/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-kube-api-access-82vd2\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: 
I0109 00:57:18.358774 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.358826 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.358867 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.358898 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.358924 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.358950 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.358977 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.359090 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.460735 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.460790 4945 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.460831 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.460850 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82vd2\" (UniqueName: \"kubernetes.io/projected/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-kube-api-access-82vd2\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.460868 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.460902 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.460918 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.460932 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.460949 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.460967 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.460984 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.461042 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.461066 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-run\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.461091 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.461114 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.461134 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.461235 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.461277 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.461298 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.461320 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.463173 
4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.463396 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.463483 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.463493 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-run\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.463510 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.463670 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.468207 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.469224 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.469846 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.472945 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " 
pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.476636 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.480428 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82vd2\" (UniqueName: \"kubernetes.io/projected/a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7-kube-api-access-82vd2\") pod \"cinder-volume-volume1-0\" (UID: \"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7\") " pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.583094 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.854486 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.856797 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.861502 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.871845 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.974505 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.974570 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-run\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.974597 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed788db-6586-4da2-896b-3efbc2ee48a9-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.974633 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.974696 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-dev\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.974773 4945 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed788db-6586-4da2-896b-3efbc2ee48a9-config-data\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.974818 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.974847 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.974893 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-lib-modules\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.975011 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.975094 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3ed788db-6586-4da2-896b-3efbc2ee48a9-ceph\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.975129 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9ss7\" (UniqueName: \"kubernetes.io/projected/3ed788db-6586-4da2-896b-3efbc2ee48a9-kube-api-access-b9ss7\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.975216 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ed788db-6586-4da2-896b-3efbc2ee48a9-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.975276 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ed788db-6586-4da2-896b-3efbc2ee48a9-scripts\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.975351 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:18 crc kubenswrapper[4945]: I0109 00:57:18.975374 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-sys\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077431 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ed788db-6586-4da2-896b-3efbc2ee48a9-scripts\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077526 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077558 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-sys\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077588 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077615 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-run\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077656 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed788db-6586-4da2-896b-3efbc2ee48a9-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077674 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077691 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077715 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-run\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077719 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-dev\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077749 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-dev\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077786 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077742 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-sys\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077808 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed788db-6586-4da2-896b-3efbc2ee48a9-config-data\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077853 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077877 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077906 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-lib-modules\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077948 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.077980 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/3ed788db-6586-4da2-896b-3efbc2ee48a9-ceph\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.078020 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9ss7\" (UniqueName: \"kubernetes.io/projected/3ed788db-6586-4da2-896b-3efbc2ee48a9-kube-api-access-b9ss7\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.078061 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ed788db-6586-4da2-896b-3efbc2ee48a9-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.078812 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.078888 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.078922 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-lib-modules\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.078970 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.079225 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ed788db-6586-4da2-896b-3efbc2ee48a9-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.082843 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ed788db-6586-4da2-896b-3efbc2ee48a9-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.082868 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ed788db-6586-4da2-896b-3efbc2ee48a9-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.084532 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceph\" (UniqueName: \"kubernetes.io/projected/3ed788db-6586-4da2-896b-3efbc2ee48a9-ceph\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.089360 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ed788db-6586-4da2-896b-3efbc2ee48a9-config-data\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.095011 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ed788db-6586-4da2-896b-3efbc2ee48a9-scripts\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.099244 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9ss7\" (UniqueName: \"kubernetes.io/projected/3ed788db-6586-4da2-896b-3efbc2ee48a9-kube-api-access-b9ss7\") pod \"cinder-backup-0\" (UID: \"3ed788db-6586-4da2-896b-3efbc2ee48a9\") " pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.124428 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 09 00:57:19 crc kubenswrapper[4945]: W0109 00:57:19.130834 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0efa3a6_9dd3_49ec_a33d_c68d8ff474b7.slice/crio-ba02b57b70958a1fc00d863fcff0aeb4d4da9ba62d9ef867e552581a66c085cb WatchSource:0}: Error finding container ba02b57b70958a1fc00d863fcff0aeb4d4da9ba62d9ef867e552581a66c085cb: Status 404 returned error can't find the container with id ba02b57b70958a1fc00d863fcff0aeb4d4da9ba62d9ef867e552581a66c085cb Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.179493 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7","Type":"ContainerStarted","Data":"ba02b57b70958a1fc00d863fcff0aeb4d4da9ba62d9ef867e552581a66c085cb"} Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.186359 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 09 00:57:19 crc kubenswrapper[4945]: I0109 00:57:19.818907 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 09 00:57:20 crc kubenswrapper[4945]: I0109 00:57:20.194305 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3ed788db-6586-4da2-896b-3efbc2ee48a9","Type":"ContainerStarted","Data":"31d39fd9e5f0a56ecc3f146899b4d8f50a666276e684dc6fe9bda8c72faba0d3"} Jan 09 00:57:20 crc kubenswrapper[4945]: I0109 00:57:20.676459 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="e12d04f7-b525-4969-9683-f2dc3e7a6466" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.1.85:8776/healthcheck\": read tcp 10.217.0.2:49884->10.217.1.85:8776: read: connection reset by peer" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.062385 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.078967 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.120215 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-scripts\") pod \"e12d04f7-b525-4969-9683-f2dc3e7a6466\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.120580 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47g8k\" (UniqueName: \"kubernetes.io/projected/e12d04f7-b525-4969-9683-f2dc3e7a6466-kube-api-access-47g8k\") pod \"e12d04f7-b525-4969-9683-f2dc3e7a6466\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.120718 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e12d04f7-b525-4969-9683-f2dc3e7a6466-etc-machine-id\") pod \"e12d04f7-b525-4969-9683-f2dc3e7a6466\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.120866 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-config-data\") pod \"e12d04f7-b525-4969-9683-f2dc3e7a6466\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.121047 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e12d04f7-b525-4969-9683-f2dc3e7a6466-logs\") pod \"e12d04f7-b525-4969-9683-f2dc3e7a6466\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.121285 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-config-data-custom\") pod \"e12d04f7-b525-4969-9683-f2dc3e7a6466\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.121533 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-combined-ca-bundle\") pod \"e12d04f7-b525-4969-9683-f2dc3e7a6466\" (UID: \"e12d04f7-b525-4969-9683-f2dc3e7a6466\") " Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.121118 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e12d04f7-b525-4969-9683-f2dc3e7a6466-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e12d04f7-b525-4969-9683-f2dc3e7a6466" (UID: "e12d04f7-b525-4969-9683-f2dc3e7a6466"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.122262 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e12d04f7-b525-4969-9683-f2dc3e7a6466-logs" (OuterVolumeSpecName: "logs") pod "e12d04f7-b525-4969-9683-f2dc3e7a6466" (UID: "e12d04f7-b525-4969-9683-f2dc3e7a6466"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.124143 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e12d04f7-b525-4969-9683-f2dc3e7a6466-logs\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.124195 4945 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e12d04f7-b525-4969-9683-f2dc3e7a6466-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.126816 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-scripts" (OuterVolumeSpecName: "scripts") pod "e12d04f7-b525-4969-9683-f2dc3e7a6466" (UID: "e12d04f7-b525-4969-9683-f2dc3e7a6466"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.126864 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e12d04f7-b525-4969-9683-f2dc3e7a6466-kube-api-access-47g8k" (OuterVolumeSpecName: "kube-api-access-47g8k") pod "e12d04f7-b525-4969-9683-f2dc3e7a6466" (UID: "e12d04f7-b525-4969-9683-f2dc3e7a6466"). InnerVolumeSpecName "kube-api-access-47g8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.131106 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e12d04f7-b525-4969-9683-f2dc3e7a6466" (UID: "e12d04f7-b525-4969-9683-f2dc3e7a6466"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.186029 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e12d04f7-b525-4969-9683-f2dc3e7a6466" (UID: "e12d04f7-b525-4969-9683-f2dc3e7a6466"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.211406 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3ed788db-6586-4da2-896b-3efbc2ee48a9","Type":"ContainerStarted","Data":"29d1a2c36f3bb463b76a65db8e61eb84f4a8373bb332869df0c814d3ad1bd5fd"} Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.213086 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7","Type":"ContainerStarted","Data":"d4bf84b14dcafa929ded4bac64ffb8cdbf0b7890d09fd45970b8de2847279a86"} Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.213109 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7","Type":"ContainerStarted","Data":"771a14997fdb72942066f876efda2771f4dfc2c148a15cf44ca2fb60cb5222fc"} Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.214899 4945 generic.go:334] "Generic (PLEG): container finished" podID="e12d04f7-b525-4969-9683-f2dc3e7a6466" containerID="4aded64ae4c74c78824f3d096f921eb50e9d817b9d4cf65bc38cfc8b25016b02" exitCode=0 Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.214931 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e12d04f7-b525-4969-9683-f2dc3e7a6466","Type":"ContainerDied","Data":"4aded64ae4c74c78824f3d096f921eb50e9d817b9d4cf65bc38cfc8b25016b02"} Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.214950 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e12d04f7-b525-4969-9683-f2dc3e7a6466","Type":"ContainerDied","Data":"3485d7d1cc9cf463fb0131f66ead9864de7438bdf739268eb503361407fc1abf"} Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.214971 4945 scope.go:117] "RemoveContainer" containerID="4aded64ae4c74c78824f3d096f921eb50e9d817b9d4cf65bc38cfc8b25016b02" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.215152 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.226512 4945 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.226543 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.226557 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.226569 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47g8k\" (UniqueName: \"kubernetes.io/projected/e12d04f7-b525-4969-9683-f2dc3e7a6466-kube-api-access-47g8k\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.232504 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-config-data" (OuterVolumeSpecName: "config-data") pod "e12d04f7-b525-4969-9683-f2dc3e7a6466" (UID: "e12d04f7-b525-4969-9683-f2dc3e7a6466"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.251520 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.327779065 podStartE2EDuration="3.251501516s" podCreationTimestamp="2026-01-09 00:57:18 +0000 UTC" firstStartedPulling="2026-01-09 00:57:19.133203669 +0000 UTC m=+6109.444362615" lastFinishedPulling="2026-01-09 00:57:20.05692612 +0000 UTC m=+6110.368085066" observedRunningTime="2026-01-09 00:57:21.237510061 +0000 UTC m=+6111.548669007" watchObservedRunningTime="2026-01-09 00:57:21.251501516 +0000 UTC m=+6111.562660462" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.257488 4945 scope.go:117] "RemoveContainer" containerID="3314717e5d65ec3fbb9f3a33862e6398e630a6a50957e0e3c029a0f20dc03902" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.280953 4945 scope.go:117] "RemoveContainer" containerID="4aded64ae4c74c78824f3d096f921eb50e9d817b9d4cf65bc38cfc8b25016b02" Jan 09 00:57:21 crc kubenswrapper[4945]: E0109 00:57:21.281754 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aded64ae4c74c78824f3d096f921eb50e9d817b9d4cf65bc38cfc8b25016b02\": container with ID starting with 4aded64ae4c74c78824f3d096f921eb50e9d817b9d4cf65bc38cfc8b25016b02 not found: ID does not exist" containerID="4aded64ae4c74c78824f3d096f921eb50e9d817b9d4cf65bc38cfc8b25016b02" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.281814 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aded64ae4c74c78824f3d096f921eb50e9d817b9d4cf65bc38cfc8b25016b02"} err="failed to get container status \"4aded64ae4c74c78824f3d096f921eb50e9d817b9d4cf65bc38cfc8b25016b02\": rpc error: code = NotFound desc = could not find container \"4aded64ae4c74c78824f3d096f921eb50e9d817b9d4cf65bc38cfc8b25016b02\": container with ID starting with 
4aded64ae4c74c78824f3d096f921eb50e9d817b9d4cf65bc38cfc8b25016b02 not found: ID does not exist" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.281847 4945 scope.go:117] "RemoveContainer" containerID="3314717e5d65ec3fbb9f3a33862e6398e630a6a50957e0e3c029a0f20dc03902" Jan 09 00:57:21 crc kubenswrapper[4945]: E0109 00:57:21.282386 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3314717e5d65ec3fbb9f3a33862e6398e630a6a50957e0e3c029a0f20dc03902\": container with ID starting with 3314717e5d65ec3fbb9f3a33862e6398e630a6a50957e0e3c029a0f20dc03902 not found: ID does not exist" containerID="3314717e5d65ec3fbb9f3a33862e6398e630a6a50957e0e3c029a0f20dc03902" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.282418 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3314717e5d65ec3fbb9f3a33862e6398e630a6a50957e0e3c029a0f20dc03902"} err="failed to get container status \"3314717e5d65ec3fbb9f3a33862e6398e630a6a50957e0e3c029a0f20dc03902\": rpc error: code = NotFound desc = could not find container \"3314717e5d65ec3fbb9f3a33862e6398e630a6a50957e0e3c029a0f20dc03902\": container with ID starting with 3314717e5d65ec3fbb9f3a33862e6398e630a6a50957e0e3c029a0f20dc03902 not found: ID does not exist" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.328896 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12d04f7-b525-4969-9683-f2dc3e7a6466-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.422070 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.422533 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.425848 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.426115 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.498342 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.498628 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.499146 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.499221 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.526810 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.538954 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.597104 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.619248 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 09 00:57:21 
crc kubenswrapper[4945]: I0109 00:57:21.630668 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 09 00:57:21 crc kubenswrapper[4945]: E0109 00:57:21.631390 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12d04f7-b525-4969-9683-f2dc3e7a6466" containerName="cinder-api-log" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.631471 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12d04f7-b525-4969-9683-f2dc3e7a6466" containerName="cinder-api-log" Jan 09 00:57:21 crc kubenswrapper[4945]: E0109 00:57:21.631579 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12d04f7-b525-4969-9683-f2dc3e7a6466" containerName="cinder-api" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.631667 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12d04f7-b525-4969-9683-f2dc3e7a6466" containerName="cinder-api" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.631907 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12d04f7-b525-4969-9683-f2dc3e7a6466" containerName="cinder-api-log" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.632006 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12d04f7-b525-4969-9683-f2dc3e7a6466" containerName="cinder-api" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.633270 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.639600 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.683232 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.742603 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdjkq\" (UniqueName: \"kubernetes.io/projected/e6823cec-9eb1-4888-9870-a60c2be2e698-kube-api-access-hdjkq\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.742729 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6823cec-9eb1-4888-9870-a60c2be2e698-logs\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.742774 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6823cec-9eb1-4888-9870-a60c2be2e698-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.742811 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6823cec-9eb1-4888-9870-a60c2be2e698-config-data\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.742836 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e6823cec-9eb1-4888-9870-a60c2be2e698-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.742936 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6823cec-9eb1-4888-9870-a60c2be2e698-scripts\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.742976 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6823cec-9eb1-4888-9870-a60c2be2e698-config-data-custom\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.844541 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6823cec-9eb1-4888-9870-a60c2be2e698-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.844595 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6823cec-9eb1-4888-9870-a60c2be2e698-config-data\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.844628 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6823cec-9eb1-4888-9870-a60c2be2e698-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.844688 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6823cec-9eb1-4888-9870-a60c2be2e698-scripts\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.844684 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6823cec-9eb1-4888-9870-a60c2be2e698-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.844728 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6823cec-9eb1-4888-9870-a60c2be2e698-config-data-custom\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.844764 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdjkq\" (UniqueName: \"kubernetes.io/projected/e6823cec-9eb1-4888-9870-a60c2be2e698-kube-api-access-hdjkq\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.844818 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6823cec-9eb1-4888-9870-a60c2be2e698-logs\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.845604 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6823cec-9eb1-4888-9870-a60c2be2e698-logs\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.850228 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6823cec-9eb1-4888-9870-a60c2be2e698-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.851018 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6823cec-9eb1-4888-9870-a60c2be2e698-config-data-custom\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.851612 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6823cec-9eb1-4888-9870-a60c2be2e698-scripts\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.856323 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6823cec-9eb1-4888-9870-a60c2be2e698-config-data\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:21 crc kubenswrapper[4945]: I0109 00:57:21.865209 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdjkq\" (UniqueName: \"kubernetes.io/projected/e6823cec-9eb1-4888-9870-a60c2be2e698-kube-api-access-hdjkq\") pod \"cinder-api-0\" (UID: \"e6823cec-9eb1-4888-9870-a60c2be2e698\") " pod="openstack/cinder-api-0" Jan 09 00:57:22 crc kubenswrapper[4945]: I0109 00:57:22.010009 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 09 00:57:22 crc kubenswrapper[4945]: I0109 00:57:22.016213 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e12d04f7-b525-4969-9683-f2dc3e7a6466" path="/var/lib/kubelet/pods/e12d04f7-b525-4969-9683-f2dc3e7a6466/volumes" Jan 09 00:57:22 crc kubenswrapper[4945]: I0109 00:57:22.230651 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3ed788db-6586-4da2-896b-3efbc2ee48a9","Type":"ContainerStarted","Data":"f469570c1f0efa56dbccc3c942510a4adc5d33040fc05852a5e6d1e9ee38dfdf"} Jan 09 00:57:22 crc kubenswrapper[4945]: I0109 00:57:22.267552 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.429968626 podStartE2EDuration="4.267507566s" podCreationTimestamp="2026-01-09 00:57:18 +0000 UTC" firstStartedPulling="2026-01-09 00:57:19.827555095 +0000 UTC m=+6110.138714031" lastFinishedPulling="2026-01-09 00:57:20.665094025 +0000 UTC m=+6110.976252971" observedRunningTime="2026-01-09 00:57:22.259008127 +0000 UTC m=+6112.570167073" watchObservedRunningTime="2026-01-09 00:57:22.267507566 +0000 UTC m=+6112.578666512" Jan 09 00:57:22 crc kubenswrapper[4945]: I0109 00:57:22.534583 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 09 00:57:23 crc kubenswrapper[4945]: I0109 00:57:23.245047 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e6823cec-9eb1-4888-9870-a60c2be2e698","Type":"ContainerStarted","Data":"8801869be5fe944ebf1f55e284ea2dd7560a6470ca91eab936f4329a703b895f"} Jan 09 00:57:23 crc kubenswrapper[4945]: I0109 00:57:23.245561 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e6823cec-9eb1-4888-9870-a60c2be2e698","Type":"ContainerStarted","Data":"d2fa73fc3c8f384ecd1771c5c024377af5cf08046d01117c5499badc6cd9ca14"} Jan 09 00:57:23 crc kubenswrapper[4945]: I0109 00:57:23.583369 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:24 crc kubenswrapper[4945]: I0109 00:57:24.187416 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 09 00:57:24 crc kubenswrapper[4945]: I0109 00:57:24.258027 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e6823cec-9eb1-4888-9870-a60c2be2e698","Type":"ContainerStarted","Data":"c9817eef850b70b3ef9fea5d5d29fc1249c439e8a0a8c6571c012c31bcc1f9da"} Jan 09 00:57:24 crc kubenswrapper[4945]: I0109 00:57:24.283737 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.283713121 podStartE2EDuration="3.283713121s" podCreationTimestamp="2026-01-09 00:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:57:24.281149308 +0000 UTC m=+6114.592308264" watchObservedRunningTime="2026-01-09 00:57:24.283713121 +0000 UTC m=+6114.594872067" Jan 09 00:57:25 crc kubenswrapper[4945]: I0109 00:57:25.266802 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 09 00:57:26 crc kubenswrapper[4945]: I0109 00:57:26.257113 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 09 00:57:26 crc kubenswrapper[4945]: I0109 
00:57:26.325282 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 00:57:26 crc kubenswrapper[4945]: I0109 00:57:26.325529 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b86b0700-4264-4d8a-89ce-b8e82d0a2e19" containerName="cinder-scheduler" containerID="cri-o://00f13d88b90f8ec0d908c9b68727cf744c690e752806455cea730cb7ae2319e2" gracePeriod=30 Jan 09 00:57:26 crc kubenswrapper[4945]: I0109 00:57:26.326034 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b86b0700-4264-4d8a-89ce-b8e82d0a2e19" containerName="probe" containerID="cri-o://2fb16142759a5920f817d6ac12abf737701d005cf01cfa6f6a3b7156a998eb59" gracePeriod=30 Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.290370 4945 generic.go:334] "Generic (PLEG): container finished" podID="b86b0700-4264-4d8a-89ce-b8e82d0a2e19" containerID="2fb16142759a5920f817d6ac12abf737701d005cf01cfa6f6a3b7156a998eb59" exitCode=0 Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.290411 4945 generic.go:334] "Generic (PLEG): container finished" podID="b86b0700-4264-4d8a-89ce-b8e82d0a2e19" containerID="00f13d88b90f8ec0d908c9b68727cf744c690e752806455cea730cb7ae2319e2" exitCode=0 Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.290433 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b86b0700-4264-4d8a-89ce-b8e82d0a2e19","Type":"ContainerDied","Data":"2fb16142759a5920f817d6ac12abf737701d005cf01cfa6f6a3b7156a998eb59"} Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.290464 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b86b0700-4264-4d8a-89ce-b8e82d0a2e19","Type":"ContainerDied","Data":"00f13d88b90f8ec0d908c9b68727cf744c690e752806455cea730cb7ae2319e2"} Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.549757 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.588888 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-config-data-custom\") pod \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.588946 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnj92\" (UniqueName: \"kubernetes.io/projected/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-kube-api-access-wnj92\") pod \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.589008 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-etc-machine-id\") pod \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.589077 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-scripts\") pod \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.589093 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-config-data\") pod \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.589130 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-combined-ca-bundle\") pod \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\" (UID: \"b86b0700-4264-4d8a-89ce-b8e82d0a2e19\") " Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.589250 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b86b0700-4264-4d8a-89ce-b8e82d0a2e19" (UID: "b86b0700-4264-4d8a-89ce-b8e82d0a2e19"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.589502 4945 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.599053 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-kube-api-access-wnj92" (OuterVolumeSpecName: "kube-api-access-wnj92") pod "b86b0700-4264-4d8a-89ce-b8e82d0a2e19" (UID: "b86b0700-4264-4d8a-89ce-b8e82d0a2e19"). InnerVolumeSpecName "kube-api-access-wnj92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.599918 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-scripts" (OuterVolumeSpecName: "scripts") pod "b86b0700-4264-4d8a-89ce-b8e82d0a2e19" (UID: "b86b0700-4264-4d8a-89ce-b8e82d0a2e19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.601138 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b86b0700-4264-4d8a-89ce-b8e82d0a2e19" (UID: "b86b0700-4264-4d8a-89ce-b8e82d0a2e19"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.644217 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b86b0700-4264-4d8a-89ce-b8e82d0a2e19" (UID: "b86b0700-4264-4d8a-89ce-b8e82d0a2e19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.691869 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.691932 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.691949 4945 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.691963 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnj92\" (UniqueName: \"kubernetes.io/projected/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-kube-api-access-wnj92\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.698569 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-config-data" (OuterVolumeSpecName: "config-data") pod "b86b0700-4264-4d8a-89ce-b8e82d0a2e19" (UID: "b86b0700-4264-4d8a-89ce-b8e82d0a2e19"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 00:57:27 crc kubenswrapper[4945]: I0109 00:57:27.793375 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b86b0700-4264-4d8a-89ce-b8e82d0a2e19-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.306902 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b86b0700-4264-4d8a-89ce-b8e82d0a2e19","Type":"ContainerDied","Data":"c947b28a1570a19362047afb02adbf33a0ca7a98de28ea611aa9d94ff591e66f"} Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.307412 4945 scope.go:117] "RemoveContainer" containerID="2fb16142759a5920f817d6ac12abf737701d005cf01cfa6f6a3b7156a998eb59" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.307626 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.337760 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.343828 4945 scope.go:117] "RemoveContainer" containerID="00f13d88b90f8ec0d908c9b68727cf744c690e752806455cea730cb7ae2319e2" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.354767 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.372814 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 00:57:28 crc kubenswrapper[4945]: E0109 00:57:28.373952 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b86b0700-4264-4d8a-89ce-b8e82d0a2e19" containerName="cinder-scheduler" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.373979 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b86b0700-4264-4d8a-89ce-b8e82d0a2e19" containerName="cinder-scheduler" Jan 09 00:57:28 crc kubenswrapper[4945]: E0109 00:57:28.374023 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b86b0700-4264-4d8a-89ce-b8e82d0a2e19" containerName="probe" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.374032 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b86b0700-4264-4d8a-89ce-b8e82d0a2e19" containerName="probe" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.374274 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="b86b0700-4264-4d8a-89ce-b8e82d0a2e19" containerName="probe" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.374290 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="b86b0700-4264-4d8a-89ce-b8e82d0a2e19" containerName="cinder-scheduler" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.378729 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.381252 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.409807 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.507316 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e920ac41-940e-4b3e-a210-edddd414ec3f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.507437 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e920ac41-940e-4b3e-a210-edddd414ec3f-scripts\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.507493 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e920ac41-940e-4b3e-a210-edddd414ec3f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.507551 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zd57\" (UniqueName: \"kubernetes.io/projected/e920ac41-940e-4b3e-a210-edddd414ec3f-kube-api-access-2zd57\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.507575 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e920ac41-940e-4b3e-a210-edddd414ec3f-config-data\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.507595 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e920ac41-940e-4b3e-a210-edddd414ec3f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.609435 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zd57\" (UniqueName: \"kubernetes.io/projected/e920ac41-940e-4b3e-a210-edddd414ec3f-kube-api-access-2zd57\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.609506 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e920ac41-940e-4b3e-a210-edddd414ec3f-config-data\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.609542 4945 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e920ac41-940e-4b3e-a210-edddd414ec3f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.609614 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e920ac41-940e-4b3e-a210-edddd414ec3f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.609680 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e920ac41-940e-4b3e-a210-edddd414ec3f-scripts\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.609747 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e920ac41-940e-4b3e-a210-edddd414ec3f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.609942 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e920ac41-940e-4b3e-a210-edddd414ec3f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.617510 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e920ac41-940e-4b3e-a210-edddd414ec3f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.618329 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e920ac41-940e-4b3e-a210-edddd414ec3f-config-data\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.621651 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e920ac41-940e-4b3e-a210-edddd414ec3f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.625751 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e920ac41-940e-4b3e-a210-edddd414ec3f-scripts\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.640606 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zd57\" (UniqueName: \"kubernetes.io/projected/e920ac41-940e-4b3e-a210-edddd414ec3f-kube-api-access-2zd57\") pod \"cinder-scheduler-0\" (UID: \"e920ac41-940e-4b3e-a210-edddd414ec3f\") " pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 
crc kubenswrapper[4945]: I0109 00:57:28.741745 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 09 00:57:28 crc kubenswrapper[4945]: I0109 00:57:28.966587 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 09 00:57:29 crc kubenswrapper[4945]: I0109 00:57:29.186268 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 09 00:57:29 crc kubenswrapper[4945]: I0109 00:57:29.326879 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e920ac41-940e-4b3e-a210-edddd414ec3f","Type":"ContainerStarted","Data":"8b85a5144cbcb40b7261c90cce4d101da620397d7a2be4933183f40599151975"} Jan 09 00:57:29 crc kubenswrapper[4945]: I0109 00:57:29.449111 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 09 00:57:30 crc kubenswrapper[4945]: I0109 00:57:30.019911 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b86b0700-4264-4d8a-89ce-b8e82d0a2e19" path="/var/lib/kubelet/pods/b86b0700-4264-4d8a-89ce-b8e82d0a2e19/volumes" Jan 09 00:57:30 crc kubenswrapper[4945]: I0109 00:57:30.350881 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e920ac41-940e-4b3e-a210-edddd414ec3f","Type":"ContainerStarted","Data":"a6df709cadc82a7b117de3dbbaf97e3292223faf72dd2c8bc6d08b5f4772af72"} Jan 09 00:57:31 crc kubenswrapper[4945]: I0109 00:57:31.366241 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e920ac41-940e-4b3e-a210-edddd414ec3f","Type":"ContainerStarted","Data":"2e52f3cc1ea668b0ec6f5f36a8075c5b28da2d3c332bd523bd3498ec1fef544b"} Jan 09 00:57:31 crc kubenswrapper[4945]: I0109 00:57:31.399248 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.3992170760000002 podStartE2EDuration="3.399217076s" podCreationTimestamp="2026-01-09 00:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:57:31.391870495 +0000 UTC m=+6121.703029621" watchObservedRunningTime="2026-01-09 00:57:31.399217076 +0000 UTC m=+6121.710376062" Jan 09 00:57:33 crc kubenswrapper[4945]: I0109 00:57:33.742457 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 09 00:57:33 crc kubenswrapper[4945]: I0109 00:57:33.986713 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 09 00:57:38 crc kubenswrapper[4945]: I0109 00:57:38.947411 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 09 00:58:13 crc kubenswrapper[4945]: I0109 00:58:13.577926 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:58:13 crc kubenswrapper[4945]: I0109 00:58:13.578418 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:58:43 crc kubenswrapper[4945]: I0109 00:58:43.577625 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:58:43 crc kubenswrapper[4945]: I0109 00:58:43.579205 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:59:13 crc kubenswrapper[4945]: I0109 00:59:13.578303 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 00:59:13 crc kubenswrapper[4945]: I0109 00:59:13.578801 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 00:59:13 crc kubenswrapper[4945]: I0109 00:59:13.578850 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 09 00:59:13 crc kubenswrapper[4945]: I0109 00:59:13.579670 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eccb70fe4fbdfd4fa48564790297f305ce79c1abeb00e0431dd2d34f92ff2a95"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 00:59:13 crc kubenswrapper[4945]: I0109 00:59:13.579726 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://eccb70fe4fbdfd4fa48564790297f305ce79c1abeb00e0431dd2d34f92ff2a95" gracePeriod=600 Jan 09 00:59:14 crc kubenswrapper[4945]: I0109 00:59:14.515633 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="eccb70fe4fbdfd4fa48564790297f305ce79c1abeb00e0431dd2d34f92ff2a95" exitCode=0 Jan 09 00:59:14 crc kubenswrapper[4945]: I0109 00:59:14.515790 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"eccb70fe4fbdfd4fa48564790297f305ce79c1abeb00e0431dd2d34f92ff2a95"} Jan 09 00:59:14 crc kubenswrapper[4945]: I0109 00:59:14.516449 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" 
event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31"} Jan 09 00:59:14 crc kubenswrapper[4945]: I0109 00:59:14.516488 4945 scope.go:117] "RemoveContainer" containerID="b9c9795c654163a280807f5a31ed3c7063c75bc2cca08ce789a5d0365af6ffdb" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.003158 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-ht5ml"] Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.012492 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.015467 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-8ph5z" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.015534 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.016354 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-l579m"] Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.017788 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l579m" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.026344 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l579m"] Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.035886 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ht5ml"] Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.161475 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nq96\" (UniqueName: \"kubernetes.io/projected/bf226c11-b1c7-48c1-938e-2f6e96678644-kube-api-access-5nq96\") pod \"ovn-controller-l579m\" (UID: \"bf226c11-b1c7-48c1-938e-2f6e96678644\") " pod="openstack/ovn-controller-l579m" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.161526 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-var-lib\") pod \"ovn-controller-ovs-ht5ml\" (UID: \"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.161571 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-var-log\") pod \"ovn-controller-ovs-ht5ml\" (UID: \"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.161855 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bf226c11-b1c7-48c1-938e-2f6e96678644-var-run\") pod \"ovn-controller-l579m\" (UID: \"bf226c11-b1c7-48c1-938e-2f6e96678644\") " pod="openstack/ovn-controller-l579m" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.162423 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-scripts\") pod \"ovn-controller-ovs-ht5ml\" (UID: 
\"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.162562 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf226c11-b1c7-48c1-938e-2f6e96678644-scripts\") pod \"ovn-controller-l579m\" (UID: \"bf226c11-b1c7-48c1-938e-2f6e96678644\") " pod="openstack/ovn-controller-l579m" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.162597 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bf226c11-b1c7-48c1-938e-2f6e96678644-var-log-ovn\") pod \"ovn-controller-l579m\" (UID: \"bf226c11-b1c7-48c1-938e-2f6e96678644\") " pod="openstack/ovn-controller-l579m" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.162735 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-var-run\") pod \"ovn-controller-ovs-ht5ml\" (UID: \"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.162817 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-etc-ovs\") pod \"ovn-controller-ovs-ht5ml\" (UID: \"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.162927 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bf226c11-b1c7-48c1-938e-2f6e96678644-var-run-ovn\") pod \"ovn-controller-l579m\" (UID: \"bf226c11-b1c7-48c1-938e-2f6e96678644\") " pod="openstack/ovn-controller-l579m" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.163014 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5frq\" (UniqueName: \"kubernetes.io/projected/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-kube-api-access-c5frq\") pod \"ovn-controller-ovs-ht5ml\" (UID: \"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.264292 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bf226c11-b1c7-48c1-938e-2f6e96678644-var-run\") pod \"ovn-controller-l579m\" (UID: \"bf226c11-b1c7-48c1-938e-2f6e96678644\") " pod="openstack/ovn-controller-l579m" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.264383 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-scripts\") pod \"ovn-controller-ovs-ht5ml\" (UID: \"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.264412 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf226c11-b1c7-48c1-938e-2f6e96678644-scripts\") pod \"ovn-controller-l579m\" (UID: \"bf226c11-b1c7-48c1-938e-2f6e96678644\") " pod="openstack/ovn-controller-l579m" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 
00:59:21.264433 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bf226c11-b1c7-48c1-938e-2f6e96678644-var-log-ovn\") pod \"ovn-controller-l579m\" (UID: \"bf226c11-b1c7-48c1-938e-2f6e96678644\") " pod="openstack/ovn-controller-l579m" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.264483 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-var-run\") pod \"ovn-controller-ovs-ht5ml\" (UID: \"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.264516 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-etc-ovs\") pod \"ovn-controller-ovs-ht5ml\" (UID: \"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.264555 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bf226c11-b1c7-48c1-938e-2f6e96678644-var-run-ovn\") pod \"ovn-controller-l579m\" (UID: \"bf226c11-b1c7-48c1-938e-2f6e96678644\") " pod="openstack/ovn-controller-l579m" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.264592 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5frq\" (UniqueName: \"kubernetes.io/projected/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-kube-api-access-c5frq\") pod \"ovn-controller-ovs-ht5ml\" (UID: \"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.264639 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nq96\" (UniqueName: \"kubernetes.io/projected/bf226c11-b1c7-48c1-938e-2f6e96678644-kube-api-access-5nq96\") pod \"ovn-controller-l579m\" (UID: \"bf226c11-b1c7-48c1-938e-2f6e96678644\") " pod="openstack/ovn-controller-l579m" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.264660 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-var-lib\") pod \"ovn-controller-ovs-ht5ml\" (UID: \"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.264688 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-var-log\") pod \"ovn-controller-ovs-ht5ml\" (UID: \"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.264723 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bf226c11-b1c7-48c1-938e-2f6e96678644-var-run\") pod \"ovn-controller-l579m\" (UID: \"bf226c11-b1c7-48c1-938e-2f6e96678644\") " pod="openstack/ovn-controller-l579m" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.264811 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-var-log\") pod 
\"ovn-controller-ovs-ht5ml\" (UID: \"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.264849 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-etc-ovs\") pod \"ovn-controller-ovs-ht5ml\" (UID: \"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.264870 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bf226c11-b1c7-48c1-938e-2f6e96678644-var-run-ovn\") pod \"ovn-controller-l579m\" (UID: \"bf226c11-b1c7-48c1-938e-2f6e96678644\") " pod="openstack/ovn-controller-l579m" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.265081 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bf226c11-b1c7-48c1-938e-2f6e96678644-var-log-ovn\") pod \"ovn-controller-l579m\" (UID: \"bf226c11-b1c7-48c1-938e-2f6e96678644\") " pod="openstack/ovn-controller-l579m" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.265143 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-var-lib\") pod \"ovn-controller-ovs-ht5ml\" (UID: \"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.265233 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-var-run\") pod \"ovn-controller-ovs-ht5ml\" (UID: \"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.266821 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-scripts\") pod \"ovn-controller-ovs-ht5ml\" (UID: \"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.267508 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf226c11-b1c7-48c1-938e-2f6e96678644-scripts\") pod \"ovn-controller-l579m\" (UID: \"bf226c11-b1c7-48c1-938e-2f6e96678644\") " pod="openstack/ovn-controller-l579m" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.284551 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nq96\" (UniqueName: \"kubernetes.io/projected/bf226c11-b1c7-48c1-938e-2f6e96678644-kube-api-access-5nq96\") pod \"ovn-controller-l579m\" (UID: \"bf226c11-b1c7-48c1-938e-2f6e96678644\") " pod="openstack/ovn-controller-l579m" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.285916 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5frq\" (UniqueName: \"kubernetes.io/projected/6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d-kube-api-access-c5frq\") pod \"ovn-controller-ovs-ht5ml\" (UID: \"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d\") " pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.350574 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.362171 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l579m" Jan 09 00:59:21 crc kubenswrapper[4945]: I0109 00:59:21.853406 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l579m"] Jan 09 00:59:22 crc kubenswrapper[4945]: I0109 00:59:22.217501 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ht5ml"] Jan 09 00:59:22 crc kubenswrapper[4945]: W0109 00:59:22.218622 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b8f4cd7_57cf_4ab5_a1a7_55e2cfc3be1d.slice/crio-15c86973502efdf0c38e0b630e75fca075488cce007a3baf1b7ba6a69026ed6b WatchSource:0}: Error finding container 15c86973502efdf0c38e0b630e75fca075488cce007a3baf1b7ba6a69026ed6b: Status 404 returned error can't find the container with id 15c86973502efdf0c38e0b630e75fca075488cce007a3baf1b7ba6a69026ed6b Jan 09 00:59:22 crc kubenswrapper[4945]: I0109 00:59:22.629680 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ht5ml" event={"ID":"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d","Type":"ContainerStarted","Data":"91331e3e96f7073ca3e36648ee420ca5712da681188018e614c0db407ca079e6"} Jan 09 00:59:22 crc kubenswrapper[4945]: I0109 00:59:22.629950 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ht5ml" event={"ID":"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d","Type":"ContainerStarted","Data":"15c86973502efdf0c38e0b630e75fca075488cce007a3baf1b7ba6a69026ed6b"} Jan 09 00:59:22 crc kubenswrapper[4945]: I0109 00:59:22.634770 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l579m" event={"ID":"bf226c11-b1c7-48c1-938e-2f6e96678644","Type":"ContainerStarted","Data":"9d6aabb9ca84ba40c4d83a18ef8d6484e1e38eb668de01f8de4308e1685102df"} Jan 09 00:59:22 crc kubenswrapper[4945]: I0109 00:59:22.634813 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l579m" event={"ID":"bf226c11-b1c7-48c1-938e-2f6e96678644","Type":"ContainerStarted","Data":"2842ebe8867f0c4aa5b595dcdbc305a0bf21c5c36c6c87810c8b455a76c56a69"} Jan 09 00:59:22 crc kubenswrapper[4945]: I0109 00:59:22.634924 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-l579m" Jan 09 00:59:22 crc kubenswrapper[4945]: I0109 00:59:22.663354 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-l579m" podStartSLOduration=2.663331033 podStartE2EDuration="2.663331033s" podCreationTimestamp="2026-01-09 00:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:59:22.660048472 +0000 UTC m=+6232.971207418" watchObservedRunningTime="2026-01-09 00:59:22.663331033 +0000 UTC m=+6232.974489979" Jan 09 00:59:22 crc kubenswrapper[4945]: I0109 00:59:22.810821 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-bxwnn"] Jan 09 00:59:22 crc kubenswrapper[4945]: I0109 00:59:22.811915 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-bxwnn" Jan 09 00:59:22 crc kubenswrapper[4945]: I0109 00:59:22.822731 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-bxwnn"] Jan 09 00:59:22 crc kubenswrapper[4945]: I0109 00:59:22.897739 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4efab81a-6b40-47bd-b236-8039779c4933-operator-scripts\") pod \"octavia-db-create-bxwnn\" (UID: \"4efab81a-6b40-47bd-b236-8039779c4933\") " pod="openstack/octavia-db-create-bxwnn" Jan 09 00:59:22 crc kubenswrapper[4945]: I0109 00:59:22.898143 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njqmn\" (UniqueName: \"kubernetes.io/projected/4efab81a-6b40-47bd-b236-8039779c4933-kube-api-access-njqmn\") pod \"octavia-db-create-bxwnn\" (UID: \"4efab81a-6b40-47bd-b236-8039779c4933\") " pod="openstack/octavia-db-create-bxwnn" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.000597 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4efab81a-6b40-47bd-b236-8039779c4933-operator-scripts\") pod \"octavia-db-create-bxwnn\" (UID: \"4efab81a-6b40-47bd-b236-8039779c4933\") " pod="openstack/octavia-db-create-bxwnn" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.000636 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4efab81a-6b40-47bd-b236-8039779c4933-operator-scripts\") pod \"octavia-db-create-bxwnn\" (UID: \"4efab81a-6b40-47bd-b236-8039779c4933\") " pod="openstack/octavia-db-create-bxwnn" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.000687 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njqmn\" (UniqueName: \"kubernetes.io/projected/4efab81a-6b40-47bd-b236-8039779c4933-kube-api-access-njqmn\") pod \"octavia-db-create-bxwnn\" (UID: \"4efab81a-6b40-47bd-b236-8039779c4933\") " pod="openstack/octavia-db-create-bxwnn" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.030732 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njqmn\" (UniqueName: \"kubernetes.io/projected/4efab81a-6b40-47bd-b236-8039779c4933-kube-api-access-njqmn\") pod \"octavia-db-create-bxwnn\" (UID: \"4efab81a-6b40-47bd-b236-8039779c4933\") " pod="openstack/octavia-db-create-bxwnn" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.127585 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-bxwnn" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.486732 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xvwk7"] Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.489123 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xvwk7" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.493569 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.507259 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xvwk7"] Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.582430 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-bxwnn"] Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.623249 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/873fe3a7-08d5-4c2f-866b-da5d92ee950a-ovn-rundir\") pod \"ovn-controller-metrics-xvwk7\" (UID: \"873fe3a7-08d5-4c2f-866b-da5d92ee950a\") " pod="openstack/ovn-controller-metrics-xvwk7" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.623318 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd6rc\" (UniqueName: \"kubernetes.io/projected/873fe3a7-08d5-4c2f-866b-da5d92ee950a-kube-api-access-fd6rc\") pod \"ovn-controller-metrics-xvwk7\" (UID: \"873fe3a7-08d5-4c2f-866b-da5d92ee950a\") " pod="openstack/ovn-controller-metrics-xvwk7" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.623573 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/873fe3a7-08d5-4c2f-866b-da5d92ee950a-config\") pod \"ovn-controller-metrics-xvwk7\" (UID: \"873fe3a7-08d5-4c2f-866b-da5d92ee950a\") " pod="openstack/ovn-controller-metrics-xvwk7" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.623817 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/873fe3a7-08d5-4c2f-866b-da5d92ee950a-ovs-rundir\") pod \"ovn-controller-metrics-xvwk7\" (UID: \"873fe3a7-08d5-4c2f-866b-da5d92ee950a\") " pod="openstack/ovn-controller-metrics-xvwk7" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.654514 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-bxwnn" event={"ID":"4efab81a-6b40-47bd-b236-8039779c4933","Type":"ContainerStarted","Data":"80457004cb6e7499e8805ef22bda884c0d37a836d3ecb9c629083e3eb0a2aa02"} Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.657056 4945 generic.go:334] "Generic (PLEG): container finished" podID="6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d" containerID="91331e3e96f7073ca3e36648ee420ca5712da681188018e614c0db407ca079e6" exitCode=0 Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.657136 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ht5ml" event={"ID":"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d","Type":"ContainerDied","Data":"91331e3e96f7073ca3e36648ee420ca5712da681188018e614c0db407ca079e6"} Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.727245 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/873fe3a7-08d5-4c2f-866b-da5d92ee950a-config\") pod \"ovn-controller-metrics-xvwk7\" (UID: \"873fe3a7-08d5-4c2f-866b-da5d92ee950a\") " pod="openstack/ovn-controller-metrics-xvwk7" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.728161 4945 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/873fe3a7-08d5-4c2f-866b-da5d92ee950a-config\") pod \"ovn-controller-metrics-xvwk7\" (UID: \"873fe3a7-08d5-4c2f-866b-da5d92ee950a\") " pod="openstack/ovn-controller-metrics-xvwk7" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.728520 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/873fe3a7-08d5-4c2f-866b-da5d92ee950a-ovs-rundir\") pod \"ovn-controller-metrics-xvwk7\" (UID: \"873fe3a7-08d5-4c2f-866b-da5d92ee950a\") " pod="openstack/ovn-controller-metrics-xvwk7" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.728771 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/873fe3a7-08d5-4c2f-866b-da5d92ee950a-ovn-rundir\") pod \"ovn-controller-metrics-xvwk7\" (UID: \"873fe3a7-08d5-4c2f-866b-da5d92ee950a\") " pod="openstack/ovn-controller-metrics-xvwk7" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.728812 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd6rc\" (UniqueName: \"kubernetes.io/projected/873fe3a7-08d5-4c2f-866b-da5d92ee950a-kube-api-access-fd6rc\") pod \"ovn-controller-metrics-xvwk7\" (UID: \"873fe3a7-08d5-4c2f-866b-da5d92ee950a\") " pod="openstack/ovn-controller-metrics-xvwk7" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.729231 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/873fe3a7-08d5-4c2f-866b-da5d92ee950a-ovs-rundir\") pod \"ovn-controller-metrics-xvwk7\" (UID: \"873fe3a7-08d5-4c2f-866b-da5d92ee950a\") " pod="openstack/ovn-controller-metrics-xvwk7" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.729679 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/873fe3a7-08d5-4c2f-866b-da5d92ee950a-ovn-rundir\") pod \"ovn-controller-metrics-xvwk7\" (UID: \"873fe3a7-08d5-4c2f-866b-da5d92ee950a\") " pod="openstack/ovn-controller-metrics-xvwk7" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.751838 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd6rc\" (UniqueName: \"kubernetes.io/projected/873fe3a7-08d5-4c2f-866b-da5d92ee950a-kube-api-access-fd6rc\") pod \"ovn-controller-metrics-xvwk7\" (UID: \"873fe3a7-08d5-4c2f-866b-da5d92ee950a\") " pod="openstack/ovn-controller-metrics-xvwk7" Jan 09 00:59:23 crc kubenswrapper[4945]: I0109 00:59:23.824295 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xvwk7" Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.052018 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-wtdj8"] Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.063909 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-wtdj8"] Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.210700 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-f2a5-account-create-update-25zgt"] Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.213146 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-f2a5-account-create-update-25zgt" Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.217030 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.224906 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-f2a5-account-create-update-25zgt"] Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.290978 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xvwk7"] Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.341464 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn6vr\" (UniqueName: \"kubernetes.io/projected/e1f69a96-868a-468d-9e07-53445536bc34-kube-api-access-rn6vr\") pod \"octavia-f2a5-account-create-update-25zgt\" (UID: \"e1f69a96-868a-468d-9e07-53445536bc34\") " pod="openstack/octavia-f2a5-account-create-update-25zgt" Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.341524 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f69a96-868a-468d-9e07-53445536bc34-operator-scripts\") pod \"octavia-f2a5-account-create-update-25zgt\" (UID: \"e1f69a96-868a-468d-9e07-53445536bc34\") " pod="openstack/octavia-f2a5-account-create-update-25zgt" Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.442845 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn6vr\" (UniqueName: \"kubernetes.io/projected/e1f69a96-868a-468d-9e07-53445536bc34-kube-api-access-rn6vr\") pod \"octavia-f2a5-account-create-update-25zgt\" (UID: \"e1f69a96-868a-468d-9e07-53445536bc34\") " pod="openstack/octavia-f2a5-account-create-update-25zgt" Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.442908 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f69a96-868a-468d-9e07-53445536bc34-operator-scripts\") pod \"octavia-f2a5-account-create-update-25zgt\" (UID: \"e1f69a96-868a-468d-9e07-53445536bc34\") " pod="openstack/octavia-f2a5-account-create-update-25zgt" Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.443921 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f69a96-868a-468d-9e07-53445536bc34-operator-scripts\") pod \"octavia-f2a5-account-create-update-25zgt\" (UID: \"e1f69a96-868a-468d-9e07-53445536bc34\") " pod="openstack/octavia-f2a5-account-create-update-25zgt" Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.465750 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn6vr\" (UniqueName: \"kubernetes.io/projected/e1f69a96-868a-468d-9e07-53445536bc34-kube-api-access-rn6vr\") pod \"octavia-f2a5-account-create-update-25zgt\" (UID: \"e1f69a96-868a-468d-9e07-53445536bc34\") " pod="openstack/octavia-f2a5-account-create-update-25zgt" Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.552588 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-f2a5-account-create-update-25zgt" Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.676715 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xvwk7" event={"ID":"873fe3a7-08d5-4c2f-866b-da5d92ee950a","Type":"ContainerStarted","Data":"2c68023ce02b88d1ee621b7590cb23d934b1e58ccf7ffccbd0e2abe611d2d8db"} Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.676792 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xvwk7" event={"ID":"873fe3a7-08d5-4c2f-866b-da5d92ee950a","Type":"ContainerStarted","Data":"1f717c3abd6431b4a12a80780b3291bdc912fa206a5c5f467c541a88e2d87fd5"} Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.683306 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ht5ml" event={"ID":"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d","Type":"ContainerStarted","Data":"3861ad297615f81b451e8f35f7112120be7a1466d7b79edcc6f38e6ea803b09e"} Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.684583 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ht5ml" event={"ID":"6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d","Type":"ContainerStarted","Data":"79264cfcfaadb0729c786786f6203e0e230158dc0a424530101ea0865e452851"} Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.684693 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.684775 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.686312 4945 generic.go:334] "Generic (PLEG): container finished" podID="4efab81a-6b40-47bd-b236-8039779c4933" containerID="2bea1279cb454f01584a896a6aae361c992fe4f7d329b8f6c949b50c490f1e6a" exitCode=0 Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.686437 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-bxwnn" event={"ID":"4efab81a-6b40-47bd-b236-8039779c4933","Type":"ContainerDied","Data":"2bea1279cb454f01584a896a6aae361c992fe4f7d329b8f6c949b50c490f1e6a"} Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.712798 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-xvwk7" podStartSLOduration=1.712774024 podStartE2EDuration="1.712774024s" podCreationTimestamp="2026-01-09 00:59:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:59:24.701592379 +0000 UTC m=+6235.012751325" watchObservedRunningTime="2026-01-09 00:59:24.712774024 +0000 UTC m=+6235.023932970" Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.858969 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-ht5ml" podStartSLOduration=4.8589473210000005 podStartE2EDuration="4.858947321s" podCreationTimestamp="2026-01-09 00:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 00:59:24.844251109 +0000 UTC m=+6235.155410055" watchObservedRunningTime="2026-01-09 00:59:24.858947321 +0000 UTC m=+6235.170106267" Jan 09 00:59:24 crc kubenswrapper[4945]: I0109 00:59:24.985544 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/octavia-f2a5-account-create-update-25zgt"] Jan 09 00:59:25 crc kubenswrapper[4945]: I0109 00:59:25.099232 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d074-account-create-update-6jvq2"] Jan 09 00:59:25 crc kubenswrapper[4945]: I0109 00:59:25.112082 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d074-account-create-update-6jvq2"] Jan 09 00:59:25 crc kubenswrapper[4945]: I0109 00:59:25.699440 4945 generic.go:334] "Generic (PLEG): container finished" podID="e1f69a96-868a-468d-9e07-53445536bc34" containerID="f4101c5d14e8a616e90b73822f5e90a04b1a8fcea1230909ab61ce112c26279f" exitCode=0 Jan 09 00:59:25 crc kubenswrapper[4945]: I0109 00:59:25.699858 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-f2a5-account-create-update-25zgt" event={"ID":"e1f69a96-868a-468d-9e07-53445536bc34","Type":"ContainerDied","Data":"f4101c5d14e8a616e90b73822f5e90a04b1a8fcea1230909ab61ce112c26279f"} Jan 09 00:59:25 crc kubenswrapper[4945]: I0109 00:59:25.700338 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-f2a5-account-create-update-25zgt" event={"ID":"e1f69a96-868a-468d-9e07-53445536bc34","Type":"ContainerStarted","Data":"45ae7c298bc3d4f5c489bfd7668eca4e3fe79d6ef50860d97bb440e3d27e6564"} Jan 09 00:59:26 crc kubenswrapper[4945]: I0109 00:59:26.010674 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e3de133-2843-4b80-98a5-a59edc83e4f5" path="/var/lib/kubelet/pods/3e3de133-2843-4b80-98a5-a59edc83e4f5/volumes" Jan 09 00:59:26 crc kubenswrapper[4945]: I0109 00:59:26.012164 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2db8755-cab9-49ac-a3af-52c1cbe036a1" path="/var/lib/kubelet/pods/a2db8755-cab9-49ac-a3af-52c1cbe036a1/volumes" Jan 09 00:59:26 crc kubenswrapper[4945]: I0109 00:59:26.035220 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-bxwnn" Jan 09 00:59:26 crc kubenswrapper[4945]: I0109 00:59:26.094100 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njqmn\" (UniqueName: \"kubernetes.io/projected/4efab81a-6b40-47bd-b236-8039779c4933-kube-api-access-njqmn\") pod \"4efab81a-6b40-47bd-b236-8039779c4933\" (UID: \"4efab81a-6b40-47bd-b236-8039779c4933\") " Jan 09 00:59:26 crc kubenswrapper[4945]: I0109 00:59:26.094319 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4efab81a-6b40-47bd-b236-8039779c4933-operator-scripts\") pod \"4efab81a-6b40-47bd-b236-8039779c4933\" (UID: \"4efab81a-6b40-47bd-b236-8039779c4933\") " Jan 09 00:59:26 crc kubenswrapper[4945]: I0109 00:59:26.095840 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4efab81a-6b40-47bd-b236-8039779c4933-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4efab81a-6b40-47bd-b236-8039779c4933" (UID: "4efab81a-6b40-47bd-b236-8039779c4933"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:59:26 crc kubenswrapper[4945]: I0109 00:59:26.100533 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4efab81a-6b40-47bd-b236-8039779c4933-kube-api-access-njqmn" (OuterVolumeSpecName: "kube-api-access-njqmn") pod "4efab81a-6b40-47bd-b236-8039779c4933" (UID: "4efab81a-6b40-47bd-b236-8039779c4933"). 
InnerVolumeSpecName "kube-api-access-njqmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:59:26 crc kubenswrapper[4945]: I0109 00:59:26.196083 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4efab81a-6b40-47bd-b236-8039779c4933-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:59:26 crc kubenswrapper[4945]: I0109 00:59:26.196118 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njqmn\" (UniqueName: \"kubernetes.io/projected/4efab81a-6b40-47bd-b236-8039779c4933-kube-api-access-njqmn\") on node \"crc\" DevicePath \"\"" Jan 09 00:59:26 crc kubenswrapper[4945]: I0109 00:59:26.714912 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-bxwnn" event={"ID":"4efab81a-6b40-47bd-b236-8039779c4933","Type":"ContainerDied","Data":"80457004cb6e7499e8805ef22bda884c0d37a836d3ecb9c629083e3eb0a2aa02"} Jan 09 00:59:26 crc kubenswrapper[4945]: I0109 00:59:26.715390 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80457004cb6e7499e8805ef22bda884c0d37a836d3ecb9c629083e3eb0a2aa02" Jan 09 00:59:26 crc kubenswrapper[4945]: I0109 00:59:26.715253 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-bxwnn" Jan 09 00:59:27 crc kubenswrapper[4945]: I0109 00:59:27.064634 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-f2a5-account-create-update-25zgt" Jan 09 00:59:27 crc kubenswrapper[4945]: I0109 00:59:27.111126 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f69a96-868a-468d-9e07-53445536bc34-operator-scripts\") pod \"e1f69a96-868a-468d-9e07-53445536bc34\" (UID: \"e1f69a96-868a-468d-9e07-53445536bc34\") " Jan 09 00:59:27 crc kubenswrapper[4945]: I0109 00:59:27.111352 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn6vr\" (UniqueName: \"kubernetes.io/projected/e1f69a96-868a-468d-9e07-53445536bc34-kube-api-access-rn6vr\") pod \"e1f69a96-868a-468d-9e07-53445536bc34\" (UID: \"e1f69a96-868a-468d-9e07-53445536bc34\") " Jan 09 00:59:27 crc kubenswrapper[4945]: I0109 00:59:27.112454 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1f69a96-868a-468d-9e07-53445536bc34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1f69a96-868a-468d-9e07-53445536bc34" (UID: "e1f69a96-868a-468d-9e07-53445536bc34"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:59:27 crc kubenswrapper[4945]: I0109 00:59:27.117215 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f69a96-868a-468d-9e07-53445536bc34-kube-api-access-rn6vr" (OuterVolumeSpecName: "kube-api-access-rn6vr") pod "e1f69a96-868a-468d-9e07-53445536bc34" (UID: "e1f69a96-868a-468d-9e07-53445536bc34"). InnerVolumeSpecName "kube-api-access-rn6vr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:59:27 crc kubenswrapper[4945]: I0109 00:59:27.213618 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn6vr\" (UniqueName: \"kubernetes.io/projected/e1f69a96-868a-468d-9e07-53445536bc34-kube-api-access-rn6vr\") on node \"crc\" DevicePath \"\"" Jan 09 00:59:27 crc kubenswrapper[4945]: I0109 00:59:27.213650 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1f69a96-868a-468d-9e07-53445536bc34-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:59:27 crc kubenswrapper[4945]: I0109 00:59:27.725503 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-f2a5-account-create-update-25zgt" event={"ID":"e1f69a96-868a-468d-9e07-53445536bc34","Type":"ContainerDied","Data":"45ae7c298bc3d4f5c489bfd7668eca4e3fe79d6ef50860d97bb440e3d27e6564"} Jan 09 00:59:27 crc kubenswrapper[4945]: I0109 00:59:27.726820 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45ae7c298bc3d4f5c489bfd7668eca4e3fe79d6ef50860d97bb440e3d27e6564" Jan 09 00:59:27 crc kubenswrapper[4945]: I0109 00:59:27.725587 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-f2a5-account-create-update-25zgt" Jan 09 00:59:29 crc kubenswrapper[4945]: I0109 00:59:29.841341 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-l6hdm"] Jan 09 00:59:29 crc kubenswrapper[4945]: E0109 00:59:29.842100 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f69a96-868a-468d-9e07-53445536bc34" containerName="mariadb-account-create-update" Jan 09 00:59:29 crc kubenswrapper[4945]: I0109 00:59:29.842114 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f69a96-868a-468d-9e07-53445536bc34" containerName="mariadb-account-create-update" Jan 09 00:59:29 crc kubenswrapper[4945]: E0109 00:59:29.842127 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4efab81a-6b40-47bd-b236-8039779c4933" containerName="mariadb-database-create" Jan 09 00:59:29 crc kubenswrapper[4945]: I0109 00:59:29.842134 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="4efab81a-6b40-47bd-b236-8039779c4933" containerName="mariadb-database-create" Jan 09 00:59:29 crc kubenswrapper[4945]: I0109 00:59:29.842351 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f69a96-868a-468d-9e07-53445536bc34" containerName="mariadb-account-create-update" Jan 09 00:59:29 crc kubenswrapper[4945]: I0109 00:59:29.842380 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="4efab81a-6b40-47bd-b236-8039779c4933" containerName="mariadb-database-create" Jan 09 00:59:29 crc kubenswrapper[4945]: I0109 00:59:29.843099 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-l6hdm" Jan 09 00:59:29 crc kubenswrapper[4945]: I0109 00:59:29.849817 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-l6hdm"] Jan 09 00:59:29 crc kubenswrapper[4945]: I0109 00:59:29.877688 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrkc6\" (UniqueName: \"kubernetes.io/projected/a9c3b70e-849b-45cb-8ced-891e7755cda5-kube-api-access-hrkc6\") pod \"octavia-persistence-db-create-l6hdm\" (UID: \"a9c3b70e-849b-45cb-8ced-891e7755cda5\") " pod="openstack/octavia-persistence-db-create-l6hdm" Jan 09 00:59:29 crc kubenswrapper[4945]: I0109 00:59:29.877777 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c3b70e-849b-45cb-8ced-891e7755cda5-operator-scripts\") pod \"octavia-persistence-db-create-l6hdm\" (UID: \"a9c3b70e-849b-45cb-8ced-891e7755cda5\") " pod="openstack/octavia-persistence-db-create-l6hdm" Jan 09 00:59:29 crc kubenswrapper[4945]: I0109 00:59:29.979906 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c3b70e-849b-45cb-8ced-891e7755cda5-operator-scripts\") pod \"octavia-persistence-db-create-l6hdm\" (UID: \"a9c3b70e-849b-45cb-8ced-891e7755cda5\") " pod="openstack/octavia-persistence-db-create-l6hdm" Jan 09 00:59:29 crc kubenswrapper[4945]: I0109 00:59:29.980223 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrkc6\" (UniqueName: \"kubernetes.io/projected/a9c3b70e-849b-45cb-8ced-891e7755cda5-kube-api-access-hrkc6\") pod \"octavia-persistence-db-create-l6hdm\" (UID: \"a9c3b70e-849b-45cb-8ced-891e7755cda5\") " pod="openstack/octavia-persistence-db-create-l6hdm" Jan 09 00:59:29 crc kubenswrapper[4945]: I0109 00:59:29.980809 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c3b70e-849b-45cb-8ced-891e7755cda5-operator-scripts\") pod \"octavia-persistence-db-create-l6hdm\" (UID: \"a9c3b70e-849b-45cb-8ced-891e7755cda5\") " pod="openstack/octavia-persistence-db-create-l6hdm" Jan 09 00:59:30 crc kubenswrapper[4945]: I0109 00:59:29.999118 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrkc6\" (UniqueName: \"kubernetes.io/projected/a9c3b70e-849b-45cb-8ced-891e7755cda5-kube-api-access-hrkc6\") pod \"octavia-persistence-db-create-l6hdm\" (UID: \"a9c3b70e-849b-45cb-8ced-891e7755cda5\") " pod="openstack/octavia-persistence-db-create-l6hdm" Jan 09 00:59:30 crc kubenswrapper[4945]: I0109 00:59:30.207870 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-l6hdm" Jan 09 00:59:30 crc kubenswrapper[4945]: W0109 00:59:30.630983 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9c3b70e_849b_45cb_8ced_891e7755cda5.slice/crio-06e21d095ae1986e6673176157c6f31a8be4be8e124fdd741eadd56fd3aaf347 WatchSource:0}: Error finding container 06e21d095ae1986e6673176157c6f31a8be4be8e124fdd741eadd56fd3aaf347: Status 404 returned error can't find the container with id 06e21d095ae1986e6673176157c6f31a8be4be8e124fdd741eadd56fd3aaf347 Jan 09 00:59:30 crc kubenswrapper[4945]: I0109 00:59:30.639744 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-l6hdm"] Jan 09 00:59:30 crc kubenswrapper[4945]: I0109 00:59:30.762832 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-l6hdm" event={"ID":"a9c3b70e-849b-45cb-8ced-891e7755cda5","Type":"ContainerStarted","Data":"06e21d095ae1986e6673176157c6f31a8be4be8e124fdd741eadd56fd3aaf347"} Jan 09 00:59:30 crc kubenswrapper[4945]: I0109 00:59:30.857007 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db3e-account-create-update-hkwdz"] Jan 09 00:59:30 crc kubenswrapper[4945]: I0109 00:59:30.858975 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db3e-account-create-update-hkwdz" Jan 09 00:59:30 crc kubenswrapper[4945]: I0109 00:59:30.862340 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Jan 09 00:59:30 crc kubenswrapper[4945]: I0109 00:59:30.868712 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db3e-account-create-update-hkwdz"] Jan 09 00:59:30 crc kubenswrapper[4945]: I0109 00:59:30.899346 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4442f9cc-03ad-40e5-b649-d397e7938e98-operator-scripts\") pod \"octavia-db3e-account-create-update-hkwdz\" (UID: \"4442f9cc-03ad-40e5-b649-d397e7938e98\") " pod="openstack/octavia-db3e-account-create-update-hkwdz" Jan 09 00:59:30 crc kubenswrapper[4945]: I0109 00:59:30.899412 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q99gw\" (UniqueName: \"kubernetes.io/projected/4442f9cc-03ad-40e5-b649-d397e7938e98-kube-api-access-q99gw\") pod \"octavia-db3e-account-create-update-hkwdz\" (UID: \"4442f9cc-03ad-40e5-b649-d397e7938e98\") " pod="openstack/octavia-db3e-account-create-update-hkwdz" Jan 09 00:59:31 crc kubenswrapper[4945]: I0109 00:59:31.001274 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4442f9cc-03ad-40e5-b649-d397e7938e98-operator-scripts\") pod \"octavia-db3e-account-create-update-hkwdz\" (UID: \"4442f9cc-03ad-40e5-b649-d397e7938e98\") " pod="openstack/octavia-db3e-account-create-update-hkwdz" Jan 09 00:59:31 crc kubenswrapper[4945]: I0109 00:59:31.001393 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q99gw\" (UniqueName: \"kubernetes.io/projected/4442f9cc-03ad-40e5-b649-d397e7938e98-kube-api-access-q99gw\") pod \"octavia-db3e-account-create-update-hkwdz\" (UID: \"4442f9cc-03ad-40e5-b649-d397e7938e98\") " 
pod="openstack/octavia-db3e-account-create-update-hkwdz" Jan 09 00:59:31 crc kubenswrapper[4945]: I0109 00:59:31.002079 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4442f9cc-03ad-40e5-b649-d397e7938e98-operator-scripts\") pod \"octavia-db3e-account-create-update-hkwdz\" (UID: \"4442f9cc-03ad-40e5-b649-d397e7938e98\") " pod="openstack/octavia-db3e-account-create-update-hkwdz" Jan 09 00:59:31 crc kubenswrapper[4945]: I0109 00:59:31.030482 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q99gw\" (UniqueName: \"kubernetes.io/projected/4442f9cc-03ad-40e5-b649-d397e7938e98-kube-api-access-q99gw\") pod \"octavia-db3e-account-create-update-hkwdz\" (UID: \"4442f9cc-03ad-40e5-b649-d397e7938e98\") " pod="openstack/octavia-db3e-account-create-update-hkwdz" Jan 09 00:59:31 crc kubenswrapper[4945]: I0109 00:59:31.041406 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-pdj57"] Jan 09 00:59:31 crc kubenswrapper[4945]: I0109 00:59:31.050909 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-pdj57"] Jan 09 00:59:31 crc kubenswrapper[4945]: I0109 00:59:31.188446 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db3e-account-create-update-hkwdz" Jan 09 00:59:31 crc kubenswrapper[4945]: I0109 00:59:31.643128 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db3e-account-create-update-hkwdz"] Jan 09 00:59:31 crc kubenswrapper[4945]: I0109 00:59:31.772607 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db3e-account-create-update-hkwdz" event={"ID":"4442f9cc-03ad-40e5-b649-d397e7938e98","Type":"ContainerStarted","Data":"0c0be393e9283628889ecb17c843e6f2fd1f0457d4782524c75f0861c59fb94d"} Jan 09 00:59:31 crc kubenswrapper[4945]: I0109 00:59:31.773849 4945 generic.go:334] "Generic (PLEG): container finished" podID="a9c3b70e-849b-45cb-8ced-891e7755cda5" containerID="a0fb92e6c3f6c28d07bd9953eeff44ffea3391ae8ebe5c317c1024cdfc8ef3b4" exitCode=0 Jan 09 00:59:31 crc kubenswrapper[4945]: I0109 00:59:31.773879 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-l6hdm" event={"ID":"a9c3b70e-849b-45cb-8ced-891e7755cda5","Type":"ContainerDied","Data":"a0fb92e6c3f6c28d07bd9953eeff44ffea3391ae8ebe5c317c1024cdfc8ef3b4"} Jan 09 00:59:32 crc kubenswrapper[4945]: I0109 00:59:32.012026 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c3356b2-a1e4-444c-83eb-8ce5b717d99c" path="/var/lib/kubelet/pods/9c3356b2-a1e4-444c-83eb-8ce5b717d99c/volumes" Jan 09 00:59:32 crc kubenswrapper[4945]: I0109 00:59:32.786822 4945 generic.go:334] "Generic (PLEG): container finished" podID="4442f9cc-03ad-40e5-b649-d397e7938e98" containerID="7f1c62919922dd47c242aa63d0d8290af7c95f99b731affc6df736504ad4b96e" exitCode=0 Jan 09 00:59:32 crc kubenswrapper[4945]: I0109 00:59:32.787125 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db3e-account-create-update-hkwdz" event={"ID":"4442f9cc-03ad-40e5-b649-d397e7938e98","Type":"ContainerDied","Data":"7f1c62919922dd47c242aa63d0d8290af7c95f99b731affc6df736504ad4b96e"} Jan 09 00:59:33 crc kubenswrapper[4945]: I0109 00:59:33.169921 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-l6hdm" Jan 09 00:59:33 crc kubenswrapper[4945]: I0109 00:59:33.349911 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrkc6\" (UniqueName: \"kubernetes.io/projected/a9c3b70e-849b-45cb-8ced-891e7755cda5-kube-api-access-hrkc6\") pod \"a9c3b70e-849b-45cb-8ced-891e7755cda5\" (UID: \"a9c3b70e-849b-45cb-8ced-891e7755cda5\") " Jan 09 00:59:33 crc kubenswrapper[4945]: I0109 00:59:33.349984 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c3b70e-849b-45cb-8ced-891e7755cda5-operator-scripts\") pod \"a9c3b70e-849b-45cb-8ced-891e7755cda5\" (UID: \"a9c3b70e-849b-45cb-8ced-891e7755cda5\") " Jan 09 00:59:33 crc kubenswrapper[4945]: I0109 00:59:33.350724 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c3b70e-849b-45cb-8ced-891e7755cda5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9c3b70e-849b-45cb-8ced-891e7755cda5" (UID: "a9c3b70e-849b-45cb-8ced-891e7755cda5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:59:33 crc kubenswrapper[4945]: I0109 00:59:33.356224 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c3b70e-849b-45cb-8ced-891e7755cda5-kube-api-access-hrkc6" (OuterVolumeSpecName: "kube-api-access-hrkc6") pod "a9c3b70e-849b-45cb-8ced-891e7755cda5" (UID: "a9c3b70e-849b-45cb-8ced-891e7755cda5"). InnerVolumeSpecName "kube-api-access-hrkc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:59:33 crc kubenswrapper[4945]: I0109 00:59:33.451684 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrkc6\" (UniqueName: \"kubernetes.io/projected/a9c3b70e-849b-45cb-8ced-891e7755cda5-kube-api-access-hrkc6\") on node \"crc\" DevicePath \"\"" Jan 09 00:59:33 crc kubenswrapper[4945]: I0109 00:59:33.451714 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c3b70e-849b-45cb-8ced-891e7755cda5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:59:33 crc kubenswrapper[4945]: I0109 00:59:33.796918 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-l6hdm" Jan 09 00:59:33 crc kubenswrapper[4945]: I0109 00:59:33.796938 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-l6hdm" event={"ID":"a9c3b70e-849b-45cb-8ced-891e7755cda5","Type":"ContainerDied","Data":"06e21d095ae1986e6673176157c6f31a8be4be8e124fdd741eadd56fd3aaf347"} Jan 09 00:59:33 crc kubenswrapper[4945]: I0109 00:59:33.798355 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06e21d095ae1986e6673176157c6f31a8be4be8e124fdd741eadd56fd3aaf347" Jan 09 00:59:34 crc kubenswrapper[4945]: I0109 00:59:34.184390 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db3e-account-create-update-hkwdz" Jan 09 00:59:34 crc kubenswrapper[4945]: I0109 00:59:34.264734 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4442f9cc-03ad-40e5-b649-d397e7938e98-operator-scripts\") pod \"4442f9cc-03ad-40e5-b649-d397e7938e98\" (UID: \"4442f9cc-03ad-40e5-b649-d397e7938e98\") " Jan 09 00:59:34 crc kubenswrapper[4945]: I0109 00:59:34.264806 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q99gw\" (UniqueName: \"kubernetes.io/projected/4442f9cc-03ad-40e5-b649-d397e7938e98-kube-api-access-q99gw\") pod \"4442f9cc-03ad-40e5-b649-d397e7938e98\" (UID: \"4442f9cc-03ad-40e5-b649-d397e7938e98\") " Jan 09 00:59:34 crc kubenswrapper[4945]: I0109 00:59:34.265472 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4442f9cc-03ad-40e5-b649-d397e7938e98-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4442f9cc-03ad-40e5-b649-d397e7938e98" (UID: "4442f9cc-03ad-40e5-b649-d397e7938e98"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 00:59:34 crc kubenswrapper[4945]: I0109 00:59:34.265974 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4442f9cc-03ad-40e5-b649-d397e7938e98-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 00:59:34 crc kubenswrapper[4945]: I0109 00:59:34.271342 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4442f9cc-03ad-40e5-b649-d397e7938e98-kube-api-access-q99gw" (OuterVolumeSpecName: "kube-api-access-q99gw") pod "4442f9cc-03ad-40e5-b649-d397e7938e98" (UID: "4442f9cc-03ad-40e5-b649-d397e7938e98"). InnerVolumeSpecName "kube-api-access-q99gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 00:59:34 crc kubenswrapper[4945]: I0109 00:59:34.368096 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q99gw\" (UniqueName: \"kubernetes.io/projected/4442f9cc-03ad-40e5-b649-d397e7938e98-kube-api-access-q99gw\") on node \"crc\" DevicePath \"\"" Jan 09 00:59:34 crc kubenswrapper[4945]: I0109 00:59:34.809643 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db3e-account-create-update-hkwdz" event={"ID":"4442f9cc-03ad-40e5-b649-d397e7938e98","Type":"ContainerDied","Data":"0c0be393e9283628889ecb17c843e6f2fd1f0457d4782524c75f0861c59fb94d"} Jan 09 00:59:34 crc kubenswrapper[4945]: I0109 00:59:34.810101 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c0be393e9283628889ecb17c843e6f2fd1f0457d4782524c75f0861c59fb94d" Jan 09 00:59:34 crc kubenswrapper[4945]: I0109 00:59:34.809855 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db3e-account-create-update-hkwdz" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.572429 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-ff95db4fd-q2s54"] Jan 09 00:59:36 crc kubenswrapper[4945]: E0109 00:59:36.573273 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4442f9cc-03ad-40e5-b649-d397e7938e98" containerName="mariadb-account-create-update" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.573334 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="4442f9cc-03ad-40e5-b649-d397e7938e98" containerName="mariadb-account-create-update" Jan 09 00:59:36 crc kubenswrapper[4945]: E0109 00:59:36.573349 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c3b70e-849b-45cb-8ced-891e7755cda5" containerName="mariadb-database-create" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.573391 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c3b70e-849b-45cb-8ced-891e7755cda5" containerName="mariadb-database-create" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.573624 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c3b70e-849b-45cb-8ced-891e7755cda5" containerName="mariadb-database-create" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.573644 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="4442f9cc-03ad-40e5-b649-d397e7938e98" containerName="mariadb-account-create-update" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.576484 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.580393 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-lj7r4" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.580584 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.581157 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.584793 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-ff95db4fd-q2s54"] Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.713487 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca-octavia-run\") pod \"octavia-api-ff95db4fd-q2s54\" (UID: \"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca\") " pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.713561 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca-scripts\") pod \"octavia-api-ff95db4fd-q2s54\" (UID: \"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca\") " pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.713633 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca-config-data-merged\") pod \"octavia-api-ff95db4fd-q2s54\" (UID: \"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca\") " 
pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.713746 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca-config-data\") pod \"octavia-api-ff95db4fd-q2s54\" (UID: \"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca\") " pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.713765 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca-combined-ca-bundle\") pod \"octavia-api-ff95db4fd-q2s54\" (UID: \"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca\") " pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.815948 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca-config-data\") pod \"octavia-api-ff95db4fd-q2s54\" (UID: \"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca\") " pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.816027 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca-combined-ca-bundle\") pod \"octavia-api-ff95db4fd-q2s54\" (UID: \"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca\") " pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.816074 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca-octavia-run\") pod \"octavia-api-ff95db4fd-q2s54\" (UID: \"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca\") " pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.816112 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca-scripts\") pod \"octavia-api-ff95db4fd-q2s54\" (UID: \"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca\") " pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.816160 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca-config-data-merged\") pod \"octavia-api-ff95db4fd-q2s54\" (UID: \"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca\") " pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.816739 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca-config-data-merged\") pod \"octavia-api-ff95db4fd-q2s54\" (UID: \"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca\") " pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.817826 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca-octavia-run\") pod \"octavia-api-ff95db4fd-q2s54\" (UID: \"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca\") " pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:36 crc 
kubenswrapper[4945]: I0109 00:59:36.824702 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca-scripts\") pod \"octavia-api-ff95db4fd-q2s54\" (UID: \"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca\") " pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.825718 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca-combined-ca-bundle\") pod \"octavia-api-ff95db4fd-q2s54\" (UID: \"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca\") " pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.826632 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca-config-data\") pod \"octavia-api-ff95db4fd-q2s54\" (UID: \"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca\") " pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:36 crc kubenswrapper[4945]: I0109 00:59:36.899870 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:37 crc kubenswrapper[4945]: I0109 00:59:37.299068 4945 scope.go:117] "RemoveContainer" containerID="ef925930dc2ed1bd687004c42f36bf352c11f26edd4b91c8ea76ecab6370c4c4" Jan 09 00:59:37 crc kubenswrapper[4945]: I0109 00:59:37.342738 4945 scope.go:117] "RemoveContainer" containerID="6a786fd11a87d6a7c70da8b8956ec908c33aa6d7991a43e9945b4f384743abd8" Jan 09 00:59:37 crc kubenswrapper[4945]: I0109 00:59:37.368062 4945 scope.go:117] "RemoveContainer" containerID="cafb20538cddae04d90a76d3df111677b68b93a947de5b5c5591056e5bcaeb37" Jan 09 00:59:37 crc kubenswrapper[4945]: W0109 00:59:37.389838 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcecf9e4_c4c6_4f3f_86ec_dea0483b7cca.slice/crio-0add63f4ab40edc5a05724e288d58a380616b923c2669df36ca9423df7d69132 WatchSource:0}: Error finding container 0add63f4ab40edc5a05724e288d58a380616b923c2669df36ca9423df7d69132: Status 404 returned error can't find the container with id 0add63f4ab40edc5a05724e288d58a380616b923c2669df36ca9423df7d69132 Jan 09 00:59:37 crc kubenswrapper[4945]: I0109 00:59:37.399556 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-ff95db4fd-q2s54"] Jan 09 00:59:37 crc kubenswrapper[4945]: I0109 00:59:37.400878 4945 scope.go:117] "RemoveContainer" containerID="0ebc2045cf9e61afbc84cfbda428fc3b5ead9e557ca6caaa7c1744fc4e3ae9ec" Jan 09 00:59:37 crc kubenswrapper[4945]: I0109 00:59:37.427295 4945 scope.go:117] "RemoveContainer" containerID="a260d15f0505773d413361720fa02a27eeaf7dbb3a7a9c7c684870a85f088119" Jan 09 00:59:37 crc kubenswrapper[4945]: I0109 00:59:37.454619 4945 scope.go:117] "RemoveContainer" containerID="2edb7dd17ec4fb6d700535f89b1f45ccada92ebdf7e89df342cfbcbb2ba37495" Jan 09 00:59:37 crc kubenswrapper[4945]: I0109 00:59:37.479962 4945 scope.go:117] "RemoveContainer" containerID="f6122dc6d483ef326d4eda4b50dcff4e35b61065e44057638acce42df270bd3c" Jan 09 00:59:37 crc kubenswrapper[4945]: I0109 00:59:37.506705 4945 scope.go:117] "RemoveContainer" containerID="e744d3468035c5aea92db4594223484f3a895027641640c5464ae2348d457def" Jan 09 00:59:37 crc kubenswrapper[4945]: I0109 00:59:37.840507 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/octavia-api-ff95db4fd-q2s54" event={"ID":"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca","Type":"ContainerStarted","Data":"0add63f4ab40edc5a05724e288d58a380616b923c2669df36ca9423df7d69132"} Jan 09 00:59:45 crc kubenswrapper[4945]: I0109 00:59:45.045058 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-65469"] Jan 09 00:59:45 crc kubenswrapper[4945]: I0109 00:59:45.055766 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-65469"] Jan 09 00:59:46 crc kubenswrapper[4945]: I0109 00:59:46.015110 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d18c95da-e9f3-4f04-9d79-a6760f6faa97" path="/var/lib/kubelet/pods/d18c95da-e9f3-4f04-9d79-a6760f6faa97/volumes" Jan 09 00:59:46 crc kubenswrapper[4945]: I0109 00:59:46.920761 4945 generic.go:334] "Generic (PLEG): container finished" podID="dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca" containerID="8496eec3583422e908d43f2ccd70865f15abe9f3ea06e4e3389b7f4eaa31074f" exitCode=0 Jan 09 00:59:46 crc kubenswrapper[4945]: I0109 00:59:46.920825 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-ff95db4fd-q2s54" event={"ID":"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca","Type":"ContainerDied","Data":"8496eec3583422e908d43f2ccd70865f15abe9f3ea06e4e3389b7f4eaa31074f"} Jan 09 00:59:47 crc kubenswrapper[4945]: I0109 00:59:47.938525 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-ff95db4fd-q2s54" event={"ID":"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca","Type":"ContainerStarted","Data":"5f37a86178e0c30d3808eee887712eb065b4401bb58feebd253de16a90e61901"} Jan 09 00:59:47 crc kubenswrapper[4945]: I0109 00:59:47.939248 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-ff95db4fd-q2s54" event={"ID":"dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca","Type":"ContainerStarted","Data":"631696a1c89a0bdf22913c0a14813ceb4ef7d5e95ca74d5446681deaa3ad3d28"} Jan 09 00:59:47 crc kubenswrapper[4945]: I0109 00:59:47.939314 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:47 crc kubenswrapper[4945]: I0109 00:59:47.939340 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 00:59:47 crc kubenswrapper[4945]: I0109 00:59:47.961531 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-ff95db4fd-q2s54" podStartSLOduration=3.461764712 podStartE2EDuration="11.96150473s" podCreationTimestamp="2026-01-09 00:59:36 +0000 UTC" firstStartedPulling="2026-01-09 00:59:37.400828877 +0000 UTC m=+6247.711987833" lastFinishedPulling="2026-01-09 00:59:45.900568905 +0000 UTC m=+6256.211727851" observedRunningTime="2026-01-09 00:59:47.958194818 +0000 UTC m=+6258.269353784" watchObservedRunningTime="2026-01-09 00:59:47.96150473 +0000 UTC m=+6258.272663676" Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.453585 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-s964b"] Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.456553 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-s964b" Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.459229 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.459414 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.459584 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.470697 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-s964b"] Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.584838 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e00cd5b2-f064-4088-8d6a-7ad028fc7147-scripts\") pod \"octavia-rsyslog-s964b\" (UID: \"e00cd5b2-f064-4088-8d6a-7ad028fc7147\") " pod="openstack/octavia-rsyslog-s964b" Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.584931 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e00cd5b2-f064-4088-8d6a-7ad028fc7147-config-data\") pod \"octavia-rsyslog-s964b\" (UID: \"e00cd5b2-f064-4088-8d6a-7ad028fc7147\") " pod="openstack/octavia-rsyslog-s964b" Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.584958 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e00cd5b2-f064-4088-8d6a-7ad028fc7147-config-data-merged\") pod \"octavia-rsyslog-s964b\" (UID: \"e00cd5b2-f064-4088-8d6a-7ad028fc7147\") " pod="openstack/octavia-rsyslog-s964b" Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.585100 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e00cd5b2-f064-4088-8d6a-7ad028fc7147-hm-ports\") pod \"octavia-rsyslog-s964b\" (UID: \"e00cd5b2-f064-4088-8d6a-7ad028fc7147\") " pod="openstack/octavia-rsyslog-s964b" Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.686291 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e00cd5b2-f064-4088-8d6a-7ad028fc7147-hm-ports\") pod \"octavia-rsyslog-s964b\" (UID: \"e00cd5b2-f064-4088-8d6a-7ad028fc7147\") " pod="openstack/octavia-rsyslog-s964b" Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.686533 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e00cd5b2-f064-4088-8d6a-7ad028fc7147-scripts\") pod \"octavia-rsyslog-s964b\" (UID: \"e00cd5b2-f064-4088-8d6a-7ad028fc7147\") " pod="openstack/octavia-rsyslog-s964b" Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.686590 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e00cd5b2-f064-4088-8d6a-7ad028fc7147-config-data\") pod \"octavia-rsyslog-s964b\" (UID: \"e00cd5b2-f064-4088-8d6a-7ad028fc7147\") " pod="openstack/octavia-rsyslog-s964b" Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.687020 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/e00cd5b2-f064-4088-8d6a-7ad028fc7147-config-data-merged\") pod \"octavia-rsyslog-s964b\" (UID: \"e00cd5b2-f064-4088-8d6a-7ad028fc7147\") " pod="openstack/octavia-rsyslog-s964b" Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.687454 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e00cd5b2-f064-4088-8d6a-7ad028fc7147-config-data-merged\") pod \"octavia-rsyslog-s964b\" (UID: \"e00cd5b2-f064-4088-8d6a-7ad028fc7147\") " pod="openstack/octavia-rsyslog-s964b" Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.687959 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e00cd5b2-f064-4088-8d6a-7ad028fc7147-hm-ports\") pod \"octavia-rsyslog-s964b\" (UID: \"e00cd5b2-f064-4088-8d6a-7ad028fc7147\") " pod="openstack/octavia-rsyslog-s964b" Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.691695 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e00cd5b2-f064-4088-8d6a-7ad028fc7147-config-data\") pod \"octavia-rsyslog-s964b\" (UID: \"e00cd5b2-f064-4088-8d6a-7ad028fc7147\") " pod="openstack/octavia-rsyslog-s964b" Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.692357 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e00cd5b2-f064-4088-8d6a-7ad028fc7147-scripts\") pod \"octavia-rsyslog-s964b\" (UID: \"e00cd5b2-f064-4088-8d6a-7ad028fc7147\") " pod="openstack/octavia-rsyslog-s964b" Jan 09 00:59:55 crc kubenswrapper[4945]: I0109 00:59:55.776513 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-s964b" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.058949 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-vrpv8"] Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.060918 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-vrpv8" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.064308 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.070113 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-vrpv8"] Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.196888 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb-amphora-image\") pod \"octavia-image-upload-59f8cff499-vrpv8\" (UID: \"c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb\") " pod="openstack/octavia-image-upload-59f8cff499-vrpv8" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.197000 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb-httpd-config\") pod \"octavia-image-upload-59f8cff499-vrpv8\" (UID: \"c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb\") " pod="openstack/octavia-image-upload-59f8cff499-vrpv8" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.298937 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb-amphora-image\") pod \"octavia-image-upload-59f8cff499-vrpv8\" (UID: \"c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb\") " pod="openstack/octavia-image-upload-59f8cff499-vrpv8" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.299106 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb-httpd-config\") pod \"octavia-image-upload-59f8cff499-vrpv8\" (UID: \"c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb\") " pod="openstack/octavia-image-upload-59f8cff499-vrpv8" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.299579 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb-amphora-image\") pod \"octavia-image-upload-59f8cff499-vrpv8\" (UID: \"c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb\") " pod="openstack/octavia-image-upload-59f8cff499-vrpv8" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.319570 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb-httpd-config\") pod \"octavia-image-upload-59f8cff499-vrpv8\" (UID: \"c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb\") " pod="openstack/octavia-image-upload-59f8cff499-vrpv8" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.347624 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-s964b"] Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.386420 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-vrpv8" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.400613 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-l579m" podUID="bf226c11-b1c7-48c1-938e-2f6e96678644" containerName="ovn-controller" probeResult="failure" output=< Jan 09 00:59:56 crc kubenswrapper[4945]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 09 00:59:56 crc kubenswrapper[4945]: > Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.402220 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.414885 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ht5ml" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.537466 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-l579m-config-xr8cl"] Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.538746 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.543450 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.570101 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l579m-config-xr8cl"] Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.726248 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-run\") pod \"ovn-controller-l579m-config-xr8cl\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.726636 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-278xh\" (UniqueName: \"kubernetes.io/projected/974b6e96-245d-4018-84b6-beed0b612b9c-kube-api-access-278xh\") pod \"ovn-controller-l579m-config-xr8cl\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.726698 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-log-ovn\") pod \"ovn-controller-l579m-config-xr8cl\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.726751 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/974b6e96-245d-4018-84b6-beed0b612b9c-scripts\") pod \"ovn-controller-l579m-config-xr8cl\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.726777 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/974b6e96-245d-4018-84b6-beed0b612b9c-additional-scripts\") pod \"ovn-controller-l579m-config-xr8cl\" 
(UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.726793 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-run-ovn\") pod \"ovn-controller-l579m-config-xr8cl\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.831252 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-278xh\" (UniqueName: \"kubernetes.io/projected/974b6e96-245d-4018-84b6-beed0b612b9c-kube-api-access-278xh\") pod \"ovn-controller-l579m-config-xr8cl\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.831365 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-log-ovn\") pod \"ovn-controller-l579m-config-xr8cl\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.831449 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/974b6e96-245d-4018-84b6-beed0b612b9c-scripts\") pod \"ovn-controller-l579m-config-xr8cl\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.831486 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/974b6e96-245d-4018-84b6-beed0b612b9c-additional-scripts\") pod \"ovn-controller-l579m-config-xr8cl\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.831511 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-run-ovn\") pod \"ovn-controller-l579m-config-xr8cl\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.832189 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-run\") pod \"ovn-controller-l579m-config-xr8cl\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.832530 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-run\") pod \"ovn-controller-l579m-config-xr8cl\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.832946 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-log-ovn\") pod \"ovn-controller-l579m-config-xr8cl\" 
(UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.835741 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/974b6e96-245d-4018-84b6-beed0b612b9c-additional-scripts\") pod \"ovn-controller-l579m-config-xr8cl\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.835774 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/974b6e96-245d-4018-84b6-beed0b612b9c-scripts\") pod \"ovn-controller-l579m-config-xr8cl\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.836126 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-run-ovn\") pod \"ovn-controller-l579m-config-xr8cl\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.864961 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-278xh\" (UniqueName: \"kubernetes.io/projected/974b6e96-245d-4018-84b6-beed0b612b9c-kube-api-access-278xh\") pod \"ovn-controller-l579m-config-xr8cl\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.889495 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 00:59:56 crc kubenswrapper[4945]: I0109 00:59:56.975050 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-vrpv8"] Jan 09 00:59:57 crc kubenswrapper[4945]: I0109 00:59:57.028109 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-vrpv8" event={"ID":"c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb","Type":"ContainerStarted","Data":"3a52b62be7014673c9dacd0fb5708e5c3e9859b45f5e66a2b8bc89454b14ff43"} Jan 09 00:59:57 crc kubenswrapper[4945]: I0109 00:59:57.030172 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-s964b" event={"ID":"e00cd5b2-f064-4088-8d6a-7ad028fc7147","Type":"ContainerStarted","Data":"dd545d5e04682e85fd2619ffe66f3604ce5633c324eb9b3cc278d64afb56bb93"} Jan 09 00:59:57 crc kubenswrapper[4945]: I0109 00:59:57.420560 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l579m-config-xr8cl"] Jan 09 00:59:57 crc kubenswrapper[4945]: W0109 00:59:57.425420 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod974b6e96_245d_4018_84b6_beed0b612b9c.slice/crio-03f8abf14573ef8c145d173d5f3d58e2a81b204d3bf601fff291b6fbdd6400d1 WatchSource:0}: Error finding container 03f8abf14573ef8c145d173d5f3d58e2a81b204d3bf601fff291b6fbdd6400d1: Status 404 returned error can't find the container with id 03f8abf14573ef8c145d173d5f3d58e2a81b204d3bf601fff291b6fbdd6400d1 Jan 09 00:59:57 crc kubenswrapper[4945]: I0109 00:59:57.779828 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-4qtv7"] Jan 09 00:59:57 crc kubenswrapper[4945]: I0109 00:59:57.783329 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-4qtv7" Jan 09 00:59:57 crc kubenswrapper[4945]: I0109 00:59:57.785630 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Jan 09 00:59:57 crc kubenswrapper[4945]: I0109 00:59:57.796748 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-4qtv7"] Jan 09 00:59:57 crc kubenswrapper[4945]: I0109 00:59:57.963134 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-scripts\") pod \"octavia-db-sync-4qtv7\" (UID: \"d93bb14f-1fda-48f9-9251-e23763538847\") " pod="openstack/octavia-db-sync-4qtv7" Jan 09 00:59:57 crc kubenswrapper[4945]: I0109 00:59:57.963282 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d93bb14f-1fda-48f9-9251-e23763538847-config-data-merged\") pod \"octavia-db-sync-4qtv7\" (UID: \"d93bb14f-1fda-48f9-9251-e23763538847\") " pod="openstack/octavia-db-sync-4qtv7" Jan 09 00:59:57 crc kubenswrapper[4945]: I0109 00:59:57.963415 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-config-data\") pod \"octavia-db-sync-4qtv7\" (UID: \"d93bb14f-1fda-48f9-9251-e23763538847\") " pod="openstack/octavia-db-sync-4qtv7" Jan 09 00:59:57 crc kubenswrapper[4945]: I0109 00:59:57.963457 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-combined-ca-bundle\") pod \"octavia-db-sync-4qtv7\" (UID: \"d93bb14f-1fda-48f9-9251-e23763538847\") " pod="openstack/octavia-db-sync-4qtv7" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.050869 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l579m-config-xr8cl" event={"ID":"974b6e96-245d-4018-84b6-beed0b612b9c","Type":"ContainerStarted","Data":"03f8abf14573ef8c145d173d5f3d58e2a81b204d3bf601fff291b6fbdd6400d1"} Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.065140 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d93bb14f-1fda-48f9-9251-e23763538847-config-data-merged\") pod \"octavia-db-sync-4qtv7\" (UID: \"d93bb14f-1fda-48f9-9251-e23763538847\") " pod="openstack/octavia-db-sync-4qtv7" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.065280 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-config-data\") pod \"octavia-db-sync-4qtv7\" (UID: \"d93bb14f-1fda-48f9-9251-e23763538847\") " pod="openstack/octavia-db-sync-4qtv7" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.065321 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-combined-ca-bundle\") pod \"octavia-db-sync-4qtv7\" (UID: \"d93bb14f-1fda-48f9-9251-e23763538847\") " pod="openstack/octavia-db-sync-4qtv7" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.065393 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-scripts\") pod \"octavia-db-sync-4qtv7\" (UID: \"d93bb14f-1fda-48f9-9251-e23763538847\") " pod="openstack/octavia-db-sync-4qtv7" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.065954 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d93bb14f-1fda-48f9-9251-e23763538847-config-data-merged\") pod \"octavia-db-sync-4qtv7\" (UID: \"d93bb14f-1fda-48f9-9251-e23763538847\") " pod="openstack/octavia-db-sync-4qtv7" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.071757 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-scripts\") pod \"octavia-db-sync-4qtv7\" (UID: \"d93bb14f-1fda-48f9-9251-e23763538847\") " pod="openstack/octavia-db-sync-4qtv7" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.086481 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-combined-ca-bundle\") pod \"octavia-db-sync-4qtv7\" (UID: \"d93bb14f-1fda-48f9-9251-e23763538847\") " pod="openstack/octavia-db-sync-4qtv7" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.091922 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-config-data\") pod \"octavia-db-sync-4qtv7\" (UID: \"d93bb14f-1fda-48f9-9251-e23763538847\") " pod="openstack/octavia-db-sync-4qtv7" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.107370 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-4qtv7" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.632081 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wqs5v"] Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.635129 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wqs5v" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.648193 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wqs5v"] Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.663845 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-4qtv7"] Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.706479 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a331fb09-da5e-4f56-baa2-44bee5cc48be-catalog-content\") pod \"certified-operators-wqs5v\" (UID: \"a331fb09-da5e-4f56-baa2-44bee5cc48be\") " pod="openshift-marketplace/certified-operators-wqs5v" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.706575 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a331fb09-da5e-4f56-baa2-44bee5cc48be-utilities\") pod \"certified-operators-wqs5v\" (UID: \"a331fb09-da5e-4f56-baa2-44bee5cc48be\") " pod="openshift-marketplace/certified-operators-wqs5v" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.706647 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c62dw\" (UniqueName: \"kubernetes.io/projected/a331fb09-da5e-4f56-baa2-44bee5cc48be-kube-api-access-c62dw\") pod \"certified-operators-wqs5v\" (UID: \"a331fb09-da5e-4f56-baa2-44bee5cc48be\") " pod="openshift-marketplace/certified-operators-wqs5v" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.807536 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a331fb09-da5e-4f56-baa2-44bee5cc48be-utilities\") pod \"certified-operators-wqs5v\" (UID: \"a331fb09-da5e-4f56-baa2-44bee5cc48be\") " pod="openshift-marketplace/certified-operators-wqs5v" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.807672 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c62dw\" (UniqueName: \"kubernetes.io/projected/a331fb09-da5e-4f56-baa2-44bee5cc48be-kube-api-access-c62dw\") pod \"certified-operators-wqs5v\" (UID: \"a331fb09-da5e-4f56-baa2-44bee5cc48be\") " pod="openshift-marketplace/certified-operators-wqs5v" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.807754 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a331fb09-da5e-4f56-baa2-44bee5cc48be-catalog-content\") pod \"certified-operators-wqs5v\" (UID: \"a331fb09-da5e-4f56-baa2-44bee5cc48be\") " pod="openshift-marketplace/certified-operators-wqs5v" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.808393 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a331fb09-da5e-4f56-baa2-44bee5cc48be-utilities\") pod \"certified-operators-wqs5v\" (UID: \"a331fb09-da5e-4f56-baa2-44bee5cc48be\") " pod="openshift-marketplace/certified-operators-wqs5v" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.808475 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a331fb09-da5e-4f56-baa2-44bee5cc48be-catalog-content\") pod \"certified-operators-wqs5v\" (UID: 
\"a331fb09-da5e-4f56-baa2-44bee5cc48be\") " pod="openshift-marketplace/certified-operators-wqs5v" Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.826434 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c62dw\" (UniqueName: \"kubernetes.io/projected/a331fb09-da5e-4f56-baa2-44bee5cc48be-kube-api-access-c62dw\") pod \"certified-operators-wqs5v\" (UID: \"a331fb09-da5e-4f56-baa2-44bee5cc48be\") " pod="openshift-marketplace/certified-operators-wqs5v" Jan 09 00:59:58 crc kubenswrapper[4945]: W0109 00:59:58.906360 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd93bb14f_1fda_48f9_9251_e23763538847.slice/crio-32fc119ab353b0a8ede031b1d84a1fd3986869c2d4c9cd630fd92f7c125e10f2 WatchSource:0}: Error finding container 32fc119ab353b0a8ede031b1d84a1fd3986869c2d4c9cd630fd92f7c125e10f2: Status 404 returned error can't find the container with id 32fc119ab353b0a8ede031b1d84a1fd3986869c2d4c9cd630fd92f7c125e10f2 Jan 09 00:59:58 crc kubenswrapper[4945]: I0109 00:59:58.956753 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wqs5v" Jan 09 00:59:59 crc kubenswrapper[4945]: I0109 00:59:59.065538 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4qtv7" event={"ID":"d93bb14f-1fda-48f9-9251-e23763538847","Type":"ContainerStarted","Data":"32fc119ab353b0a8ede031b1d84a1fd3986869c2d4c9cd630fd92f7c125e10f2"} Jan 09 00:59:59 crc kubenswrapper[4945]: I0109 00:59:59.069252 4945 generic.go:334] "Generic (PLEG): container finished" podID="974b6e96-245d-4018-84b6-beed0b612b9c" containerID="92087555576f9f158e9c44ce0b79c033c186af919d669c86b70cb3027f1369c5" exitCode=0 Jan 09 00:59:59 crc kubenswrapper[4945]: I0109 00:59:59.069715 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l579m-config-xr8cl" event={"ID":"974b6e96-245d-4018-84b6-beed0b612b9c","Type":"ContainerDied","Data":"92087555576f9f158e9c44ce0b79c033c186af919d669c86b70cb3027f1369c5"} Jan 09 00:59:59 crc kubenswrapper[4945]: I0109 00:59:59.072901 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-s964b" event={"ID":"e00cd5b2-f064-4088-8d6a-7ad028fc7147","Type":"ContainerStarted","Data":"d624805b0eca65a1c1abb8397066aa0d43d454ef27ab00459d05b87e962cceef"} Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.161109 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q"] Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.163477 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.166964 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.167497 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.171051 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q"] Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.344780 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/579e9f90-c898-4c0d-aa7b-6d6bde49e872-config-volume\") pod \"collect-profiles-29465340-kvj5q\" (UID: \"579e9f90-c898-4c0d-aa7b-6d6bde49e872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.344886 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/579e9f90-c898-4c0d-aa7b-6d6bde49e872-secret-volume\") pod \"collect-profiles-29465340-kvj5q\" (UID: \"579e9f90-c898-4c0d-aa7b-6d6bde49e872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.344954 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsjdg\" (UniqueName: \"kubernetes.io/projected/579e9f90-c898-4c0d-aa7b-6d6bde49e872-kube-api-access-jsjdg\") pod \"collect-profiles-29465340-kvj5q\" (UID: \"579e9f90-c898-4c0d-aa7b-6d6bde49e872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.434177 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wqs5v"] Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.451608 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/579e9f90-c898-4c0d-aa7b-6d6bde49e872-config-volume\") pod \"collect-profiles-29465340-kvj5q\" (UID: \"579e9f90-c898-4c0d-aa7b-6d6bde49e872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.451662 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/579e9f90-c898-4c0d-aa7b-6d6bde49e872-secret-volume\") pod \"collect-profiles-29465340-kvj5q\" (UID: \"579e9f90-c898-4c0d-aa7b-6d6bde49e872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.451698 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsjdg\" (UniqueName: \"kubernetes.io/projected/579e9f90-c898-4c0d-aa7b-6d6bde49e872-kube-api-access-jsjdg\") pod \"collect-profiles-29465340-kvj5q\" (UID: \"579e9f90-c898-4c0d-aa7b-6d6bde49e872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.453573 4945 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/579e9f90-c898-4c0d-aa7b-6d6bde49e872-config-volume\") pod \"collect-profiles-29465340-kvj5q\" (UID: \"579e9f90-c898-4c0d-aa7b-6d6bde49e872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.468467 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/579e9f90-c898-4c0d-aa7b-6d6bde49e872-secret-volume\") pod \"collect-profiles-29465340-kvj5q\" (UID: \"579e9f90-c898-4c0d-aa7b-6d6bde49e872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.472434 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsjdg\" (UniqueName: \"kubernetes.io/projected/579e9f90-c898-4c0d-aa7b-6d6bde49e872-kube-api-access-jsjdg\") pod \"collect-profiles-29465340-kvj5q\" (UID: \"579e9f90-c898-4c0d-aa7b-6d6bde49e872\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.515507 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.556687 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.757686 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/974b6e96-245d-4018-84b6-beed0b612b9c-scripts\") pod \"974b6e96-245d-4018-84b6-beed0b612b9c\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.758154 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-run\") pod \"974b6e96-245d-4018-84b6-beed0b612b9c\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.758288 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/974b6e96-245d-4018-84b6-beed0b612b9c-additional-scripts\") pod \"974b6e96-245d-4018-84b6-beed0b612b9c\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.758295 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-run" (OuterVolumeSpecName: "var-run") pod "974b6e96-245d-4018-84b6-beed0b612b9c" (UID: "974b6e96-245d-4018-84b6-beed0b612b9c"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.758338 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-278xh\" (UniqueName: \"kubernetes.io/projected/974b6e96-245d-4018-84b6-beed0b612b9c-kube-api-access-278xh\") pod \"974b6e96-245d-4018-84b6-beed0b612b9c\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.758390 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-run-ovn\") pod \"974b6e96-245d-4018-84b6-beed0b612b9c\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.758438 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-log-ovn\") pod \"974b6e96-245d-4018-84b6-beed0b612b9c\" (UID: \"974b6e96-245d-4018-84b6-beed0b612b9c\") " Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.758522 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "974b6e96-245d-4018-84b6-beed0b612b9c" (UID: "974b6e96-245d-4018-84b6-beed0b612b9c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.758597 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "974b6e96-245d-4018-84b6-beed0b612b9c" (UID: "974b6e96-245d-4018-84b6-beed0b612b9c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.758936 4945 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.758957 4945 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.758970 4945 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/974b6e96-245d-4018-84b6-beed0b612b9c-var-run\") on node \"crc\" DevicePath \"\"" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.759221 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/974b6e96-245d-4018-84b6-beed0b612b9c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "974b6e96-245d-4018-84b6-beed0b612b9c" (UID: "974b6e96-245d-4018-84b6-beed0b612b9c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.759403 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/974b6e96-245d-4018-84b6-beed0b612b9c-scripts" (OuterVolumeSpecName: "scripts") pod "974b6e96-245d-4018-84b6-beed0b612b9c" (UID: "974b6e96-245d-4018-84b6-beed0b612b9c"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.762935 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/974b6e96-245d-4018-84b6-beed0b612b9c-kube-api-access-278xh" (OuterVolumeSpecName: "kube-api-access-278xh") pod "974b6e96-245d-4018-84b6-beed0b612b9c" (UID: "974b6e96-245d-4018-84b6-beed0b612b9c"). InnerVolumeSpecName "kube-api-access-278xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.869678 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/974b6e96-245d-4018-84b6-beed0b612b9c-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.869716 4945 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/974b6e96-245d-4018-84b6-beed0b612b9c-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 01:00:00 crc kubenswrapper[4945]: I0109 01:00:00.869731 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-278xh\" (UniqueName: \"kubernetes.io/projected/974b6e96-245d-4018-84b6-beed0b612b9c-kube-api-access-278xh\") on node \"crc\" DevicePath \"\"" Jan 09 01:00:01 crc kubenswrapper[4945]: I0109 01:00:01.043976 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q"] Jan 09 01:00:01 crc kubenswrapper[4945]: I0109 01:00:01.095628 4945 generic.go:334] "Generic (PLEG): container finished" podID="d93bb14f-1fda-48f9-9251-e23763538847" containerID="79bf42946599a67dccd52f20998cd7c9c10aebbc44232b1cfdfafa899f7dc311" exitCode=0 Jan 09 01:00:01 crc kubenswrapper[4945]: I0109 01:00:01.095805 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4qtv7" event={"ID":"d93bb14f-1fda-48f9-9251-e23763538847","Type":"ContainerDied","Data":"79bf42946599a67dccd52f20998cd7c9c10aebbc44232b1cfdfafa899f7dc311"} Jan 09 01:00:01 crc kubenswrapper[4945]: I0109 01:00:01.098380 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q" event={"ID":"579e9f90-c898-4c0d-aa7b-6d6bde49e872","Type":"ContainerStarted","Data":"cc5d9debbb138c5eeafd4620800a81b2bd33ac181348ce31bae9239b4d4f4e72"} Jan 09 01:00:01 crc kubenswrapper[4945]: I0109 01:00:01.104686 4945 generic.go:334] "Generic (PLEG): container finished" podID="a331fb09-da5e-4f56-baa2-44bee5cc48be" containerID="2e4ca57ed9aacb90a237a52937b04d26c41de859c4948c68f6b75b3cbefeb174" exitCode=0 Jan 09 01:00:01 crc kubenswrapper[4945]: I0109 01:00:01.104784 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqs5v" event={"ID":"a331fb09-da5e-4f56-baa2-44bee5cc48be","Type":"ContainerDied","Data":"2e4ca57ed9aacb90a237a52937b04d26c41de859c4948c68f6b75b3cbefeb174"} Jan 09 01:00:01 crc kubenswrapper[4945]: I0109 01:00:01.104822 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqs5v" event={"ID":"a331fb09-da5e-4f56-baa2-44bee5cc48be","Type":"ContainerStarted","Data":"4e5ac432fdf53f5a91ff25a9cfc2ee780ac3b2b79db6f2e16268815dd7a17e8b"} Jan 09 01:00:01 crc kubenswrapper[4945]: I0109 01:00:01.109436 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l579m-config-xr8cl" 
event={"ID":"974b6e96-245d-4018-84b6-beed0b612b9c","Type":"ContainerDied","Data":"03f8abf14573ef8c145d173d5f3d58e2a81b204d3bf601fff291b6fbdd6400d1"} Jan 09 01:00:01 crc kubenswrapper[4945]: I0109 01:00:01.109494 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03f8abf14573ef8c145d173d5f3d58e2a81b204d3bf601fff291b6fbdd6400d1" Jan 09 01:00:01 crc kubenswrapper[4945]: I0109 01:00:01.109521 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l579m-config-xr8cl" Jan 09 01:00:01 crc kubenswrapper[4945]: I0109 01:00:01.447201 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-l579m" Jan 09 01:00:01 crc kubenswrapper[4945]: I0109 01:00:01.656473 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-l579m-config-xr8cl"] Jan 09 01:00:01 crc kubenswrapper[4945]: I0109 01:00:01.669622 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-l579m-config-xr8cl"] Jan 09 01:00:02 crc kubenswrapper[4945]: I0109 01:00:02.017014 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="974b6e96-245d-4018-84b6-beed0b612b9c" path="/var/lib/kubelet/pods/974b6e96-245d-4018-84b6-beed0b612b9c/volumes" Jan 09 01:00:02 crc kubenswrapper[4945]: I0109 01:00:02.129219 4945 generic.go:334] "Generic (PLEG): container finished" podID="e00cd5b2-f064-4088-8d6a-7ad028fc7147" containerID="d624805b0eca65a1c1abb8397066aa0d43d454ef27ab00459d05b87e962cceef" exitCode=0 Jan 09 01:00:02 crc kubenswrapper[4945]: I0109 01:00:02.129287 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-s964b" event={"ID":"e00cd5b2-f064-4088-8d6a-7ad028fc7147","Type":"ContainerDied","Data":"d624805b0eca65a1c1abb8397066aa0d43d454ef27ab00459d05b87e962cceef"} Jan 09 01:00:02 crc kubenswrapper[4945]: I0109 01:00:02.132564 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4qtv7" event={"ID":"d93bb14f-1fda-48f9-9251-e23763538847","Type":"ContainerStarted","Data":"5a15fa223f5684d35ba51017a32814d722494dbbae6fd01eb5e8b9d2087ddf0b"} Jan 09 01:00:02 crc kubenswrapper[4945]: I0109 01:00:02.135839 4945 generic.go:334] "Generic (PLEG): container finished" podID="579e9f90-c898-4c0d-aa7b-6d6bde49e872" containerID="b65b71aee3155240fd11d465613fbc02ae05d6c4eb032b83013b7898994f372b" exitCode=0 Jan 09 01:00:02 crc kubenswrapper[4945]: I0109 01:00:02.135875 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q" event={"ID":"579e9f90-c898-4c0d-aa7b-6d6bde49e872","Type":"ContainerDied","Data":"b65b71aee3155240fd11d465613fbc02ae05d6c4eb032b83013b7898994f372b"} Jan 09 01:00:02 crc kubenswrapper[4945]: I0109 01:00:02.175887 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-4qtv7" podStartSLOduration=5.17580968 podStartE2EDuration="5.17580968s" podCreationTimestamp="2026-01-09 00:59:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 01:00:02.171937984 +0000 UTC m=+6272.483096940" watchObservedRunningTime="2026-01-09 01:00:02.17580968 +0000 UTC m=+6272.486968646" Jan 09 01:00:03 crc kubenswrapper[4945]: I0109 01:00:03.515311 4945 util.go:48] "No ready sandbox for pod can be found. 
Jan 09 01:00:03 crc kubenswrapper[4945]: I0109 01:00:03.628363 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsjdg\" (UniqueName: \"kubernetes.io/projected/579e9f90-c898-4c0d-aa7b-6d6bde49e872-kube-api-access-jsjdg\") pod \"579e9f90-c898-4c0d-aa7b-6d6bde49e872\" (UID: \"579e9f90-c898-4c0d-aa7b-6d6bde49e872\") "
Jan 09 01:00:03 crc kubenswrapper[4945]: I0109 01:00:03.628565 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/579e9f90-c898-4c0d-aa7b-6d6bde49e872-secret-volume\") pod \"579e9f90-c898-4c0d-aa7b-6d6bde49e872\" (UID: \"579e9f90-c898-4c0d-aa7b-6d6bde49e872\") "
Jan 09 01:00:03 crc kubenswrapper[4945]: I0109 01:00:03.628647 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/579e9f90-c898-4c0d-aa7b-6d6bde49e872-config-volume\") pod \"579e9f90-c898-4c0d-aa7b-6d6bde49e872\" (UID: \"579e9f90-c898-4c0d-aa7b-6d6bde49e872\") "
Jan 09 01:00:03 crc kubenswrapper[4945]: I0109 01:00:03.629500 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/579e9f90-c898-4c0d-aa7b-6d6bde49e872-config-volume" (OuterVolumeSpecName: "config-volume") pod "579e9f90-c898-4c0d-aa7b-6d6bde49e872" (UID: "579e9f90-c898-4c0d-aa7b-6d6bde49e872"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 01:00:03 crc kubenswrapper[4945]: I0109 01:00:03.635163 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/579e9f90-c898-4c0d-aa7b-6d6bde49e872-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "579e9f90-c898-4c0d-aa7b-6d6bde49e872" (UID: "579e9f90-c898-4c0d-aa7b-6d6bde49e872"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:00:03 crc kubenswrapper[4945]: I0109 01:00:03.637174 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/579e9f90-c898-4c0d-aa7b-6d6bde49e872-kube-api-access-jsjdg" (OuterVolumeSpecName: "kube-api-access-jsjdg") pod "579e9f90-c898-4c0d-aa7b-6d6bde49e872" (UID: "579e9f90-c898-4c0d-aa7b-6d6bde49e872"). InnerVolumeSpecName "kube-api-access-jsjdg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:00:03 crc kubenswrapper[4945]: I0109 01:00:03.778093 4945 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/579e9f90-c898-4c0d-aa7b-6d6bde49e872-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 09 01:00:03 crc kubenswrapper[4945]: I0109 01:00:03.778138 4945 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/579e9f90-c898-4c0d-aa7b-6d6bde49e872-config-volume\") on node \"crc\" DevicePath \"\""
Jan 09 01:00:03 crc kubenswrapper[4945]: I0109 01:00:03.778150 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsjdg\" (UniqueName: \"kubernetes.io/projected/579e9f90-c898-4c0d-aa7b-6d6bde49e872-kube-api-access-jsjdg\") on node \"crc\" DevicePath \"\""
Jan 09 01:00:04 crc kubenswrapper[4945]: I0109 01:00:04.154400 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q" event={"ID":"579e9f90-c898-4c0d-aa7b-6d6bde49e872","Type":"ContainerDied","Data":"cc5d9debbb138c5eeafd4620800a81b2bd33ac181348ce31bae9239b4d4f4e72"}
Jan 09 01:00:04 crc kubenswrapper[4945]: I0109 01:00:04.154443 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc5d9debbb138c5eeafd4620800a81b2bd33ac181348ce31bae9239b4d4f4e72"
Jan 09 01:00:04 crc kubenswrapper[4945]: I0109 01:00:04.154512 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q"
Jan 09 01:00:04 crc kubenswrapper[4945]: I0109 01:00:04.159433 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqs5v" event={"ID":"a331fb09-da5e-4f56-baa2-44bee5cc48be","Type":"ContainerStarted","Data":"c376c5cf0cb36e68a3d5e9d420259cca9216e5641bd2a7baf7a835162596b1a5"}
Jan 09 01:00:04 crc kubenswrapper[4945]: I0109 01:00:04.583411 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5"]
Jan 09 01:00:04 crc kubenswrapper[4945]: I0109 01:00:04.593809 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465295-xq7r5"]
Jan 09 01:00:05 crc kubenswrapper[4945]: I0109 01:00:05.170386 4945 generic.go:334] "Generic (PLEG): container finished" podID="a331fb09-da5e-4f56-baa2-44bee5cc48be" containerID="c376c5cf0cb36e68a3d5e9d420259cca9216e5641bd2a7baf7a835162596b1a5" exitCode=0
Jan 09 01:00:05 crc kubenswrapper[4945]: I0109 01:00:05.170432 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqs5v" event={"ID":"a331fb09-da5e-4f56-baa2-44bee5cc48be","Type":"ContainerDied","Data":"c376c5cf0cb36e68a3d5e9d420259cca9216e5641bd2a7baf7a835162596b1a5"}
Jan 09 01:00:06 crc kubenswrapper[4945]: I0109 01:00:06.016767 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2b4e739-91f2-48c0-a139-209dddd53a22" path="/var/lib/kubelet/pods/e2b4e739-91f2-48c0-a139-209dddd53a22/volumes"
Jan 09 01:00:06 crc kubenswrapper[4945]: I0109 01:00:06.182327 4945 generic.go:334] "Generic (PLEG): container finished" podID="d93bb14f-1fda-48f9-9251-e23763538847" containerID="5a15fa223f5684d35ba51017a32814d722494dbbae6fd01eb5e8b9d2087ddf0b" exitCode=0
Jan 09 01:00:06 crc kubenswrapper[4945]: I0109 01:00:06.182385 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4qtv7" event={"ID":"d93bb14f-1fda-48f9-9251-e23763538847","Type":"ContainerDied","Data":"5a15fa223f5684d35ba51017a32814d722494dbbae6fd01eb5e8b9d2087ddf0b"}
Jan 09 01:00:09 crc kubenswrapper[4945]: I0109 01:00:09.610034 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-4qtv7"
Jan 09 01:00:09 crc kubenswrapper[4945]: I0109 01:00:09.797161 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-scripts\") pod \"d93bb14f-1fda-48f9-9251-e23763538847\" (UID: \"d93bb14f-1fda-48f9-9251-e23763538847\") "
Jan 09 01:00:09 crc kubenswrapper[4945]: I0109 01:00:09.797580 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-config-data\") pod \"d93bb14f-1fda-48f9-9251-e23763538847\" (UID: \"d93bb14f-1fda-48f9-9251-e23763538847\") "
Jan 09 01:00:09 crc kubenswrapper[4945]: I0109 01:00:09.797746 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d93bb14f-1fda-48f9-9251-e23763538847-config-data-merged\") pod \"d93bb14f-1fda-48f9-9251-e23763538847\" (UID: \"d93bb14f-1fda-48f9-9251-e23763538847\") "
Jan 09 01:00:09 crc kubenswrapper[4945]: I0109 01:00:09.797781 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-combined-ca-bundle\") pod \"d93bb14f-1fda-48f9-9251-e23763538847\" (UID: \"d93bb14f-1fda-48f9-9251-e23763538847\") "
Jan 09 01:00:09 crc kubenswrapper[4945]: I0109 01:00:09.802892 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-scripts" (OuterVolumeSpecName: "scripts") pod "d93bb14f-1fda-48f9-9251-e23763538847" (UID: "d93bb14f-1fda-48f9-9251-e23763538847"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:00:09 crc kubenswrapper[4945]: I0109 01:00:09.802904 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-config-data" (OuterVolumeSpecName: "config-data") pod "d93bb14f-1fda-48f9-9251-e23763538847" (UID: "d93bb14f-1fda-48f9-9251-e23763538847"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:00:09 crc kubenswrapper[4945]: I0109 01:00:09.823438 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d93bb14f-1fda-48f9-9251-e23763538847-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "d93bb14f-1fda-48f9-9251-e23763538847" (UID: "d93bb14f-1fda-48f9-9251-e23763538847"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:00:09 crc kubenswrapper[4945]: I0109 01:00:09.826948 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d93bb14f-1fda-48f9-9251-e23763538847" (UID: "d93bb14f-1fda-48f9-9251-e23763538847"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:00:09 crc kubenswrapper[4945]: I0109 01:00:09.900366 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 01:00:09 crc kubenswrapper[4945]: I0109 01:00:09.900392 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:00:09 crc kubenswrapper[4945]: I0109 01:00:09.900403 4945 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d93bb14f-1fda-48f9-9251-e23763538847-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 09 01:00:09 crc kubenswrapper[4945]: I0109 01:00:09.900412 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93bb14f-1fda-48f9-9251-e23763538847-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:00:10 crc kubenswrapper[4945]: I0109 01:00:10.230844 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqs5v" event={"ID":"a331fb09-da5e-4f56-baa2-44bee5cc48be","Type":"ContainerStarted","Data":"47872555332ba38f91f44dd1a86ce374318f22ddfc6cb73436d0450662302fa2"} Jan 09 01:00:10 crc kubenswrapper[4945]: I0109 01:00:10.240836 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-s964b" event={"ID":"e00cd5b2-f064-4088-8d6a-7ad028fc7147","Type":"ContainerStarted","Data":"61825d7431a99c601d63424620e17460c8225e2f9fe825b6b5373f42f92d870f"} Jan 09 01:00:10 crc kubenswrapper[4945]: I0109 01:00:10.241245 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-s964b" Jan 09 01:00:10 crc kubenswrapper[4945]: I0109 01:00:10.245393 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-4qtv7" event={"ID":"d93bb14f-1fda-48f9-9251-e23763538847","Type":"ContainerDied","Data":"32fc119ab353b0a8ede031b1d84a1fd3986869c2d4c9cd630fd92f7c125e10f2"} Jan 09 01:00:10 crc kubenswrapper[4945]: I0109 01:00:10.245422 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32fc119ab353b0a8ede031b1d84a1fd3986869c2d4c9cd630fd92f7c125e10f2" Jan 09 01:00:10 crc kubenswrapper[4945]: I0109 01:00:10.245475 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-4qtv7" Jan 09 01:00:10 crc kubenswrapper[4945]: I0109 01:00:10.256789 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wqs5v" podStartSLOduration=3.355908953 podStartE2EDuration="12.256765112s" podCreationTimestamp="2026-01-09 00:59:58 +0000 UTC" firstStartedPulling="2026-01-09 01:00:01.107903331 +0000 UTC m=+6271.419062277" lastFinishedPulling="2026-01-09 01:00:10.00875949 +0000 UTC m=+6280.319918436" observedRunningTime="2026-01-09 01:00:10.246497919 +0000 UTC m=+6280.557656865" watchObservedRunningTime="2026-01-09 01:00:10.256765112 +0000 UTC m=+6280.567924058" Jan 09 01:00:10 crc kubenswrapper[4945]: I0109 01:00:10.271548 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-s964b" podStartSLOduration=1.7109524120000001 podStartE2EDuration="15.271519375s" podCreationTimestamp="2026-01-09 00:59:55 +0000 UTC" firstStartedPulling="2026-01-09 00:59:56.361491413 +0000 UTC m=+6266.672650359" lastFinishedPulling="2026-01-09 01:00:09.922058376 +0000 UTC m=+6280.233217322" observedRunningTime="2026-01-09 01:00:10.271140596 +0000 UTC m=+6280.582299542" watchObservedRunningTime="2026-01-09 01:00:10.271519375 +0000 UTC m=+6280.582678311" Jan 09 01:00:11 crc kubenswrapper[4945]: I0109 01:00:11.171354 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 01:00:11 crc kubenswrapper[4945]: I0109 01:00:11.197734 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-ff95db4fd-q2s54" Jan 09 01:00:11 crc kubenswrapper[4945]: I0109 01:00:11.270866 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-vrpv8" event={"ID":"c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb","Type":"ContainerStarted","Data":"1b07eb2750cb99560a9190d37e6518fce8da307f4546c27f00a86cac4d5be19a"} Jan 09 01:00:14 crc kubenswrapper[4945]: I0109 01:00:14.306043 4945 generic.go:334] "Generic (PLEG): container finished" podID="c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb" containerID="1b07eb2750cb99560a9190d37e6518fce8da307f4546c27f00a86cac4d5be19a" exitCode=0 Jan 09 01:00:14 crc kubenswrapper[4945]: I0109 01:00:14.306210 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-vrpv8" event={"ID":"c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb","Type":"ContainerDied","Data":"1b07eb2750cb99560a9190d37e6518fce8da307f4546c27f00a86cac4d5be19a"} Jan 09 01:00:18 crc kubenswrapper[4945]: I0109 01:00:18.958172 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wqs5v" Jan 09 01:00:18 crc kubenswrapper[4945]: I0109 01:00:18.958662 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wqs5v" Jan 09 01:00:19 crc kubenswrapper[4945]: I0109 01:00:19.023710 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wqs5v" Jan 09 01:00:19 crc kubenswrapper[4945]: I0109 01:00:19.355859 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-vrpv8" event={"ID":"c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb","Type":"ContainerStarted","Data":"d8b4540fe8df895ee9e20b6db21e6839285b368085bf16bdf0f210aeee09c76e"} Jan 09 01:00:19 crc kubenswrapper[4945]: I0109 01:00:19.380092 4945 
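
The podStartSLOduration values in these entries are consistent with the SLO duration being the end-to-end startup time minus time spent pulling images. For certified-operators-wqs5v above: pull time is 01:00:10.00875949 - 01:00:01.107903331 = 8.900856159s, and 12.256765112s - 8.900856159s = 3.355908953s, exactly the logged value; octavia-rsyslog-s964b checks out the same way, and pods that pulled nothing (octavia-db-sync-4qtv7 earlier) report zero-value pull timestamps with SLO equal to E2E. A sketch that reproduces the arithmetic from the logged timestamps:

    // sloduration.go - redo the podStartSLOduration arithmetic for the
    // certified-operators-wqs5v entry above, on the reading that the tracker
    // subtracts image-pull time from the end-to-end startup duration.
    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(v string) time.Time {
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2026-01-09 00:59:58 +0000 UTC")
    	firstPull := mustParse("2026-01-09 01:00:01.107903331 +0000 UTC")
    	lastPull := mustParse("2026-01-09 01:00:10.00875949 +0000 UTC")
    	running := mustParse("2026-01-09 01:00:10.246497919 +0000 UTC")

    	e2e := running.Sub(created)          // podStartE2EDuration
    	slo := e2e - lastPull.Sub(firstPull) // minus time spent pulling images
    	fmt.Println(e2e, slo)                // prints: 12.256765112s 3.355908953s
    }
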
Jan 09 01:00:19 crc kubenswrapper[4945]: I0109 01:00:19.380092 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-vrpv8" podStartSLOduration=1.777227911 podStartE2EDuration="23.380070155s" podCreationTimestamp="2026-01-09 00:59:56 +0000 UTC" firstStartedPulling="2026-01-09 00:59:57.009306284 +0000 UTC m=+6267.320465220" lastFinishedPulling="2026-01-09 01:00:18.612148518 +0000 UTC m=+6288.923307464" observedRunningTime="2026-01-09 01:00:19.374493538 +0000 UTC m=+6289.685652504" watchObservedRunningTime="2026-01-09 01:00:19.380070155 +0000 UTC m=+6289.691229101"
Jan 09 01:00:19 crc kubenswrapper[4945]: I0109 01:00:19.411296 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wqs5v"
Jan 09 01:00:19 crc kubenswrapper[4945]: I0109 01:00:19.464717 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wqs5v"]
Jan 09 01:00:21 crc kubenswrapper[4945]: I0109 01:00:21.377232 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wqs5v" podUID="a331fb09-da5e-4f56-baa2-44bee5cc48be" containerName="registry-server" containerID="cri-o://47872555332ba38f91f44dd1a86ce374318f22ddfc6cb73436d0450662302fa2" gracePeriod=2
Jan 09 01:00:21 crc kubenswrapper[4945]: I0109 01:00:21.852262 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wqs5v"
Jan 09 01:00:21 crc kubenswrapper[4945]: I0109 01:00:21.867669 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c62dw\" (UniqueName: \"kubernetes.io/projected/a331fb09-da5e-4f56-baa2-44bee5cc48be-kube-api-access-c62dw\") pod \"a331fb09-da5e-4f56-baa2-44bee5cc48be\" (UID: \"a331fb09-da5e-4f56-baa2-44bee5cc48be\") "
Jan 09 01:00:21 crc kubenswrapper[4945]: I0109 01:00:21.867770 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a331fb09-da5e-4f56-baa2-44bee5cc48be-catalog-content\") pod \"a331fb09-da5e-4f56-baa2-44bee5cc48be\" (UID: \"a331fb09-da5e-4f56-baa2-44bee5cc48be\") "
Jan 09 01:00:21 crc kubenswrapper[4945]: I0109 01:00:21.867842 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a331fb09-da5e-4f56-baa2-44bee5cc48be-utilities\") pod \"a331fb09-da5e-4f56-baa2-44bee5cc48be\" (UID: \"a331fb09-da5e-4f56-baa2-44bee5cc48be\") "
Jan 09 01:00:21 crc kubenswrapper[4945]: I0109 01:00:21.868775 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a331fb09-da5e-4f56-baa2-44bee5cc48be-utilities" (OuterVolumeSpecName: "utilities") pod "a331fb09-da5e-4f56-baa2-44bee5cc48be" (UID: "a331fb09-da5e-4f56-baa2-44bee5cc48be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:00:21 crc kubenswrapper[4945]: I0109 01:00:21.876803 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a331fb09-da5e-4f56-baa2-44bee5cc48be-kube-api-access-c62dw" (OuterVolumeSpecName: "kube-api-access-c62dw") pod "a331fb09-da5e-4f56-baa2-44bee5cc48be" (UID: "a331fb09-da5e-4f56-baa2-44bee5cc48be"). InnerVolumeSpecName "kube-api-access-c62dw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:00:21 crc kubenswrapper[4945]: I0109 01:00:21.927607 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a331fb09-da5e-4f56-baa2-44bee5cc48be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a331fb09-da5e-4f56-baa2-44bee5cc48be" (UID: "a331fb09-da5e-4f56-baa2-44bee5cc48be"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:00:21 crc kubenswrapper[4945]: I0109 01:00:21.969252 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a331fb09-da5e-4f56-baa2-44bee5cc48be-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 01:00:21 crc kubenswrapper[4945]: I0109 01:00:21.969505 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c62dw\" (UniqueName: \"kubernetes.io/projected/a331fb09-da5e-4f56-baa2-44bee5cc48be-kube-api-access-c62dw\") on node \"crc\" DevicePath \"\""
Jan 09 01:00:21 crc kubenswrapper[4945]: I0109 01:00:21.969572 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a331fb09-da5e-4f56-baa2-44bee5cc48be-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 01:00:22 crc kubenswrapper[4945]: I0109 01:00:22.395427 4945 generic.go:334] "Generic (PLEG): container finished" podID="a331fb09-da5e-4f56-baa2-44bee5cc48be" containerID="47872555332ba38f91f44dd1a86ce374318f22ddfc6cb73436d0450662302fa2" exitCode=0
Jan 09 01:00:22 crc kubenswrapper[4945]: I0109 01:00:22.395756 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqs5v" event={"ID":"a331fb09-da5e-4f56-baa2-44bee5cc48be","Type":"ContainerDied","Data":"47872555332ba38f91f44dd1a86ce374318f22ddfc6cb73436d0450662302fa2"}
Jan 09 01:00:22 crc kubenswrapper[4945]: I0109 01:00:22.395981 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqs5v" event={"ID":"a331fb09-da5e-4f56-baa2-44bee5cc48be","Type":"ContainerDied","Data":"4e5ac432fdf53f5a91ff25a9cfc2ee780ac3b2b79db6f2e16268815dd7a17e8b"}
Jan 09 01:00:22 crc kubenswrapper[4945]: I0109 01:00:22.396046 4945 scope.go:117] "RemoveContainer" containerID="47872555332ba38f91f44dd1a86ce374318f22ddfc6cb73436d0450662302fa2"
Jan 09 01:00:22 crc kubenswrapper[4945]: I0109 01:00:22.395808 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wqs5v"
Jan 09 01:00:22 crc kubenswrapper[4945]: I0109 01:00:22.438272 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wqs5v"]
Jan 09 01:00:22 crc kubenswrapper[4945]: I0109 01:00:22.445450 4945 scope.go:117] "RemoveContainer" containerID="c376c5cf0cb36e68a3d5e9d420259cca9216e5641bd2a7baf7a835162596b1a5"
Jan 09 01:00:22 crc kubenswrapper[4945]: I0109 01:00:22.449017 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wqs5v"]
Jan 09 01:00:22 crc kubenswrapper[4945]: I0109 01:00:22.472460 4945 scope.go:117] "RemoveContainer" containerID="2e4ca57ed9aacb90a237a52937b04d26c41de859c4948c68f6b75b3cbefeb174"
Jan 09 01:00:22 crc kubenswrapper[4945]: I0109 01:00:22.512502 4945 scope.go:117] "RemoveContainer" containerID="47872555332ba38f91f44dd1a86ce374318f22ddfc6cb73436d0450662302fa2"
Jan 09 01:00:22 crc kubenswrapper[4945]: E0109 01:00:22.513132 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47872555332ba38f91f44dd1a86ce374318f22ddfc6cb73436d0450662302fa2\": container with ID starting with 47872555332ba38f91f44dd1a86ce374318f22ddfc6cb73436d0450662302fa2 not found: ID does not exist" containerID="47872555332ba38f91f44dd1a86ce374318f22ddfc6cb73436d0450662302fa2"
Jan 09 01:00:22 crc kubenswrapper[4945]: I0109 01:00:22.513183 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47872555332ba38f91f44dd1a86ce374318f22ddfc6cb73436d0450662302fa2"} err="failed to get container status \"47872555332ba38f91f44dd1a86ce374318f22ddfc6cb73436d0450662302fa2\": rpc error: code = NotFound desc = could not find container \"47872555332ba38f91f44dd1a86ce374318f22ddfc6cb73436d0450662302fa2\": container with ID starting with 47872555332ba38f91f44dd1a86ce374318f22ddfc6cb73436d0450662302fa2 not found: ID does not exist"
Jan 09 01:00:22 crc kubenswrapper[4945]: I0109 01:00:22.513213 4945 scope.go:117] "RemoveContainer" containerID="c376c5cf0cb36e68a3d5e9d420259cca9216e5641bd2a7baf7a835162596b1a5"
Jan 09 01:00:22 crc kubenswrapper[4945]: E0109 01:00:22.513689 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c376c5cf0cb36e68a3d5e9d420259cca9216e5641bd2a7baf7a835162596b1a5\": container with ID starting with c376c5cf0cb36e68a3d5e9d420259cca9216e5641bd2a7baf7a835162596b1a5 not found: ID does not exist" containerID="c376c5cf0cb36e68a3d5e9d420259cca9216e5641bd2a7baf7a835162596b1a5"
Jan 09 01:00:22 crc kubenswrapper[4945]: I0109 01:00:22.513757 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c376c5cf0cb36e68a3d5e9d420259cca9216e5641bd2a7baf7a835162596b1a5"} err="failed to get container status \"c376c5cf0cb36e68a3d5e9d420259cca9216e5641bd2a7baf7a835162596b1a5\": rpc error: code = NotFound desc = could not find container \"c376c5cf0cb36e68a3d5e9d420259cca9216e5641bd2a7baf7a835162596b1a5\": container with ID starting with c376c5cf0cb36e68a3d5e9d420259cca9216e5641bd2a7baf7a835162596b1a5 not found: ID does not exist"
Jan 09 01:00:22 crc kubenswrapper[4945]: I0109 01:00:22.513794 4945 scope.go:117] "RemoveContainer" containerID="2e4ca57ed9aacb90a237a52937b04d26c41de859c4948c68f6b75b3cbefeb174"
Jan 09 01:00:22 crc kubenswrapper[4945]: E0109 01:00:22.514216 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e4ca57ed9aacb90a237a52937b04d26c41de859c4948c68f6b75b3cbefeb174\": container with ID starting with 2e4ca57ed9aacb90a237a52937b04d26c41de859c4948c68f6b75b3cbefeb174 not found: ID does not exist" containerID="2e4ca57ed9aacb90a237a52937b04d26c41de859c4948c68f6b75b3cbefeb174"
Jan 09 01:00:22 crc kubenswrapper[4945]: I0109 01:00:22.514248 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e4ca57ed9aacb90a237a52937b04d26c41de859c4948c68f6b75b3cbefeb174"} err="failed to get container status \"2e4ca57ed9aacb90a237a52937b04d26c41de859c4948c68f6b75b3cbefeb174\": rpc error: code = NotFound desc = could not find container \"2e4ca57ed9aacb90a237a52937b04d26c41de859c4948c68f6b75b3cbefeb174\": container with ID starting with 2e4ca57ed9aacb90a237a52937b04d26c41de859c4948c68f6b75b3cbefeb174 not found: ID does not exist"
Jan 09 01:00:24 crc kubenswrapper[4945]: I0109 01:00:24.013548 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a331fb09-da5e-4f56-baa2-44bee5cc48be" path="/var/lib/kubelet/pods/a331fb09-da5e-4f56-baa2-44bee5cc48be/volumes"
Jan 09 01:00:25 crc kubenswrapper[4945]: I0109 01:00:25.807326 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-s964b"
Jan 09 01:00:37 crc kubenswrapper[4945]: I0109 01:00:37.631203 4945 scope.go:117] "RemoveContainer" containerID="61715a64ea6f47dc27d07dd5b66ac0e8b674db25a5b3d2b41f21ba9bdb47234f"
Jan 09 01:00:37 crc kubenswrapper[4945]: I0109 01:00:37.656243 4945 scope.go:117] "RemoveContainer" containerID="6fc9752a0f70bd62b30e47f44faed8bb7c3ef2a5778362db9059b8ec8ff808a2"
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.020199 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-vrpv8"]
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.021051 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-59f8cff499-vrpv8" podUID="c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb" containerName="octavia-amphora-httpd" containerID="cri-o://d8b4540fe8df895ee9e20b6db21e6839285b368085bf16bdf0f210aeee09c76e" gracePeriod=30
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.683626 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-vrpv8"
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.767314 4945 generic.go:334] "Generic (PLEG): container finished" podID="c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb" containerID="d8b4540fe8df895ee9e20b6db21e6839285b368085bf16bdf0f210aeee09c76e" exitCode=0
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.767368 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-vrpv8" event={"ID":"c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb","Type":"ContainerDied","Data":"d8b4540fe8df895ee9e20b6db21e6839285b368085bf16bdf0f210aeee09c76e"}
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.767414 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-vrpv8" event={"ID":"c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb","Type":"ContainerDied","Data":"3a52b62be7014673c9dacd0fb5708e5c3e9859b45f5e66a2b8bc89454b14ff43"}
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.767438 4945 scope.go:117] "RemoveContainer" containerID="d8b4540fe8df895ee9e20b6db21e6839285b368085bf16bdf0f210aeee09c76e"
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.767599 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-vrpv8"
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.781802 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb-httpd-config\") pod \"c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb\" (UID: \"c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb\") "
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.781857 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb-amphora-image\") pod \"c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb\" (UID: \"c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb\") "
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.802779 4945 scope.go:117] "RemoveContainer" containerID="1b07eb2750cb99560a9190d37e6518fce8da307f4546c27f00a86cac4d5be19a"
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.834891 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb" (UID: "c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.878063 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb" (UID: "c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.884653 4945 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.884680 4945 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb-amphora-image\") on node \"crc\" DevicePath \"\""
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.896816 4945 scope.go:117] "RemoveContainer" containerID="d8b4540fe8df895ee9e20b6db21e6839285b368085bf16bdf0f210aeee09c76e"
Jan 09 01:00:55 crc kubenswrapper[4945]: E0109 01:00:55.897530 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8b4540fe8df895ee9e20b6db21e6839285b368085bf16bdf0f210aeee09c76e\": container with ID starting with d8b4540fe8df895ee9e20b6db21e6839285b368085bf16bdf0f210aeee09c76e not found: ID does not exist" containerID="d8b4540fe8df895ee9e20b6db21e6839285b368085bf16bdf0f210aeee09c76e"
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.897586 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8b4540fe8df895ee9e20b6db21e6839285b368085bf16bdf0f210aeee09c76e"} err="failed to get container status \"d8b4540fe8df895ee9e20b6db21e6839285b368085bf16bdf0f210aeee09c76e\": rpc error: code = NotFound desc = could not find container \"d8b4540fe8df895ee9e20b6db21e6839285b368085bf16bdf0f210aeee09c76e\": container with ID starting with d8b4540fe8df895ee9e20b6db21e6839285b368085bf16bdf0f210aeee09c76e not found: ID does not exist"
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.897619 4945 scope.go:117] "RemoveContainer" containerID="1b07eb2750cb99560a9190d37e6518fce8da307f4546c27f00a86cac4d5be19a"
Jan 09 01:00:55 crc kubenswrapper[4945]: E0109 01:00:55.898018 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b07eb2750cb99560a9190d37e6518fce8da307f4546c27f00a86cac4d5be19a\": container with ID starting with 1b07eb2750cb99560a9190d37e6518fce8da307f4546c27f00a86cac4d5be19a not found: ID does not exist" containerID="1b07eb2750cb99560a9190d37e6518fce8da307f4546c27f00a86cac4d5be19a"
Jan 09 01:00:55 crc kubenswrapper[4945]: I0109 01:00:55.898138 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b07eb2750cb99560a9190d37e6518fce8da307f4546c27f00a86cac4d5be19a"} err="failed to get container status \"1b07eb2750cb99560a9190d37e6518fce8da307f4546c27f00a86cac4d5be19a\": rpc error: code = NotFound desc = could not find container \"1b07eb2750cb99560a9190d37e6518fce8da307f4546c27f00a86cac4d5be19a\": container with ID starting with 1b07eb2750cb99560a9190d37e6518fce8da307f4546c27f00a86cac4d5be19a not found: ID does not exist"
Jan 09 01:00:56 crc kubenswrapper[4945]: I0109 01:00:56.090476 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-vrpv8"]
Jan 09 01:00:56 crc kubenswrapper[4945]: I0109 01:00:56.098942 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-vrpv8"]
podUID="c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb" path="/var/lib/kubelet/pods/c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb/volumes" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.147134 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29465341-bb2bt"] Jan 09 01:01:00 crc kubenswrapper[4945]: E0109 01:01:00.150137 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb" containerName="init" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.150187 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb" containerName="init" Jan 09 01:01:00 crc kubenswrapper[4945]: E0109 01:01:00.150227 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="579e9f90-c898-4c0d-aa7b-6d6bde49e872" containerName="collect-profiles" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.150242 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="579e9f90-c898-4c0d-aa7b-6d6bde49e872" containerName="collect-profiles" Jan 09 01:01:00 crc kubenswrapper[4945]: E0109 01:01:00.150284 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a331fb09-da5e-4f56-baa2-44bee5cc48be" containerName="extract-utilities" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.150293 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a331fb09-da5e-4f56-baa2-44bee5cc48be" containerName="extract-utilities" Jan 09 01:01:00 crc kubenswrapper[4945]: E0109 01:01:00.150314 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a331fb09-da5e-4f56-baa2-44bee5cc48be" containerName="extract-content" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.150323 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a331fb09-da5e-4f56-baa2-44bee5cc48be" containerName="extract-content" Jan 09 01:01:00 crc kubenswrapper[4945]: E0109 01:01:00.150350 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb" containerName="octavia-amphora-httpd" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.150359 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb" containerName="octavia-amphora-httpd" Jan 09 01:01:00 crc kubenswrapper[4945]: E0109 01:01:00.150387 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93bb14f-1fda-48f9-9251-e23763538847" containerName="octavia-db-sync" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.150395 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93bb14f-1fda-48f9-9251-e23763538847" containerName="octavia-db-sync" Jan 09 01:01:00 crc kubenswrapper[4945]: E0109 01:01:00.150405 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93bb14f-1fda-48f9-9251-e23763538847" containerName="init" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.150413 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93bb14f-1fda-48f9-9251-e23763538847" containerName="init" Jan 09 01:01:00 crc kubenswrapper[4945]: E0109 01:01:00.150444 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a331fb09-da5e-4f56-baa2-44bee5cc48be" containerName="registry-server" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.150452 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a331fb09-da5e-4f56-baa2-44bee5cc48be" containerName="registry-server" Jan 09 01:01:00 crc kubenswrapper[4945]: E0109 01:01:00.150466 4945 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="974b6e96-245d-4018-84b6-beed0b612b9c" containerName="ovn-config" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.150483 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="974b6e96-245d-4018-84b6-beed0b612b9c" containerName="ovn-config" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.151173 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="974b6e96-245d-4018-84b6-beed0b612b9c" containerName="ovn-config" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.151207 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="a331fb09-da5e-4f56-baa2-44bee5cc48be" containerName="registry-server" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.151230 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="579e9f90-c898-4c0d-aa7b-6d6bde49e872" containerName="collect-profiles" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.151246 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="d93bb14f-1fda-48f9-9251-e23763538847" containerName="octavia-db-sync" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.151272 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3412a24-c3cc-4a39-a4ca-97b3eb40c4cb" containerName="octavia-amphora-httpd" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.153074 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29465341-bb2bt" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.196073 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-tgb9d"] Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.199566 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-tgb9d" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.210330 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29465341-bb2bt"] Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.216849 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.217203 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.221131 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.243274 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-tgb9d"] Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.266390 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-combined-ca-bundle\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.266438 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-config-data-merged\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d" Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.266463 4945 
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.266463 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-config-data\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.266499 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj5f7\" (UniqueName: \"kubernetes.io/projected/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-kube-api-access-xj5f7\") pod \"keystone-cron-29465341-bb2bt\" (UID: \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\") " pod="openstack/keystone-cron-29465341-bb2bt"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.266520 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-scripts\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.266546 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-combined-ca-bundle\") pod \"keystone-cron-29465341-bb2bt\" (UID: \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\") " pod="openstack/keystone-cron-29465341-bb2bt"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.266566 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-hm-ports\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.266616 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-fernet-keys\") pod \"keystone-cron-29465341-bb2bt\" (UID: \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\") " pod="openstack/keystone-cron-29465341-bb2bt"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.266660 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-config-data\") pod \"keystone-cron-29465341-bb2bt\" (UID: \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\") " pod="openstack/keystone-cron-29465341-bb2bt"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.266696 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-amphora-certs\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.368852 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-config-data\") pod \"keystone-cron-29465341-bb2bt\" (UID: \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\") " pod="openstack/keystone-cron-29465341-bb2bt"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.369203 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-amphora-certs\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.369344 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-combined-ca-bundle\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.369448 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-config-data-merged\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.369566 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-config-data\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.369685 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj5f7\" (UniqueName: \"kubernetes.io/projected/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-kube-api-access-xj5f7\") pod \"keystone-cron-29465341-bb2bt\" (UID: \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\") " pod="openstack/keystone-cron-29465341-bb2bt"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.369792 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-scripts\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.370375 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-combined-ca-bundle\") pod \"keystone-cron-29465341-bb2bt\" (UID: \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\") " pod="openstack/keystone-cron-29465341-bb2bt"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.370499 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-hm-ports\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.370645 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-fernet-keys\") pod \"keystone-cron-29465341-bb2bt\" (UID: \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\") " pod="openstack/keystone-cron-29465341-bb2bt"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.370194 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-config-data-merged\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.372598 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-hm-ports\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.375540 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-config-data\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.376029 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-combined-ca-bundle\") pod \"keystone-cron-29465341-bb2bt\" (UID: \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\") " pod="openstack/keystone-cron-29465341-bb2bt"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.376270 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-combined-ca-bundle\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.376543 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-fernet-keys\") pod \"keystone-cron-29465341-bb2bt\" (UID: \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\") " pod="openstack/keystone-cron-29465341-bb2bt"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.376737 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-scripts\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.377200 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-config-data\") pod \"keystone-cron-29465341-bb2bt\" (UID: \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\") " pod="openstack/keystone-cron-29465341-bb2bt"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.385965 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/8d60e34d-3237-4db9-86c1-b9b7ade05a0c-amphora-certs\") pod \"octavia-healthmanager-tgb9d\" (UID: \"8d60e34d-3237-4db9-86c1-b9b7ade05a0c\") " pod="openstack/octavia-healthmanager-tgb9d"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.398962 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj5f7\" (UniqueName: \"kubernetes.io/projected/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-kube-api-access-xj5f7\") pod \"keystone-cron-29465341-bb2bt\" (UID: \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\") " pod="openstack/keystone-cron-29465341-bb2bt"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.485617 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29465341-bb2bt"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.556083 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-tgb9d"
Jan 09 01:01:00 crc kubenswrapper[4945]: I0109 01:01:00.999533 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29465341-bb2bt"]
Jan 09 01:01:01 crc kubenswrapper[4945]: I0109 01:01:01.191882 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-tgb9d"]
Jan 09 01:01:01 crc kubenswrapper[4945]: I0109 01:01:01.803406 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-9ql7c"]
Jan 09 01:01:01 crc kubenswrapper[4945]: I0109 01:01:01.815443 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-9ql7c"
Jan 09 01:01:01 crc kubenswrapper[4945]: I0109 01:01:01.816499 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-9ql7c"]
Jan 09 01:01:01 crc kubenswrapper[4945]: I0109 01:01:01.821799 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data"
Jan 09 01:01:01 crc kubenswrapper[4945]: I0109 01:01:01.822013 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts"
Jan 09 01:01:01 crc kubenswrapper[4945]: I0109 01:01:01.837786 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29465341-bb2bt" event={"ID":"b9f78d1d-dc0b-4aac-9898-2c507f6944e1","Type":"ContainerStarted","Data":"8303d2f9d0f4810b4981fb2fe161238764283c8fce97ea969d4239ece52b1ce7"}
Jan 09 01:01:01 crc kubenswrapper[4945]: I0109 01:01:01.837823 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29465341-bb2bt" event={"ID":"b9f78d1d-dc0b-4aac-9898-2c507f6944e1","Type":"ContainerStarted","Data":"820e9f55b9202176a7cf7ccdd8626fb7a0be782e47ffa40ed1c1052672b39d6f"}
Jan 09 01:01:01 crc kubenswrapper[4945]: I0109 01:01:01.849794 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-tgb9d" event={"ID":"8d60e34d-3237-4db9-86c1-b9b7ade05a0c","Type":"ContainerStarted","Data":"0738f34f8ca7c3abc8a9cc5d23c67092f4769b6a2de2e1cd01ad94291934ac29"}
Jan 09 01:01:01 crc kubenswrapper[4945]: I0109 01:01:01.849840 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-tgb9d" event={"ID":"8d60e34d-3237-4db9-86c1-b9b7ade05a0c","Type":"ContainerStarted","Data":"3d81da85a11a4d33dcd97842d4306ef9fa1d97533c8278ef336bbbf609bf70ed"}
Jan 09 01:01:01 crc kubenswrapper[4945]: I0109 01:01:01.900900 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29465341-bb2bt" podStartSLOduration=1.900876342 podStartE2EDuration="1.900876342s" podCreationTimestamp="2026-01-09 01:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 01:01:01.896459793 +0000 UTC m=+6332.207618739" watchObservedRunningTime="2026-01-09 01:01:01.900876342 +0000 UTC m=+6332.212035288"
Jan 09 01:01:01 crc kubenswrapper[4945]: I0109
01:01:01.902467 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1208c3c7-070c-4387-9dc7-fbfda06186fa-config-data-merged\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:01 crc kubenswrapper[4945]: I0109 01:01:01.902594 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/1208c3c7-070c-4387-9dc7-fbfda06186fa-amphora-certs\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:01 crc kubenswrapper[4945]: I0109 01:01:01.902655 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1208c3c7-070c-4387-9dc7-fbfda06186fa-config-data\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:01 crc kubenswrapper[4945]: I0109 01:01:01.902821 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1208c3c7-070c-4387-9dc7-fbfda06186fa-combined-ca-bundle\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:01 crc kubenswrapper[4945]: I0109 01:01:01.902906 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1208c3c7-070c-4387-9dc7-fbfda06186fa-scripts\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:01 crc kubenswrapper[4945]: I0109 01:01:01.902982 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1208c3c7-070c-4387-9dc7-fbfda06186fa-hm-ports\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:02 crc kubenswrapper[4945]: I0109 01:01:02.004108 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1208c3c7-070c-4387-9dc7-fbfda06186fa-scripts\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:02 crc kubenswrapper[4945]: I0109 01:01:02.004161 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1208c3c7-070c-4387-9dc7-fbfda06186fa-hm-ports\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:02 crc kubenswrapper[4945]: I0109 01:01:02.004180 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1208c3c7-070c-4387-9dc7-fbfda06186fa-config-data-merged\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:02 crc kubenswrapper[4945]: I0109 01:01:02.004226 4945 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/1208c3c7-070c-4387-9dc7-fbfda06186fa-amphora-certs\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:02 crc kubenswrapper[4945]: I0109 01:01:02.004259 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1208c3c7-070c-4387-9dc7-fbfda06186fa-config-data\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:02 crc kubenswrapper[4945]: I0109 01:01:02.004349 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1208c3c7-070c-4387-9dc7-fbfda06186fa-combined-ca-bundle\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:02 crc kubenswrapper[4945]: I0109 01:01:02.006220 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1208c3c7-070c-4387-9dc7-fbfda06186fa-config-data-merged\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:02 crc kubenswrapper[4945]: I0109 01:01:02.006728 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1208c3c7-070c-4387-9dc7-fbfda06186fa-hm-ports\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:02 crc kubenswrapper[4945]: I0109 01:01:02.011251 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1208c3c7-070c-4387-9dc7-fbfda06186fa-scripts\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:02 crc kubenswrapper[4945]: I0109 01:01:02.012422 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1208c3c7-070c-4387-9dc7-fbfda06186fa-config-data\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:02 crc kubenswrapper[4945]: I0109 01:01:02.012668 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1208c3c7-070c-4387-9dc7-fbfda06186fa-combined-ca-bundle\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:02 crc kubenswrapper[4945]: I0109 01:01:02.022744 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/1208c3c7-070c-4387-9dc7-fbfda06186fa-amphora-certs\") pod \"octavia-housekeeping-9ql7c\" (UID: \"1208c3c7-070c-4387-9dc7-fbfda06186fa\") " pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:02 crc kubenswrapper[4945]: I0109 01:01:02.142141 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:02 crc kubenswrapper[4945]: I0109 01:01:02.678169 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-9ql7c"] Jan 09 01:01:02 crc kubenswrapper[4945]: I0109 01:01:02.861827 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-9ql7c" event={"ID":"1208c3c7-070c-4387-9dc7-fbfda06186fa","Type":"ContainerStarted","Data":"a69f5f2502b7dfb27c3d54d56b31ddf9915b58af5c79d35c1a1ffc9c42fe0a65"} Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.055089 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-8mx9s"] Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.056808 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.059262 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.060256 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.089042 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-8mx9s"] Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.127347 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/39f06cf9-0b65-4c87-958c-8d14482890ac-amphora-certs\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.127394 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f06cf9-0b65-4c87-958c-8d14482890ac-config-data\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.127425 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/39f06cf9-0b65-4c87-958c-8d14482890ac-config-data-merged\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.127518 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f06cf9-0b65-4c87-958c-8d14482890ac-combined-ca-bundle\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.127578 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f06cf9-0b65-4c87-958c-8d14482890ac-scripts\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.127598 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: 
\"kubernetes.io/configmap/39f06cf9-0b65-4c87-958c-8d14482890ac-hm-ports\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.229113 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f06cf9-0b65-4c87-958c-8d14482890ac-combined-ca-bundle\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.230253 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f06cf9-0b65-4c87-958c-8d14482890ac-scripts\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.230293 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/39f06cf9-0b65-4c87-958c-8d14482890ac-hm-ports\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.230377 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/39f06cf9-0b65-4c87-958c-8d14482890ac-amphora-certs\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.230400 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f06cf9-0b65-4c87-958c-8d14482890ac-config-data\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.230437 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/39f06cf9-0b65-4c87-958c-8d14482890ac-config-data-merged\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.230911 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/39f06cf9-0b65-4c87-958c-8d14482890ac-config-data-merged\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.232969 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/39f06cf9-0b65-4c87-958c-8d14482890ac-hm-ports\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.238071 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f06cf9-0b65-4c87-958c-8d14482890ac-combined-ca-bundle\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 
01:01:03.238293 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f06cf9-0b65-4c87-958c-8d14482890ac-config-data\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.238455 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f06cf9-0b65-4c87-958c-8d14482890ac-scripts\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.239077 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/39f06cf9-0b65-4c87-958c-8d14482890ac-amphora-certs\") pod \"octavia-worker-8mx9s\" (UID: \"39f06cf9-0b65-4c87-958c-8d14482890ac\") " pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.385457 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.894455 4945 generic.go:334] "Generic (PLEG): container finished" podID="b9f78d1d-dc0b-4aac-9898-2c507f6944e1" containerID="8303d2f9d0f4810b4981fb2fe161238764283c8fce97ea969d4239ece52b1ce7" exitCode=0 Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.894713 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29465341-bb2bt" event={"ID":"b9f78d1d-dc0b-4aac-9898-2c507f6944e1","Type":"ContainerDied","Data":"8303d2f9d0f4810b4981fb2fe161238764283c8fce97ea969d4239ece52b1ce7"} Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.897101 4945 generic.go:334] "Generic (PLEG): container finished" podID="8d60e34d-3237-4db9-86c1-b9b7ade05a0c" containerID="0738f34f8ca7c3abc8a9cc5d23c67092f4769b6a2de2e1cd01ad94291934ac29" exitCode=0 Jan 09 01:01:03 crc kubenswrapper[4945]: I0109 01:01:03.897132 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-tgb9d" event={"ID":"8d60e34d-3237-4db9-86c1-b9b7ade05a0c","Type":"ContainerDied","Data":"0738f34f8ca7c3abc8a9cc5d23c67092f4769b6a2de2e1cd01ad94291934ac29"} Jan 09 01:01:04 crc kubenswrapper[4945]: I0109 01:01:04.035353 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-8mx9s"] Jan 09 01:01:04 crc kubenswrapper[4945]: W0109 01:01:04.048963 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39f06cf9_0b65_4c87_958c_8d14482890ac.slice/crio-a188918caa773db8c95300f88002ff8f18b05d3a06f49ae06ede550e83b787a0 WatchSource:0}: Error finding container a188918caa773db8c95300f88002ff8f18b05d3a06f49ae06ede550e83b787a0: Status 404 returned error can't find the container with id a188918caa773db8c95300f88002ff8f18b05d3a06f49ae06ede550e83b787a0 Jan 09 01:01:04 crc kubenswrapper[4945]: I0109 01:01:04.912768 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-tgb9d" event={"ID":"8d60e34d-3237-4db9-86c1-b9b7ade05a0c","Type":"ContainerStarted","Data":"f9020231cdffbb1ca5119479b41041650343186a8e2ce74a954154d0c1cbaab5"} Jan 09 01:01:04 crc kubenswrapper[4945]: I0109 01:01:04.913194 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-tgb9d" Jan 09 01:01:04 crc 
kubenswrapper[4945]: I0109 01:01:04.913787 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-8mx9s" event={"ID":"39f06cf9-0b65-4c87-958c-8d14482890ac","Type":"ContainerStarted","Data":"a188918caa773db8c95300f88002ff8f18b05d3a06f49ae06ede550e83b787a0"} Jan 09 01:01:04 crc kubenswrapper[4945]: I0109 01:01:04.942687 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-tgb9d" podStartSLOduration=4.942660823 podStartE2EDuration="4.942660823s" podCreationTimestamp="2026-01-09 01:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 01:01:04.930013981 +0000 UTC m=+6335.241172927" watchObservedRunningTime="2026-01-09 01:01:04.942660823 +0000 UTC m=+6335.253819769" Jan 09 01:01:05 crc kubenswrapper[4945]: I0109 01:01:05.380807 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29465341-bb2bt" Jan 09 01:01:05 crc kubenswrapper[4945]: I0109 01:01:05.479242 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-combined-ca-bundle\") pod \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\" (UID: \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\") " Jan 09 01:01:05 crc kubenswrapper[4945]: I0109 01:01:05.479311 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj5f7\" (UniqueName: \"kubernetes.io/projected/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-kube-api-access-xj5f7\") pod \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\" (UID: \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\") " Jan 09 01:01:05 crc kubenswrapper[4945]: I0109 01:01:05.479530 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-config-data\") pod \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\" (UID: \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\") " Jan 09 01:01:05 crc kubenswrapper[4945]: I0109 01:01:05.479574 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-fernet-keys\") pod \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\" (UID: \"b9f78d1d-dc0b-4aac-9898-2c507f6944e1\") " Jan 09 01:01:05 crc kubenswrapper[4945]: I0109 01:01:05.485335 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b9f78d1d-dc0b-4aac-9898-2c507f6944e1" (UID: "b9f78d1d-dc0b-4aac-9898-2c507f6944e1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:01:05 crc kubenswrapper[4945]: I0109 01:01:05.486047 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-kube-api-access-xj5f7" (OuterVolumeSpecName: "kube-api-access-xj5f7") pod "b9f78d1d-dc0b-4aac-9898-2c507f6944e1" (UID: "b9f78d1d-dc0b-4aac-9898-2c507f6944e1"). InnerVolumeSpecName "kube-api-access-xj5f7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:01:05 crc kubenswrapper[4945]: I0109 01:01:05.519787 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9f78d1d-dc0b-4aac-9898-2c507f6944e1" (UID: "b9f78d1d-dc0b-4aac-9898-2c507f6944e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:01:05 crc kubenswrapper[4945]: I0109 01:01:05.540216 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-config-data" (OuterVolumeSpecName: "config-data") pod "b9f78d1d-dc0b-4aac-9898-2c507f6944e1" (UID: "b9f78d1d-dc0b-4aac-9898-2c507f6944e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:01:05 crc kubenswrapper[4945]: I0109 01:01:05.581677 4945 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 09 01:01:05 crc kubenswrapper[4945]: I0109 01:01:05.581711 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:01:05 crc kubenswrapper[4945]: I0109 01:01:05.581722 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj5f7\" (UniqueName: \"kubernetes.io/projected/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-kube-api-access-xj5f7\") on node \"crc\" DevicePath \"\"" Jan 09 01:01:05 crc kubenswrapper[4945]: I0109 01:01:05.581731 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9f78d1d-dc0b-4aac-9898-2c507f6944e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:01:05 crc kubenswrapper[4945]: I0109 01:01:05.923181 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29465341-bb2bt" event={"ID":"b9f78d1d-dc0b-4aac-9898-2c507f6944e1","Type":"ContainerDied","Data":"820e9f55b9202176a7cf7ccdd8626fb7a0be782e47ffa40ed1c1052672b39d6f"} Jan 09 01:01:05 crc kubenswrapper[4945]: I0109 01:01:05.923241 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="820e9f55b9202176a7cf7ccdd8626fb7a0be782e47ffa40ed1c1052672b39d6f" Jan 09 01:01:05 crc kubenswrapper[4945]: I0109 01:01:05.924258 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29465341-bb2bt" Jan 09 01:01:05 crc kubenswrapper[4945]: I0109 01:01:05.924867 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-9ql7c" event={"ID":"1208c3c7-070c-4387-9dc7-fbfda06186fa","Type":"ContainerStarted","Data":"1fbc18aff59cc51e73fcda6d07cfa6bc96e05399f1336fc4bef72993bda62327"} Jan 09 01:01:06 crc kubenswrapper[4945]: I0109 01:01:06.935093 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-8mx9s" event={"ID":"39f06cf9-0b65-4c87-958c-8d14482890ac","Type":"ContainerStarted","Data":"6449cbf6ed6817b06408d59f32c6dce8bd180a6d0f53df9751a0251bd7c57683"} Jan 09 01:01:06 crc kubenswrapper[4945]: I0109 01:01:06.936550 4945 generic.go:334] "Generic (PLEG): container finished" podID="1208c3c7-070c-4387-9dc7-fbfda06186fa" containerID="1fbc18aff59cc51e73fcda6d07cfa6bc96e05399f1336fc4bef72993bda62327" exitCode=0 Jan 09 01:01:06 crc kubenswrapper[4945]: I0109 01:01:06.936593 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-9ql7c" event={"ID":"1208c3c7-070c-4387-9dc7-fbfda06186fa","Type":"ContainerDied","Data":"1fbc18aff59cc51e73fcda6d07cfa6bc96e05399f1336fc4bef72993bda62327"} Jan 09 01:01:07 crc kubenswrapper[4945]: I0109 01:01:07.950226 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-9ql7c" event={"ID":"1208c3c7-070c-4387-9dc7-fbfda06186fa","Type":"ContainerStarted","Data":"bb20f5db04e952cba69a202b9bbb89570bad6d276bbb46ba8d84890b524cae1d"} Jan 09 01:01:07 crc kubenswrapper[4945]: I0109 01:01:07.950829 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:07 crc kubenswrapper[4945]: I0109 01:01:07.952004 4945 generic.go:334] "Generic (PLEG): container finished" podID="39f06cf9-0b65-4c87-958c-8d14482890ac" containerID="6449cbf6ed6817b06408d59f32c6dce8bd180a6d0f53df9751a0251bd7c57683" exitCode=0 Jan 09 01:01:07 crc kubenswrapper[4945]: I0109 01:01:07.952044 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-8mx9s" event={"ID":"39f06cf9-0b65-4c87-958c-8d14482890ac","Type":"ContainerDied","Data":"6449cbf6ed6817b06408d59f32c6dce8bd180a6d0f53df9751a0251bd7c57683"} Jan 09 01:01:08 crc kubenswrapper[4945]: I0109 01:01:08.024493 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-9ql7c" podStartSLOduration=4.934303485 podStartE2EDuration="7.024465959s" podCreationTimestamp="2026-01-09 01:01:01 +0000 UTC" firstStartedPulling="2026-01-09 01:01:02.690315458 +0000 UTC m=+6333.001474404" lastFinishedPulling="2026-01-09 01:01:04.780477932 +0000 UTC m=+6335.091636878" observedRunningTime="2026-01-09 01:01:08.013647532 +0000 UTC m=+6338.324806488" watchObservedRunningTime="2026-01-09 01:01:08.024465959 +0000 UTC m=+6338.335624905" Jan 09 01:01:08 crc kubenswrapper[4945]: I0109 01:01:08.963670 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-8mx9s" event={"ID":"39f06cf9-0b65-4c87-958c-8d14482890ac","Type":"ContainerStarted","Data":"c472d632e53bdcf38dd400e91ddec784019222b0432bb8bd44d4de80397726a7"} Jan 09 01:01:08 crc kubenswrapper[4945]: I0109 01:01:08.964056 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:08 crc kubenswrapper[4945]: I0109 01:01:08.989515 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/octavia-worker-8mx9s" podStartSLOduration=4.164253441 podStartE2EDuration="5.989485346s" podCreationTimestamp="2026-01-09 01:01:03 +0000 UTC" firstStartedPulling="2026-01-09 01:01:04.061821407 +0000 UTC m=+6334.372980353" lastFinishedPulling="2026-01-09 01:01:05.887053312 +0000 UTC m=+6336.198212258" observedRunningTime="2026-01-09 01:01:08.981428187 +0000 UTC m=+6339.292587143" watchObservedRunningTime="2026-01-09 01:01:08.989485346 +0000 UTC m=+6339.300644292" Jan 09 01:01:13 crc kubenswrapper[4945]: I0109 01:01:13.579121 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:01:13 crc kubenswrapper[4945]: I0109 01:01:13.580115 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:01:15 crc kubenswrapper[4945]: I0109 01:01:15.587564 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-tgb9d" Jan 09 01:01:17 crc kubenswrapper[4945]: I0109 01:01:17.172272 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-9ql7c" Jan 09 01:01:18 crc kubenswrapper[4945]: I0109 01:01:18.413147 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-8mx9s" Jan 09 01:01:43 crc kubenswrapper[4945]: I0109 01:01:43.578388 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:01:43 crc kubenswrapper[4945]: I0109 01:01:43.578945 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.047388 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-xndkb"] Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.057788 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-xndkb"] Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.498664 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7787b6b4f7-q4bq4"] Jan 09 01:02:02 crc kubenswrapper[4945]: E0109 01:02:02.499138 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9f78d1d-dc0b-4aac-9898-2c507f6944e1" containerName="keystone-cron" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.499155 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9f78d1d-dc0b-4aac-9898-2c507f6944e1" containerName="keystone-cron" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.499343 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9f78d1d-dc0b-4aac-9898-2c507f6944e1" 
containerName="keystone-cron" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.500303 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.503126 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.503446 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.503731 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.503958 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-k2lnc" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.517737 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-config-data\") pod \"horizon-7787b6b4f7-q4bq4\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.518084 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5hcl\" (UniqueName: \"kubernetes.io/projected/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-kube-api-access-w5hcl\") pod \"horizon-7787b6b4f7-q4bq4\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.518177 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-scripts\") pod \"horizon-7787b6b4f7-q4bq4\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.518441 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-horizon-secret-key\") pod \"horizon-7787b6b4f7-q4bq4\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.518530 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-logs\") pod \"horizon-7787b6b4f7-q4bq4\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.534179 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7787b6b4f7-q4bq4"] Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.562905 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.563171 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" containerName="glance-log" containerID="cri-o://b2d16e33ca8e8efaf612f28f00a905906de2cafffe582802fa261eaad17bec12" gracePeriod=30 Jan 09 01:02:02 crc 
kubenswrapper[4945]: I0109 01:02:02.563639 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" containerName="glance-httpd" containerID="cri-o://f252d062fb044fd7fd3e8fa5d3469a53eb5f7a03242007b3f49f4933733053e5" gracePeriod=30 Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.621147 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-logs\") pod \"horizon-7787b6b4f7-q4bq4\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.621490 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-config-data\") pod \"horizon-7787b6b4f7-q4bq4\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.621580 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5hcl\" (UniqueName: \"kubernetes.io/projected/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-kube-api-access-w5hcl\") pod \"horizon-7787b6b4f7-q4bq4\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.621651 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-logs\") pod \"horizon-7787b6b4f7-q4bq4\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.621741 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-scripts\") pod \"horizon-7787b6b4f7-q4bq4\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.622913 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-config-data\") pod \"horizon-7787b6b4f7-q4bq4\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.623144 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-horizon-secret-key\") pod \"horizon-7787b6b4f7-q4bq4\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.623308 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-scripts\") pod \"horizon-7787b6b4f7-q4bq4\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.644104 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.644387 4945 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fef5d863-fecd-47da-9486-0329c4a00c31" containerName="glance-log" containerID="cri-o://6dbaf0cb334f1566b9c1d4b0bf61478f38132811abe5f2f0caa0009026d298a6" gracePeriod=30 Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.644706 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fef5d863-fecd-47da-9486-0329c4a00c31" containerName="glance-httpd" containerID="cri-o://6572178f0dbd6a0e003afeaaf61cbf996f5991c8c8a96ca87e81dfb3367f9bb2" gracePeriod=30 Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.650638 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-horizon-secret-key\") pod \"horizon-7787b6b4f7-q4bq4\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.663369 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5hcl\" (UniqueName: \"kubernetes.io/projected/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-kube-api-access-w5hcl\") pod \"horizon-7787b6b4f7-q4bq4\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.684240 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-695d9c5545-6txjz"] Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.686152 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.704161 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-695d9c5545-6txjz"] Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.724859 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rq7h\" (UniqueName: \"kubernetes.io/projected/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-kube-api-access-7rq7h\") pod \"horizon-695d9c5545-6txjz\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.724981 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-config-data\") pod \"horizon-695d9c5545-6txjz\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.725060 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-scripts\") pod \"horizon-695d9c5545-6txjz\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.725089 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-logs\") pod \"horizon-695d9c5545-6txjz\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.725214 
4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-horizon-secret-key\") pod \"horizon-695d9c5545-6txjz\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.827141 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-config-data\") pod \"horizon-695d9c5545-6txjz\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.827468 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-scripts\") pod \"horizon-695d9c5545-6txjz\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.827498 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-logs\") pod \"horizon-695d9c5545-6txjz\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.827554 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-horizon-secret-key\") pod \"horizon-695d9c5545-6txjz\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.827603 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rq7h\" (UniqueName: \"kubernetes.io/projected/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-kube-api-access-7rq7h\") pod \"horizon-695d9c5545-6txjz\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.828128 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-logs\") pod \"horizon-695d9c5545-6txjz\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.828219 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-scripts\") pod \"horizon-695d9c5545-6txjz\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.828742 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-config-data\") pod \"horizon-695d9c5545-6txjz\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.830945 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-horizon-secret-key\") pod \"horizon-695d9c5545-6txjz\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.845901 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rq7h\" (UniqueName: \"kubernetes.io/projected/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-kube-api-access-7rq7h\") pod \"horizon-695d9c5545-6txjz\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:02 crc kubenswrapper[4945]: I0109 01:02:02.865584 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.040141 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3f2a-account-create-update-xtdp9"] Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.063068 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3f2a-account-create-update-xtdp9"] Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.108257 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.324157 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-695d9c5545-6txjz"] Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.351332 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8694df6bfc-2hqjt"] Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.353305 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.370510 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8694df6bfc-2hqjt"] Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.392047 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7787b6b4f7-q4bq4"] Jan 09 01:02:03 crc kubenswrapper[4945]: W0109 01:02:03.404858 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode152a1d4_b6dc_4626_b45e_e2a29dcb10b0.slice/crio-9d1cbf19cd2d63c0dd0cf099c87e0d5ec9e2d0e656ce06581f6215dd559ea27b WatchSource:0}: Error finding container 9d1cbf19cd2d63c0dd0cf099c87e0d5ec9e2d0e656ce06581f6215dd559ea27b: Status 404 returned error can't find the container with id 9d1cbf19cd2d63c0dd0cf099c87e0d5ec9e2d0e656ce06581f6215dd559ea27b Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.409977 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.437528 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5107d597-feb6-4d70-9587-1b0f23041c5d-config-data\") pod \"horizon-8694df6bfc-2hqjt\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.437664 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5107d597-feb6-4d70-9587-1b0f23041c5d-scripts\") pod \"horizon-8694df6bfc-2hqjt\" (UID: 
\"5107d597-feb6-4d70-9587-1b0f23041c5d\") " pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.437702 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5107d597-feb6-4d70-9587-1b0f23041c5d-horizon-secret-key\") pod \"horizon-8694df6bfc-2hqjt\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.510077 4945 generic.go:334] "Generic (PLEG): container finished" podID="fef5d863-fecd-47da-9486-0329c4a00c31" containerID="6dbaf0cb334f1566b9c1d4b0bf61478f38132811abe5f2f0caa0009026d298a6" exitCode=143 Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.510191 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fef5d863-fecd-47da-9486-0329c4a00c31","Type":"ContainerDied","Data":"6dbaf0cb334f1566b9c1d4b0bf61478f38132811abe5f2f0caa0009026d298a6"} Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.516243 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7787b6b4f7-q4bq4" event={"ID":"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0","Type":"ContainerStarted","Data":"9d1cbf19cd2d63c0dd0cf099c87e0d5ec9e2d0e656ce06581f6215dd559ea27b"} Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.519170 4945 generic.go:334] "Generic (PLEG): container finished" podID="2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" containerID="b2d16e33ca8e8efaf612f28f00a905906de2cafffe582802fa261eaad17bec12" exitCode=143 Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.519219 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2","Type":"ContainerDied","Data":"b2d16e33ca8e8efaf612f28f00a905906de2cafffe582802fa261eaad17bec12"} Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.539249 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5107d597-feb6-4d70-9587-1b0f23041c5d-config-data\") pod \"horizon-8694df6bfc-2hqjt\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.539322 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5d27\" (UniqueName: \"kubernetes.io/projected/5107d597-feb6-4d70-9587-1b0f23041c5d-kube-api-access-p5d27\") pod \"horizon-8694df6bfc-2hqjt\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.539384 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5107d597-feb6-4d70-9587-1b0f23041c5d-logs\") pod \"horizon-8694df6bfc-2hqjt\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.539421 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5107d597-feb6-4d70-9587-1b0f23041c5d-scripts\") pod \"horizon-8694df6bfc-2hqjt\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.539485 4945 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5107d597-feb6-4d70-9587-1b0f23041c5d-horizon-secret-key\") pod \"horizon-8694df6bfc-2hqjt\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.540248 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5107d597-feb6-4d70-9587-1b0f23041c5d-scripts\") pod \"horizon-8694df6bfc-2hqjt\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.540592 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5107d597-feb6-4d70-9587-1b0f23041c5d-config-data\") pod \"horizon-8694df6bfc-2hqjt\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.544382 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5107d597-feb6-4d70-9587-1b0f23041c5d-horizon-secret-key\") pod \"horizon-8694df6bfc-2hqjt\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.585355 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-695d9c5545-6txjz"] Jan 09 01:02:03 crc kubenswrapper[4945]: W0109 01:02:03.586575 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ee786c9_d90a_4aea_a8c4_3a4a7d8fa7a2.slice/crio-daa7418c508b445fcc0ce6a2b83eb617563dc09e51b411f7143ad6b304a445ee WatchSource:0}: Error finding container daa7418c508b445fcc0ce6a2b83eb617563dc09e51b411f7143ad6b304a445ee: Status 404 returned error can't find the container with id daa7418c508b445fcc0ce6a2b83eb617563dc09e51b411f7143ad6b304a445ee Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.641554 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5d27\" (UniqueName: \"kubernetes.io/projected/5107d597-feb6-4d70-9587-1b0f23041c5d-kube-api-access-p5d27\") pod \"horizon-8694df6bfc-2hqjt\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.641664 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5107d597-feb6-4d70-9587-1b0f23041c5d-logs\") pod \"horizon-8694df6bfc-2hqjt\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.642188 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5107d597-feb6-4d70-9587-1b0f23041c5d-logs\") pod \"horizon-8694df6bfc-2hqjt\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.657051 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5d27\" (UniqueName: \"kubernetes.io/projected/5107d597-feb6-4d70-9587-1b0f23041c5d-kube-api-access-p5d27\") pod \"horizon-8694df6bfc-2hqjt\" (UID: 
\"5107d597-feb6-4d70-9587-1b0f23041c5d\") " pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:03 crc kubenswrapper[4945]: I0109 01:02:03.682595 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:04 crc kubenswrapper[4945]: I0109 01:02:04.011313 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b819698e-6f53-4cab-94ed-b8cf4ab3602c" path="/var/lib/kubelet/pods/b819698e-6f53-4cab-94ed-b8cf4ab3602c/volumes" Jan 09 01:02:04 crc kubenswrapper[4945]: I0109 01:02:04.012315 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c23d7277-7424-4f34-a337-23ed0b080c65" path="/var/lib/kubelet/pods/c23d7277-7424-4f34-a337-23ed0b080c65/volumes" Jan 09 01:02:04 crc kubenswrapper[4945]: I0109 01:02:04.177841 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8694df6bfc-2hqjt"] Jan 09 01:02:04 crc kubenswrapper[4945]: I0109 01:02:04.528619 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-695d9c5545-6txjz" event={"ID":"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2","Type":"ContainerStarted","Data":"daa7418c508b445fcc0ce6a2b83eb617563dc09e51b411f7143ad6b304a445ee"} Jan 09 01:02:04 crc kubenswrapper[4945]: I0109 01:02:04.530418 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8694df6bfc-2hqjt" event={"ID":"5107d597-feb6-4d70-9587-1b0f23041c5d","Type":"ContainerStarted","Data":"b1006e864328107cc77f2f27b6a58e63e15f507bb864e4846fbc4837471b94fe"} Jan 09 01:02:06 crc kubenswrapper[4945]: I0109 01:02:06.551723 4945 generic.go:334] "Generic (PLEG): container finished" podID="fef5d863-fecd-47da-9486-0329c4a00c31" containerID="6572178f0dbd6a0e003afeaaf61cbf996f5991c8c8a96ca87e81dfb3367f9bb2" exitCode=0 Jan 09 01:02:06 crc kubenswrapper[4945]: I0109 01:02:06.551824 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fef5d863-fecd-47da-9486-0329c4a00c31","Type":"ContainerDied","Data":"6572178f0dbd6a0e003afeaaf61cbf996f5991c8c8a96ca87e81dfb3367f9bb2"} Jan 09 01:02:06 crc kubenswrapper[4945]: I0109 01:02:06.554697 4945 generic.go:334] "Generic (PLEG): container finished" podID="2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" containerID="f252d062fb044fd7fd3e8fa5d3469a53eb5f7a03242007b3f49f4933733053e5" exitCode=0 Jan 09 01:02:06 crc kubenswrapper[4945]: I0109 01:02:06.554746 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2","Type":"ContainerDied","Data":"f252d062fb044fd7fd3e8fa5d3469a53eb5f7a03242007b3f49f4933733053e5"} Jan 09 01:02:07 crc kubenswrapper[4945]: I0109 01:02:07.676982 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="fef5d863-fecd-47da-9486-0329c4a00c31" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.1.49:9292/healthcheck\": dial tcp 10.217.1.49:9292: connect: connection refused" Jan 09 01:02:07 crc kubenswrapper[4945]: I0109 01:02:07.677068 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="fef5d863-fecd-47da-9486-0329c4a00c31" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.1.49:9292/healthcheck\": dial tcp 10.217.1.49:9292: connect: connection refused" Jan 09 01:02:09 crc kubenswrapper[4945]: I0109 01:02:09.029542 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-db-sync-kldbw"] Jan 09 01:02:09 crc kubenswrapper[4945]: I0109 01:02:09.042967 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-kldbw"] Jan 09 01:02:09 crc kubenswrapper[4945]: I0109 01:02:09.689049 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.1.50:9292/healthcheck\": dial tcp 10.217.1.50:9292: connect: connection refused" Jan 09 01:02:09 crc kubenswrapper[4945]: I0109 01:02:09.689076 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.1.50:9292/healthcheck\": dial tcp 10.217.1.50:9292: connect: connection refused" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.018168 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e54eb183-3ffc-403d-a2bc-dc3e59b5da2d" path="/var/lib/kubelet/pods/e54eb183-3ffc-403d-a2bc-dc3e59b5da2d/volumes" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.184146 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.223273 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.280568 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-config-data\") pod \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.280626 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjs79\" (UniqueName: \"kubernetes.io/projected/fef5d863-fecd-47da-9486-0329c4a00c31-kube-api-access-wjs79\") pod \"fef5d863-fecd-47da-9486-0329c4a00c31\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.280738 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-scripts\") pod \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.280780 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djknf\" (UniqueName: \"kubernetes.io/projected/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-kube-api-access-djknf\") pod \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.280813 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fef5d863-fecd-47da-9486-0329c4a00c31-httpd-run\") pod \"fef5d863-fecd-47da-9486-0329c4a00c31\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.280850 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-combined-ca-bundle\") pod \"fef5d863-fecd-47da-9486-0329c4a00c31\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.280878 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-httpd-run\") pod \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.280904 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fef5d863-fecd-47da-9486-0329c4a00c31-ceph\") pod \"fef5d863-fecd-47da-9486-0329c4a00c31\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.280963 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-ceph\") pod \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.281052 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-scripts\") pod \"fef5d863-fecd-47da-9486-0329c4a00c31\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.281115 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-config-data\") pod \"fef5d863-fecd-47da-9486-0329c4a00c31\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.281201 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fef5d863-fecd-47da-9486-0329c4a00c31-logs\") pod \"fef5d863-fecd-47da-9486-0329c4a00c31\" (UID: \"fef5d863-fecd-47da-9486-0329c4a00c31\") " Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.281250 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-logs\") pod \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.281429 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-combined-ca-bundle\") pod \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\" (UID: \"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2\") " Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.283796 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fef5d863-fecd-47da-9486-0329c4a00c31-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fef5d863-fecd-47da-9486-0329c4a00c31" (UID: "fef5d863-fecd-47da-9486-0329c4a00c31"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.284294 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fef5d863-fecd-47da-9486-0329c4a00c31-logs" (OuterVolumeSpecName: "logs") pod "fef5d863-fecd-47da-9486-0329c4a00c31" (UID: "fef5d863-fecd-47da-9486-0329c4a00c31"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.286738 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" (UID: "2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.287329 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-logs" (OuterVolumeSpecName: "logs") pod "2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" (UID: "2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.295832 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-scripts" (OuterVolumeSpecName: "scripts") pod "fef5d863-fecd-47da-9486-0329c4a00c31" (UID: "fef5d863-fecd-47da-9486-0329c4a00c31"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.296193 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-ceph" (OuterVolumeSpecName: "ceph") pod "2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" (UID: "2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.297272 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-scripts" (OuterVolumeSpecName: "scripts") pod "2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" (UID: "2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.297880 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fef5d863-fecd-47da-9486-0329c4a00c31-ceph" (OuterVolumeSpecName: "ceph") pod "fef5d863-fecd-47da-9486-0329c4a00c31" (UID: "fef5d863-fecd-47da-9486-0329c4a00c31"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.299010 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-kube-api-access-djknf" (OuterVolumeSpecName: "kube-api-access-djknf") pod "2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" (UID: "2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2"). InnerVolumeSpecName "kube-api-access-djknf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.306268 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fef5d863-fecd-47da-9486-0329c4a00c31-kube-api-access-wjs79" (OuterVolumeSpecName: "kube-api-access-wjs79") pod "fef5d863-fecd-47da-9486-0329c4a00c31" (UID: "fef5d863-fecd-47da-9486-0329c4a00c31"). InnerVolumeSpecName "kube-api-access-wjs79". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.343203 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" (UID: "2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.375330 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fef5d863-fecd-47da-9486-0329c4a00c31" (UID: "fef5d863-fecd-47da-9486-0329c4a00c31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.377556 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-config-data" (OuterVolumeSpecName: "config-data") pod "2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" (UID: "2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.385113 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-logs\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.385157 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.385173 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.385186 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjs79\" (UniqueName: \"kubernetes.io/projected/fef5d863-fecd-47da-9486-0329c4a00c31-kube-api-access-wjs79\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.385196 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.385206 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djknf\" (UniqueName: \"kubernetes.io/projected/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-kube-api-access-djknf\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.385217 4945 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fef5d863-fecd-47da-9486-0329c4a00c31-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.385230 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.385239 4945 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.385249 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fef5d863-fecd-47da-9486-0329c4a00c31-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.385258 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.385272 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.385281 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fef5d863-fecd-47da-9486-0329c4a00c31-logs\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.441224 4945 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-config-data" (OuterVolumeSpecName: "config-data") pod "fef5d863-fecd-47da-9486-0329c4a00c31" (UID: "fef5d863-fecd-47da-9486-0329c4a00c31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.487846 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fef5d863-fecd-47da-9486-0329c4a00c31-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.593320 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-695d9c5545-6txjz" event={"ID":"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2","Type":"ContainerStarted","Data":"7520ac7a93f942478a577dd83cde7a793049e19329433227b40b90bbbaf8772e"} Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.593377 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-695d9c5545-6txjz" event={"ID":"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2","Type":"ContainerStarted","Data":"514bf6416e24d77ddf11939d5984ef053244e062747f4e81aed0c1bc41d86ca4"} Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.593421 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-695d9c5545-6txjz" podUID="9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2" containerName="horizon" containerID="cri-o://7520ac7a93f942478a577dd83cde7a793049e19329433227b40b90bbbaf8772e" gracePeriod=30 Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.593421 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-695d9c5545-6txjz" podUID="9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2" containerName="horizon-log" containerID="cri-o://514bf6416e24d77ddf11939d5984ef053244e062747f4e81aed0c1bc41d86ca4" gracePeriod=30 Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.597238 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.597213 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2","Type":"ContainerDied","Data":"9f972b77efb19d5d779f2fa963870b3078c163eb2bd0d711700fb1d71518ae0f"} Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.597565 4945 scope.go:117] "RemoveContainer" containerID="f252d062fb044fd7fd3e8fa5d3469a53eb5f7a03242007b3f49f4933733053e5" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.601766 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8694df6bfc-2hqjt" event={"ID":"5107d597-feb6-4d70-9587-1b0f23041c5d","Type":"ContainerStarted","Data":"44e6aef6594ccdbd226b3da459730305a11634b8c5dd7667fbb2a708ae7f5286"} Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.601839 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8694df6bfc-2hqjt" event={"ID":"5107d597-feb6-4d70-9587-1b0f23041c5d","Type":"ContainerStarted","Data":"a3f0a8720461444488136834e9a70eed05845e2f94239024313cd7ee11747354"} Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.604681 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.604675 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fef5d863-fecd-47da-9486-0329c4a00c31","Type":"ContainerDied","Data":"da7776b31eca6923148c457dea7a34f39182c7f0ea76e2938e6b7eebdb1c99da"} Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.606960 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7787b6b4f7-q4bq4" event={"ID":"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0","Type":"ContainerStarted","Data":"e8a244f776d29f0207d1668e5aeb2c8d2e7ae9b8e98d9bfa336ac0ba9d99bb32"} Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.607017 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7787b6b4f7-q4bq4" event={"ID":"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0","Type":"ContainerStarted","Data":"f9b09e578dc934071b4eca8e83fbafab83e1daaf334fce344e13e21d357fa37a"} Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.632633 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-695d9c5545-6txjz" podStartSLOduration=2.409409648 podStartE2EDuration="8.632607556s" podCreationTimestamp="2026-01-09 01:02:02 +0000 UTC" firstStartedPulling="2026-01-09 01:02:03.588547018 +0000 UTC m=+6393.899705964" lastFinishedPulling="2026-01-09 01:02:09.811744926 +0000 UTC m=+6400.122903872" observedRunningTime="2026-01-09 01:02:10.624065305 +0000 UTC m=+6400.935224261" watchObservedRunningTime="2026-01-09 01:02:10.632607556 +0000 UTC m=+6400.943766502" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.648145 4945 scope.go:117] "RemoveContainer" containerID="b2d16e33ca8e8efaf612f28f00a905906de2cafffe582802fa261eaad17bec12" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.660560 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.680696 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.694298 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 01:02:10 crc kubenswrapper[4945]: E0109 01:02:10.694687 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" containerName="glance-httpd" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.694705 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" containerName="glance-httpd" Jan 09 01:02:10 crc kubenswrapper[4945]: E0109 01:02:10.694718 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef5d863-fecd-47da-9486-0329c4a00c31" containerName="glance-httpd" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.694725 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef5d863-fecd-47da-9486-0329c4a00c31" containerName="glance-httpd" Jan 09 01:02:10 crc kubenswrapper[4945]: E0109 01:02:10.694742 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" containerName="glance-log" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.694748 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" containerName="glance-log" Jan 09 01:02:10 crc kubenswrapper[4945]: E0109 01:02:10.694776 4945 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef5d863-fecd-47da-9486-0329c4a00c31" containerName="glance-log" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.694782 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef5d863-fecd-47da-9486-0329c4a00c31" containerName="glance-log" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.694980 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" containerName="glance-log" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.695009 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" containerName="glance-httpd" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.695019 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="fef5d863-fecd-47da-9486-0329c4a00c31" containerName="glance-log" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.695036 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="fef5d863-fecd-47da-9486-0329c4a00c31" containerName="glance-httpd" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.695707 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8694df6bfc-2hqjt" podStartSLOduration=2.128306518 podStartE2EDuration="7.695687118s" podCreationTimestamp="2026-01-09 01:02:03 +0000 UTC" firstStartedPulling="2026-01-09 01:02:04.225793499 +0000 UTC m=+6394.536952465" lastFinishedPulling="2026-01-09 01:02:09.793174119 +0000 UTC m=+6400.104333065" observedRunningTime="2026-01-09 01:02:10.673668996 +0000 UTC m=+6400.984827932" watchObservedRunningTime="2026-01-09 01:02:10.695687118 +0000 UTC m=+6401.006846064" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.696360 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.698439 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.698918 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vpr6w" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.699107 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.718639 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.732831 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7787b6b4f7-q4bq4" podStartSLOduration=2.357448259 podStartE2EDuration="8.732800361s" podCreationTimestamp="2026-01-09 01:02:02 +0000 UTC" firstStartedPulling="2026-01-09 01:02:03.409692857 +0000 UTC m=+6393.720851803" lastFinishedPulling="2026-01-09 01:02:09.785044959 +0000 UTC m=+6400.096203905" observedRunningTime="2026-01-09 01:02:10.729801827 +0000 UTC m=+6401.040960793" watchObservedRunningTime="2026-01-09 01:02:10.732800361 +0000 UTC m=+6401.043959307" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.781285 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.790668 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.798812 4945 scope.go:117] "RemoveContainer" containerID="6572178f0dbd6a0e003afeaaf61cbf996f5991c8c8a96ca87e81dfb3367f9bb2" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.831228 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.833351 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.838182 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.841133 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.845933 4945 scope.go:117] "RemoveContainer" containerID="6dbaf0cb334f1566b9c1d4b0bf61478f38132811abe5f2f0caa0009026d298a6" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.897875 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44a2e71f-e372-41f3-b8e3-ba83a769bca6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.897974 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44a2e71f-e372-41f3-b8e3-ba83a769bca6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.898066 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44a2e71f-e372-41f3-b8e3-ba83a769bca6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.898103 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44a2e71f-e372-41f3-b8e3-ba83a769bca6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.898163 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbr7w\" (UniqueName: \"kubernetes.io/projected/44a2e71f-e372-41f3-b8e3-ba83a769bca6-kube-api-access-mbr7w\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.898200 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/44a2e71f-e372-41f3-b8e3-ba83a769bca6-ceph\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:10 crc kubenswrapper[4945]: I0109 01:02:10.898214 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44a2e71f-e372-41f3-b8e3-ba83a769bca6-logs\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.001841 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-tr9nt\" (UniqueName: \"kubernetes.io/projected/641d1452-204c-48c0-89f8-b1065d2288ca-kube-api-access-tr9nt\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.001914 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/641d1452-204c-48c0-89f8-b1065d2288ca-ceph\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.001958 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44a2e71f-e372-41f3-b8e3-ba83a769bca6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.002011 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/641d1452-204c-48c0-89f8-b1065d2288ca-config-data\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.002054 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44a2e71f-e372-41f3-b8e3-ba83a769bca6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.002157 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44a2e71f-e372-41f3-b8e3-ba83a769bca6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.002191 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44a2e71f-e372-41f3-b8e3-ba83a769bca6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.002269 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/641d1452-204c-48c0-89f8-b1065d2288ca-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.002329 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/641d1452-204c-48c0-89f8-b1065d2288ca-scripts\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.002355 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/641d1452-204c-48c0-89f8-b1065d2288ca-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.002411 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/641d1452-204c-48c0-89f8-b1065d2288ca-logs\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.002444 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbr7w\" (UniqueName: \"kubernetes.io/projected/44a2e71f-e372-41f3-b8e3-ba83a769bca6-kube-api-access-mbr7w\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.002479 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/44a2e71f-e372-41f3-b8e3-ba83a769bca6-ceph\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.002504 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44a2e71f-e372-41f3-b8e3-ba83a769bca6-logs\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.003873 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44a2e71f-e372-41f3-b8e3-ba83a769bca6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.004633 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44a2e71f-e372-41f3-b8e3-ba83a769bca6-logs\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.013115 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44a2e71f-e372-41f3-b8e3-ba83a769bca6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.013858 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44a2e71f-e372-41f3-b8e3-ba83a769bca6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.014516 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44a2e71f-e372-41f3-b8e3-ba83a769bca6-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.014774 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/44a2e71f-e372-41f3-b8e3-ba83a769bca6-ceph\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.020283 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbr7w\" (UniqueName: \"kubernetes.io/projected/44a2e71f-e372-41f3-b8e3-ba83a769bca6-kube-api-access-mbr7w\") pod \"glance-default-internal-api-0\" (UID: \"44a2e71f-e372-41f3-b8e3-ba83a769bca6\") " pod="openstack/glance-default-internal-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.086603 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.103624 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/641d1452-204c-48c0-89f8-b1065d2288ca-ceph\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.103704 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/641d1452-204c-48c0-89f8-b1065d2288ca-config-data\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.103816 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/641d1452-204c-48c0-89f8-b1065d2288ca-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.103851 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/641d1452-204c-48c0-89f8-b1065d2288ca-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.103877 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/641d1452-204c-48c0-89f8-b1065d2288ca-scripts\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.103916 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/641d1452-204c-48c0-89f8-b1065d2288ca-logs\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.104082 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr9nt\" (UniqueName: 
\"kubernetes.io/projected/641d1452-204c-48c0-89f8-b1065d2288ca-kube-api-access-tr9nt\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.105297 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/641d1452-204c-48c0-89f8-b1065d2288ca-logs\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.109246 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/641d1452-204c-48c0-89f8-b1065d2288ca-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.109770 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/641d1452-204c-48c0-89f8-b1065d2288ca-ceph\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.109869 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/641d1452-204c-48c0-89f8-b1065d2288ca-scripts\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.110538 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/641d1452-204c-48c0-89f8-b1065d2288ca-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.113211 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/641d1452-204c-48c0-89f8-b1065d2288ca-config-data\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.121090 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr9nt\" (UniqueName: \"kubernetes.io/projected/641d1452-204c-48c0-89f8-b1065d2288ca-kube-api-access-tr9nt\") pod \"glance-default-external-api-0\" (UID: \"641d1452-204c-48c0-89f8-b1065d2288ca\") " pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.152253 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.643716 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 09 01:02:11 crc kubenswrapper[4945]: W0109 01:02:11.647259 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44a2e71f_e372_41f3_b8e3_ba83a769bca6.slice/crio-5fb9a72a380409f829c0f7c7ed0c3d45119aa14c66af54658324d1bf0b815aa6 WatchSource:0}: Error finding container 5fb9a72a380409f829c0f7c7ed0c3d45119aa14c66af54658324d1bf0b815aa6: Status 404 returned error can't find the container with id 5fb9a72a380409f829c0f7c7ed0c3d45119aa14c66af54658324d1bf0b815aa6 Jan 09 01:02:11 crc kubenswrapper[4945]: I0109 01:02:11.793116 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 09 01:02:11 crc kubenswrapper[4945]: W0109 01:02:11.819087 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod641d1452_204c_48c0_89f8_b1065d2288ca.slice/crio-f2aaf65560caf0b51e23805fe3390a8e625bd0ebc562967f57d08b96ff58763a WatchSource:0}: Error finding container f2aaf65560caf0b51e23805fe3390a8e625bd0ebc562967f57d08b96ff58763a: Status 404 returned error can't find the container with id f2aaf65560caf0b51e23805fe3390a8e625bd0ebc562967f57d08b96ff58763a Jan 09 01:02:12 crc kubenswrapper[4945]: I0109 01:02:12.014399 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2" path="/var/lib/kubelet/pods/2ebdef4c-b9b8-4bb6-a580-48f43bcdbbd2/volumes" Jan 09 01:02:12 crc kubenswrapper[4945]: I0109 01:02:12.015718 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fef5d863-fecd-47da-9486-0329c4a00c31" path="/var/lib/kubelet/pods/fef5d863-fecd-47da-9486-0329c4a00c31/volumes" Jan 09 01:02:12 crc kubenswrapper[4945]: I0109 01:02:12.694216 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"641d1452-204c-48c0-89f8-b1065d2288ca","Type":"ContainerStarted","Data":"528fdfabb88883791f4d6ebf39ed75ef9f9d0f5c4486e8b4dffcf476d03ad265"} Jan 09 01:02:12 crc kubenswrapper[4945]: I0109 01:02:12.694570 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"641d1452-204c-48c0-89f8-b1065d2288ca","Type":"ContainerStarted","Data":"f2aaf65560caf0b51e23805fe3390a8e625bd0ebc562967f57d08b96ff58763a"} Jan 09 01:02:12 crc kubenswrapper[4945]: I0109 01:02:12.702327 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44a2e71f-e372-41f3-b8e3-ba83a769bca6","Type":"ContainerStarted","Data":"96ce73c3eb024effc3071370c277fed8a2a030bb5380af96a38162d1a71bf396"} Jan 09 01:02:12 crc kubenswrapper[4945]: I0109 01:02:12.702386 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44a2e71f-e372-41f3-b8e3-ba83a769bca6","Type":"ContainerStarted","Data":"5fb9a72a380409f829c0f7c7ed0c3d45119aa14c66af54658324d1bf0b815aa6"} Jan 09 01:02:12 crc kubenswrapper[4945]: I0109 01:02:12.865877 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:12 crc kubenswrapper[4945]: I0109 01:02:12.865957 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:13 crc kubenswrapper[4945]: I0109 01:02:13.109419 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:13 crc kubenswrapper[4945]: I0109 01:02:13.579063 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:02:13 crc kubenswrapper[4945]: I0109 01:02:13.579144 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:02:13 crc kubenswrapper[4945]: I0109 01:02:13.579203 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 09 01:02:13 crc kubenswrapper[4945]: I0109 01:02:13.580033 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 01:02:13 crc kubenswrapper[4945]: I0109 01:02:13.580090 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" gracePeriod=600 Jan 09 01:02:13 crc kubenswrapper[4945]: I0109 01:02:13.684072 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:13 crc kubenswrapper[4945]: I0109 01:02:13.684119 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:13 crc kubenswrapper[4945]: E0109 01:02:13.706844 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:02:13 crc kubenswrapper[4945]: I0109 01:02:13.721839 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"44a2e71f-e372-41f3-b8e3-ba83a769bca6","Type":"ContainerStarted","Data":"afcb3a7b0c4b5c46f7941db872aee3c96e043b904625b7eb18e0ff8179ea890b"} Jan 09 01:02:13 crc kubenswrapper[4945]: I0109 01:02:13.728204 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" exitCode=0 Jan 09 01:02:13 crc kubenswrapper[4945]: I0109 01:02:13.728250 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31"} Jan 09 01:02:13 crc kubenswrapper[4945]: I0109 01:02:13.728280 4945 scope.go:117] "RemoveContainer" containerID="eccb70fe4fbdfd4fa48564790297f305ce79c1abeb00e0431dd2d34f92ff2a95" Jan 09 01:02:13 crc kubenswrapper[4945]: I0109 01:02:13.729009 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:02:13 crc kubenswrapper[4945]: E0109 01:02:13.729314 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:02:13 crc kubenswrapper[4945]: I0109 01:02:13.733600 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"641d1452-204c-48c0-89f8-b1065d2288ca","Type":"ContainerStarted","Data":"6d2c9152da359739eef9fd72405d24fed082ac22fe4b3e0c14cb269f6cd87b17"} Jan 09 01:02:13 crc kubenswrapper[4945]: I0109 01:02:13.772037 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.7720175190000003 podStartE2EDuration="3.772017519s" podCreationTimestamp="2026-01-09 01:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 01:02:13.742562384 +0000 UTC m=+6404.053721350" watchObservedRunningTime="2026-01-09 01:02:13.772017519 +0000 UTC m=+6404.083176465" Jan 09 01:02:13 crc kubenswrapper[4945]: I0109 01:02:13.802263 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.802237752 podStartE2EDuration="3.802237752s" podCreationTimestamp="2026-01-09 01:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 01:02:13.788426532 +0000 UTC m=+6404.099585488" watchObservedRunningTime="2026-01-09 01:02:13.802237752 +0000 UTC m=+6404.113396698" Jan 09 01:02:21 crc kubenswrapper[4945]: I0109 01:02:21.087847 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 09 01:02:21 crc kubenswrapper[4945]: I0109 01:02:21.088505 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 09 01:02:21 crc kubenswrapper[4945]: I0109 01:02:21.121088 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 09 01:02:21 crc kubenswrapper[4945]: I0109 01:02:21.131000 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 09 01:02:21 crc kubenswrapper[4945]: I0109 01:02:21.153405 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 09 01:02:21 crc kubenswrapper[4945]: I0109 01:02:21.153469 4945 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 09 01:02:21 crc kubenswrapper[4945]: I0109 01:02:21.186314 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 09 01:02:21 crc kubenswrapper[4945]: I0109 01:02:21.208537 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 09 01:02:21 crc kubenswrapper[4945]: I0109 01:02:21.810956 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 09 01:02:21 crc kubenswrapper[4945]: I0109 01:02:21.811021 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 09 01:02:21 crc kubenswrapper[4945]: I0109 01:02:21.811035 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 09 01:02:21 crc kubenswrapper[4945]: I0109 01:02:21.811045 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 09 01:02:22 crc kubenswrapper[4945]: I0109 01:02:22.868214 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7787b6b4f7-q4bq4" podUID="e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.115:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8080: connect: connection refused" Jan 09 01:02:23 crc kubenswrapper[4945]: I0109 01:02:23.685904 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8694df6bfc-2hqjt" podUID="5107d597-feb6-4d70-9587-1b0f23041c5d" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.117:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.117:8080: connect: connection refused" Jan 09 01:02:23 crc kubenswrapper[4945]: I0109 01:02:23.907841 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 09 01:02:23 crc kubenswrapper[4945]: I0109 01:02:23.907967 4945 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 01:02:23 crc kubenswrapper[4945]: I0109 01:02:23.937109 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 09 01:02:23 crc kubenswrapper[4945]: I0109 01:02:23.937237 4945 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 09 01:02:23 crc kubenswrapper[4945]: I0109 01:02:23.961571 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 09 01:02:23 crc kubenswrapper[4945]: I0109 01:02:23.974404 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 09 01:02:26 crc kubenswrapper[4945]: I0109 01:02:26.000549 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:02:26 crc kubenswrapper[4945]: E0109 01:02:26.002021 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:02:34 crc kubenswrapper[4945]: I0109 01:02:34.044042 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-728qx"] Jan 09 01:02:34 crc kubenswrapper[4945]: I0109 01:02:34.054805 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-4821-account-create-update-jrfjd"] Jan 09 01:02:34 crc kubenswrapper[4945]: I0109 01:02:34.063227 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-728qx"] Jan 09 01:02:34 crc kubenswrapper[4945]: I0109 01:02:34.071942 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-4821-account-create-update-jrfjd"] Jan 09 01:02:34 crc kubenswrapper[4945]: I0109 01:02:34.771900 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:35 crc kubenswrapper[4945]: I0109 01:02:35.555132 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:36 crc kubenswrapper[4945]: I0109 01:02:36.011309 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b819af-d0d3-40b2-9775-f88a358e9083" path="/var/lib/kubelet/pods/33b819af-d0d3-40b2-9775-f88a358e9083/volumes" Jan 09 01:02:36 crc kubenswrapper[4945]: I0109 01:02:36.012251 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6" path="/var/lib/kubelet/pods/ea62a4e1-81d5-4e9c-8d1c-e9df2f6bbac6/volumes" Jan 09 01:02:36 crc kubenswrapper[4945]: I0109 01:02:36.447442 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:02:37 crc kubenswrapper[4945]: I0109 01:02:37.249262 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:02:37 crc kubenswrapper[4945]: I0109 01:02:37.309563 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7787b6b4f7-q4bq4"] Jan 09 01:02:37 crc kubenswrapper[4945]: I0109 01:02:37.309800 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7787b6b4f7-q4bq4" podUID="e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" containerName="horizon-log" containerID="cri-o://f9b09e578dc934071b4eca8e83fbafab83e1daaf334fce344e13e21d357fa37a" gracePeriod=30 Jan 09 01:02:37 crc kubenswrapper[4945]: I0109 01:02:37.309848 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7787b6b4f7-q4bq4" podUID="e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" containerName="horizon" containerID="cri-o://e8a244f776d29f0207d1668e5aeb2c8d2e7ae9b8e98d9bfa336ac0ba9d99bb32" gracePeriod=30 Jan 09 01:02:37 crc kubenswrapper[4945]: I0109 01:02:37.804048 4945 scope.go:117] "RemoveContainer" containerID="d31ac7b9c59e8b7a00d48f0efe199a07b4eea9fd429dc2b7910c8e968d807c9d" Jan 09 01:02:37 crc kubenswrapper[4945]: I0109 01:02:37.840729 4945 scope.go:117] "RemoveContainer" containerID="f1428fe84dec5d10e47cc7b38cf2fb23c5278741b4aafbfb4ccbf4035a9ea4ed" Jan 09 01:02:37 crc kubenswrapper[4945]: I0109 01:02:37.968977 4945 scope.go:117] "RemoveContainer" containerID="2330f57cc7c5aa4caedcbe55a521d5163e8bea7bd1b9d5e58221cd4fa8911ae6" Jan 09 01:02:37 crc kubenswrapper[4945]: I0109 01:02:37.994683 4945 scope.go:117] "RemoveContainer" 
containerID="84871ac405ac6a591adb1255fc22f754470245858c2f906762544b04094c0c4f" Jan 09 01:02:38 crc kubenswrapper[4945]: I0109 01:02:38.043201 4945 scope.go:117] "RemoveContainer" containerID="990ec3b04177bedcf2203a933b166932eef7f545c056d09edafea421e62bd709" Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:40.999894 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:02:41 crc kubenswrapper[4945]: E0109 01:02:41.000921 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.006818 4945 generic.go:334] "Generic (PLEG): container finished" podID="e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" containerID="e8a244f776d29f0207d1668e5aeb2c8d2e7ae9b8e98d9bfa336ac0ba9d99bb32" exitCode=0 Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.006909 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7787b6b4f7-q4bq4" event={"ID":"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0","Type":"ContainerDied","Data":"e8a244f776d29f0207d1668e5aeb2c8d2e7ae9b8e98d9bfa336ac0ba9d99bb32"} Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.009534 4945 generic.go:334] "Generic (PLEG): container finished" podID="9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2" containerID="7520ac7a93f942478a577dd83cde7a793049e19329433227b40b90bbbaf8772e" exitCode=137 Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.009559 4945 generic.go:334] "Generic (PLEG): container finished" podID="9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2" containerID="514bf6416e24d77ddf11939d5984ef053244e062747f4e81aed0c1bc41d86ca4" exitCode=137 Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.009575 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-695d9c5545-6txjz" event={"ID":"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2","Type":"ContainerDied","Data":"7520ac7a93f942478a577dd83cde7a793049e19329433227b40b90bbbaf8772e"} Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.009593 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-695d9c5545-6txjz" event={"ID":"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2","Type":"ContainerDied","Data":"514bf6416e24d77ddf11939d5984ef053244e062747f4e81aed0c1bc41d86ca4"} Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.009609 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-695d9c5545-6txjz" event={"ID":"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2","Type":"ContainerDied","Data":"daa7418c508b445fcc0ce6a2b83eb617563dc09e51b411f7143ad6b304a445ee"} Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.009621 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daa7418c508b445fcc0ce6a2b83eb617563dc09e51b411f7143ad6b304a445ee" Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.016949 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.135871 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-scripts\") pod \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.135958 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rq7h\" (UniqueName: \"kubernetes.io/projected/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-kube-api-access-7rq7h\") pod \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.136070 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-config-data\") pod \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.136164 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-logs\") pod \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.136190 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-horizon-secret-key\") pod \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\" (UID: \"9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2\") " Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.136688 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-logs" (OuterVolumeSpecName: "logs") pod "9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2" (UID: "9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.137335 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-logs\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.141857 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-kube-api-access-7rq7h" (OuterVolumeSpecName: "kube-api-access-7rq7h") pod "9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2" (UID: "9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2"). InnerVolumeSpecName "kube-api-access-7rq7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.142326 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2" (UID: "9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.159810 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-config-data" (OuterVolumeSpecName: "config-data") pod "9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2" (UID: "9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.160266 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-scripts" (OuterVolumeSpecName: "scripts") pod "9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2" (UID: "9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.239145 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.239191 4945 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.239209 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:41 crc kubenswrapper[4945]: I0109 01:02:41.239225 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rq7h\" (UniqueName: \"kubernetes.io/projected/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2-kube-api-access-7rq7h\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:42 crc kubenswrapper[4945]: I0109 01:02:42.021238 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-695d9c5545-6txjz" Jan 09 01:02:42 crc kubenswrapper[4945]: I0109 01:02:42.040968 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vrmxl"] Jan 09 01:02:42 crc kubenswrapper[4945]: I0109 01:02:42.059474 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vrmxl"] Jan 09 01:02:42 crc kubenswrapper[4945]: I0109 01:02:42.075640 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-695d9c5545-6txjz"] Jan 09 01:02:42 crc kubenswrapper[4945]: I0109 01:02:42.084745 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-695d9c5545-6txjz"] Jan 09 01:02:42 crc kubenswrapper[4945]: I0109 01:02:42.866788 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7787b6b4f7-q4bq4" podUID="e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.115:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8080: connect: connection refused" Jan 09 01:02:44 crc kubenswrapper[4945]: I0109 01:02:44.021423 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2" path="/var/lib/kubelet/pods/9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2/volumes" Jan 09 01:02:44 crc kubenswrapper[4945]: I0109 01:02:44.022716 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8601523-2719-4c56-b8aa-8c61609e91f0" path="/var/lib/kubelet/pods/a8601523-2719-4c56-b8aa-8c61609e91f0/volumes" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.214471 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-9b6dcf455-m4j6c"] Jan 09 01:02:45 crc kubenswrapper[4945]: E0109 01:02:45.215209 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2" containerName="horizon" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.215226 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2" containerName="horizon" Jan 09 01:02:45 crc kubenswrapper[4945]: E0109 01:02:45.215245 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2" containerName="horizon-log" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.215250 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2" containerName="horizon-log" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.215443 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2" containerName="horizon" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.215459 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ee786c9-d90a-4aea-a8c4-3a4a7d8fa7a2" containerName="horizon-log" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.216561 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.225046 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9b6dcf455-m4j6c"] Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.326118 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c838616-0c22-49a2-86fb-8ecedb6c5bfe-scripts\") pod \"horizon-9b6dcf455-m4j6c\" (UID: \"2c838616-0c22-49a2-86fb-8ecedb6c5bfe\") " pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.326192 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c838616-0c22-49a2-86fb-8ecedb6c5bfe-config-data\") pod \"horizon-9b6dcf455-m4j6c\" (UID: \"2c838616-0c22-49a2-86fb-8ecedb6c5bfe\") " pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.326287 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vl8s\" (UniqueName: \"kubernetes.io/projected/2c838616-0c22-49a2-86fb-8ecedb6c5bfe-kube-api-access-4vl8s\") pod \"horizon-9b6dcf455-m4j6c\" (UID: \"2c838616-0c22-49a2-86fb-8ecedb6c5bfe\") " pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.326334 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c838616-0c22-49a2-86fb-8ecedb6c5bfe-logs\") pod \"horizon-9b6dcf455-m4j6c\" (UID: \"2c838616-0c22-49a2-86fb-8ecedb6c5bfe\") " pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.326429 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2c838616-0c22-49a2-86fb-8ecedb6c5bfe-horizon-secret-key\") pod \"horizon-9b6dcf455-m4j6c\" (UID: \"2c838616-0c22-49a2-86fb-8ecedb6c5bfe\") " pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.428091 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c838616-0c22-49a2-86fb-8ecedb6c5bfe-config-data\") pod \"horizon-9b6dcf455-m4j6c\" (UID: \"2c838616-0c22-49a2-86fb-8ecedb6c5bfe\") " pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.428167 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vl8s\" (UniqueName: \"kubernetes.io/projected/2c838616-0c22-49a2-86fb-8ecedb6c5bfe-kube-api-access-4vl8s\") pod \"horizon-9b6dcf455-m4j6c\" (UID: \"2c838616-0c22-49a2-86fb-8ecedb6c5bfe\") " pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.428202 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c838616-0c22-49a2-86fb-8ecedb6c5bfe-logs\") pod \"horizon-9b6dcf455-m4j6c\" (UID: \"2c838616-0c22-49a2-86fb-8ecedb6c5bfe\") " pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.428253 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/2c838616-0c22-49a2-86fb-8ecedb6c5bfe-horizon-secret-key\") pod \"horizon-9b6dcf455-m4j6c\" (UID: \"2c838616-0c22-49a2-86fb-8ecedb6c5bfe\") " pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.428335 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c838616-0c22-49a2-86fb-8ecedb6c5bfe-scripts\") pod \"horizon-9b6dcf455-m4j6c\" (UID: \"2c838616-0c22-49a2-86fb-8ecedb6c5bfe\") " pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.429065 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c838616-0c22-49a2-86fb-8ecedb6c5bfe-scripts\") pod \"horizon-9b6dcf455-m4j6c\" (UID: \"2c838616-0c22-49a2-86fb-8ecedb6c5bfe\") " pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.430088 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2c838616-0c22-49a2-86fb-8ecedb6c5bfe-config-data\") pod \"horizon-9b6dcf455-m4j6c\" (UID: \"2c838616-0c22-49a2-86fb-8ecedb6c5bfe\") " pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.430656 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c838616-0c22-49a2-86fb-8ecedb6c5bfe-logs\") pod \"horizon-9b6dcf455-m4j6c\" (UID: \"2c838616-0c22-49a2-86fb-8ecedb6c5bfe\") " pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.441663 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2c838616-0c22-49a2-86fb-8ecedb6c5bfe-horizon-secret-key\") pod \"horizon-9b6dcf455-m4j6c\" (UID: \"2c838616-0c22-49a2-86fb-8ecedb6c5bfe\") " pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.446134 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vl8s\" (UniqueName: \"kubernetes.io/projected/2c838616-0c22-49a2-86fb-8ecedb6c5bfe-kube-api-access-4vl8s\") pod \"horizon-9b6dcf455-m4j6c\" (UID: \"2c838616-0c22-49a2-86fb-8ecedb6c5bfe\") " pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:45 crc kubenswrapper[4945]: I0109 01:02:45.570745 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:46 crc kubenswrapper[4945]: I0109 01:02:46.053665 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9b6dcf455-m4j6c"] Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.088065 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9b6dcf455-m4j6c" event={"ID":"2c838616-0c22-49a2-86fb-8ecedb6c5bfe","Type":"ContainerStarted","Data":"3c2513e98ed961aa29ea91857213e26865f236ce84007a745475441b8b9725d8"} Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.089818 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9b6dcf455-m4j6c" event={"ID":"2c838616-0c22-49a2-86fb-8ecedb6c5bfe","Type":"ContainerStarted","Data":"6134cf0565f97510f571e630ea917125ea3283c8dff6c52a15077af5d7930756"} Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.089913 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9b6dcf455-m4j6c" event={"ID":"2c838616-0c22-49a2-86fb-8ecedb6c5bfe","Type":"ContainerStarted","Data":"f5370b4577ab98e75bb35e9564458e6cf6d2edd9c1442479f68ce023eca7e7a7"} Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.117584 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-9b6dcf455-m4j6c" podStartSLOduration=2.117557185 podStartE2EDuration="2.117557185s" podCreationTimestamp="2026-01-09 01:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 01:02:47.106689257 +0000 UTC m=+6437.417848223" watchObservedRunningTime="2026-01-09 01:02:47.117557185 +0000 UTC m=+6437.428716141" Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.197165 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-2bfl4"] Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.198865 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-2bfl4" Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.211473 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-2bfl4"] Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.263262 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9wvv\" (UniqueName: \"kubernetes.io/projected/3e15f948-cd94-4e42-9b32-e7d3e77920a0-kube-api-access-x9wvv\") pod \"heat-db-create-2bfl4\" (UID: \"3e15f948-cd94-4e42-9b32-e7d3e77920a0\") " pod="openstack/heat-db-create-2bfl4" Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.263903 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e15f948-cd94-4e42-9b32-e7d3e77920a0-operator-scripts\") pod \"heat-db-create-2bfl4\" (UID: \"3e15f948-cd94-4e42-9b32-e7d3e77920a0\") " pod="openstack/heat-db-create-2bfl4" Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.292007 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-c46f-account-create-update-ngbqv"] Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.293722 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-c46f-account-create-update-ngbqv" Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.295351 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.301989 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-c46f-account-create-update-ngbqv"] Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.365607 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e15f948-cd94-4e42-9b32-e7d3e77920a0-operator-scripts\") pod \"heat-db-create-2bfl4\" (UID: \"3e15f948-cd94-4e42-9b32-e7d3e77920a0\") " pod="openstack/heat-db-create-2bfl4" Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.365691 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d-operator-scripts\") pod \"heat-c46f-account-create-update-ngbqv\" (UID: \"bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d\") " pod="openstack/heat-c46f-account-create-update-ngbqv" Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.365781 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbs6h\" (UniqueName: \"kubernetes.io/projected/bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d-kube-api-access-fbs6h\") pod \"heat-c46f-account-create-update-ngbqv\" (UID: \"bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d\") " pod="openstack/heat-c46f-account-create-update-ngbqv" Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.365857 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9wvv\" (UniqueName: \"kubernetes.io/projected/3e15f948-cd94-4e42-9b32-e7d3e77920a0-kube-api-access-x9wvv\") pod \"heat-db-create-2bfl4\" (UID: \"3e15f948-cd94-4e42-9b32-e7d3e77920a0\") " pod="openstack/heat-db-create-2bfl4" Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.367361 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e15f948-cd94-4e42-9b32-e7d3e77920a0-operator-scripts\") pod \"heat-db-create-2bfl4\" (UID: \"3e15f948-cd94-4e42-9b32-e7d3e77920a0\") " pod="openstack/heat-db-create-2bfl4" Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.385933 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9wvv\" (UniqueName: \"kubernetes.io/projected/3e15f948-cd94-4e42-9b32-e7d3e77920a0-kube-api-access-x9wvv\") pod \"heat-db-create-2bfl4\" (UID: \"3e15f948-cd94-4e42-9b32-e7d3e77920a0\") " pod="openstack/heat-db-create-2bfl4" Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.467728 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d-operator-scripts\") pod \"heat-c46f-account-create-update-ngbqv\" (UID: \"bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d\") " pod="openstack/heat-c46f-account-create-update-ngbqv" Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.467822 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbs6h\" (UniqueName: \"kubernetes.io/projected/bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d-kube-api-access-fbs6h\") pod \"heat-c46f-account-create-update-ngbqv\" (UID: 
\"bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d\") " pod="openstack/heat-c46f-account-create-update-ngbqv" Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.469181 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d-operator-scripts\") pod \"heat-c46f-account-create-update-ngbqv\" (UID: \"bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d\") " pod="openstack/heat-c46f-account-create-update-ngbqv" Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.484792 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbs6h\" (UniqueName: \"kubernetes.io/projected/bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d-kube-api-access-fbs6h\") pod \"heat-c46f-account-create-update-ngbqv\" (UID: \"bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d\") " pod="openstack/heat-c46f-account-create-update-ngbqv" Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.533790 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-2bfl4" Jan 09 01:02:47 crc kubenswrapper[4945]: I0109 01:02:47.611068 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-c46f-account-create-update-ngbqv" Jan 09 01:02:48 crc kubenswrapper[4945]: I0109 01:02:48.056113 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-2bfl4"] Jan 09 01:02:48 crc kubenswrapper[4945]: I0109 01:02:48.100444 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-2bfl4" event={"ID":"3e15f948-cd94-4e42-9b32-e7d3e77920a0","Type":"ContainerStarted","Data":"fe8ca679b84908158d77321bf3cd4db7b2b2825fc8f06c55c2c8f59544d30e61"} Jan 09 01:02:48 crc kubenswrapper[4945]: I0109 01:02:48.126612 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-c46f-account-create-update-ngbqv"] Jan 09 01:02:49 crc kubenswrapper[4945]: I0109 01:02:49.110595 4945 generic.go:334] "Generic (PLEG): container finished" podID="bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d" containerID="49c4b59fb0868de203733b1ff55eab6ef0d934f9be5edd89e5a2147425773346" exitCode=0 Jan 09 01:02:49 crc kubenswrapper[4945]: I0109 01:02:49.110723 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c46f-account-create-update-ngbqv" event={"ID":"bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d","Type":"ContainerDied","Data":"49c4b59fb0868de203733b1ff55eab6ef0d934f9be5edd89e5a2147425773346"} Jan 09 01:02:49 crc kubenswrapper[4945]: I0109 01:02:49.110989 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c46f-account-create-update-ngbqv" event={"ID":"bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d","Type":"ContainerStarted","Data":"e431d5619a23cb556dd6291c374816e83f91fe803fc70b27213b5432ba039be7"} Jan 09 01:02:49 crc kubenswrapper[4945]: I0109 01:02:49.113128 4945 generic.go:334] "Generic (PLEG): container finished" podID="3e15f948-cd94-4e42-9b32-e7d3e77920a0" containerID="2b427fbe98d03533682547f241f5eafd820bb2b12a59e1892efaebe2d8173550" exitCode=0 Jan 09 01:02:49 crc kubenswrapper[4945]: I0109 01:02:49.113229 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-2bfl4" event={"ID":"3e15f948-cd94-4e42-9b32-e7d3e77920a0","Type":"ContainerDied","Data":"2b427fbe98d03533682547f241f5eafd820bb2b12a59e1892efaebe2d8173550"} Jan 09 01:02:50 crc kubenswrapper[4945]: I0109 01:02:50.562960 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-c46f-account-create-update-ngbqv" Jan 09 01:02:50 crc kubenswrapper[4945]: I0109 01:02:50.571357 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-2bfl4" Jan 09 01:02:50 crc kubenswrapper[4945]: I0109 01:02:50.639905 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbs6h\" (UniqueName: \"kubernetes.io/projected/bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d-kube-api-access-fbs6h\") pod \"bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d\" (UID: \"bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d\") " Jan 09 01:02:50 crc kubenswrapper[4945]: I0109 01:02:50.640140 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9wvv\" (UniqueName: \"kubernetes.io/projected/3e15f948-cd94-4e42-9b32-e7d3e77920a0-kube-api-access-x9wvv\") pod \"3e15f948-cd94-4e42-9b32-e7d3e77920a0\" (UID: \"3e15f948-cd94-4e42-9b32-e7d3e77920a0\") " Jan 09 01:02:50 crc kubenswrapper[4945]: I0109 01:02:50.640187 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d-operator-scripts\") pod \"bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d\" (UID: \"bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d\") " Jan 09 01:02:50 crc kubenswrapper[4945]: I0109 01:02:50.640259 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e15f948-cd94-4e42-9b32-e7d3e77920a0-operator-scripts\") pod \"3e15f948-cd94-4e42-9b32-e7d3e77920a0\" (UID: \"3e15f948-cd94-4e42-9b32-e7d3e77920a0\") " Jan 09 01:02:50 crc kubenswrapper[4945]: I0109 01:02:50.640826 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d" (UID: "bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:02:50 crc kubenswrapper[4945]: I0109 01:02:50.641077 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e15f948-cd94-4e42-9b32-e7d3e77920a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e15f948-cd94-4e42-9b32-e7d3e77920a0" (UID: "3e15f948-cd94-4e42-9b32-e7d3e77920a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:02:50 crc kubenswrapper[4945]: I0109 01:02:50.645989 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e15f948-cd94-4e42-9b32-e7d3e77920a0-kube-api-access-x9wvv" (OuterVolumeSpecName: "kube-api-access-x9wvv") pod "3e15f948-cd94-4e42-9b32-e7d3e77920a0" (UID: "3e15f948-cd94-4e42-9b32-e7d3e77920a0"). InnerVolumeSpecName "kube-api-access-x9wvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:02:50 crc kubenswrapper[4945]: I0109 01:02:50.646548 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d-kube-api-access-fbs6h" (OuterVolumeSpecName: "kube-api-access-fbs6h") pod "bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d" (UID: "bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d"). InnerVolumeSpecName "kube-api-access-fbs6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:02:50 crc kubenswrapper[4945]: I0109 01:02:50.742466 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9wvv\" (UniqueName: \"kubernetes.io/projected/3e15f948-cd94-4e42-9b32-e7d3e77920a0-kube-api-access-x9wvv\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:50 crc kubenswrapper[4945]: I0109 01:02:50.742504 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:50 crc kubenswrapper[4945]: I0109 01:02:50.742513 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e15f948-cd94-4e42-9b32-e7d3e77920a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:50 crc kubenswrapper[4945]: I0109 01:02:50.742522 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbs6h\" (UniqueName: \"kubernetes.io/projected/bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d-kube-api-access-fbs6h\") on node \"crc\" DevicePath \"\"" Jan 09 01:02:51 crc kubenswrapper[4945]: I0109 01:02:51.136606 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-2bfl4" event={"ID":"3e15f948-cd94-4e42-9b32-e7d3e77920a0","Type":"ContainerDied","Data":"fe8ca679b84908158d77321bf3cd4db7b2b2825fc8f06c55c2c8f59544d30e61"} Jan 09 01:02:51 crc kubenswrapper[4945]: I0109 01:02:51.136642 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-2bfl4" Jan 09 01:02:51 crc kubenswrapper[4945]: I0109 01:02:51.136650 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe8ca679b84908158d77321bf3cd4db7b2b2825fc8f06c55c2c8f59544d30e61" Jan 09 01:02:51 crc kubenswrapper[4945]: I0109 01:02:51.138035 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c46f-account-create-update-ngbqv" event={"ID":"bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d","Type":"ContainerDied","Data":"e431d5619a23cb556dd6291c374816e83f91fe803fc70b27213b5432ba039be7"} Jan 09 01:02:51 crc kubenswrapper[4945]: I0109 01:02:51.138060 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e431d5619a23cb556dd6291c374816e83f91fe803fc70b27213b5432ba039be7" Jan 09 01:02:51 crc kubenswrapper[4945]: I0109 01:02:51.138137 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-c46f-account-create-update-ngbqv" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.393799 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-hvf5r"] Jan 09 01:02:52 crc kubenswrapper[4945]: E0109 01:02:52.394832 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e15f948-cd94-4e42-9b32-e7d3e77920a0" containerName="mariadb-database-create" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.394852 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e15f948-cd94-4e42-9b32-e7d3e77920a0" containerName="mariadb-database-create" Jan 09 01:02:52 crc kubenswrapper[4945]: E0109 01:02:52.394886 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d" containerName="mariadb-account-create-update" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.394897 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d" containerName="mariadb-account-create-update" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.395161 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e15f948-cd94-4e42-9b32-e7d3e77920a0" containerName="mariadb-database-create" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.395195 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d" containerName="mariadb-account-create-update" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.396142 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-hvf5r" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.398706 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.399580 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-8zqzb" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.404603 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-hvf5r"] Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.473647 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-combined-ca-bundle\") pod \"heat-db-sync-hvf5r\" (UID: \"702288b9-1bd9-476d-bf96-6bf8f3e73b7c\") " pod="openstack/heat-db-sync-hvf5r" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.473739 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-config-data\") pod \"heat-db-sync-hvf5r\" (UID: \"702288b9-1bd9-476d-bf96-6bf8f3e73b7c\") " pod="openstack/heat-db-sync-hvf5r" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.473807 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r24kq\" (UniqueName: \"kubernetes.io/projected/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-kube-api-access-r24kq\") pod \"heat-db-sync-hvf5r\" (UID: \"702288b9-1bd9-476d-bf96-6bf8f3e73b7c\") " pod="openstack/heat-db-sync-hvf5r" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.576319 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-combined-ca-bundle\") pod \"heat-db-sync-hvf5r\" (UID: \"702288b9-1bd9-476d-bf96-6bf8f3e73b7c\") " pod="openstack/heat-db-sync-hvf5r" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.576400 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-config-data\") pod \"heat-db-sync-hvf5r\" (UID: \"702288b9-1bd9-476d-bf96-6bf8f3e73b7c\") " pod="openstack/heat-db-sync-hvf5r" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.576449 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r24kq\" (UniqueName: \"kubernetes.io/projected/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-kube-api-access-r24kq\") pod \"heat-db-sync-hvf5r\" (UID: \"702288b9-1bd9-476d-bf96-6bf8f3e73b7c\") " pod="openstack/heat-db-sync-hvf5r" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.581694 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-combined-ca-bundle\") pod \"heat-db-sync-hvf5r\" (UID: \"702288b9-1bd9-476d-bf96-6bf8f3e73b7c\") " pod="openstack/heat-db-sync-hvf5r" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.581888 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-config-data\") pod \"heat-db-sync-hvf5r\" (UID: \"702288b9-1bd9-476d-bf96-6bf8f3e73b7c\") " pod="openstack/heat-db-sync-hvf5r" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.596744 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r24kq\" (UniqueName: \"kubernetes.io/projected/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-kube-api-access-r24kq\") pod \"heat-db-sync-hvf5r\" (UID: \"702288b9-1bd9-476d-bf96-6bf8f3e73b7c\") " pod="openstack/heat-db-sync-hvf5r" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.716901 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-hvf5r" Jan 09 01:02:52 crc kubenswrapper[4945]: I0109 01:02:52.867662 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7787b6b4f7-q4bq4" podUID="e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.115:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8080: connect: connection refused" Jan 09 01:02:53 crc kubenswrapper[4945]: I0109 01:02:53.231272 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-hvf5r"] Jan 09 01:02:54 crc kubenswrapper[4945]: I0109 01:02:54.003069 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:02:54 crc kubenswrapper[4945]: E0109 01:02:54.005151 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:02:54 crc kubenswrapper[4945]: I0109 01:02:54.177505 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-hvf5r" event={"ID":"702288b9-1bd9-476d-bf96-6bf8f3e73b7c","Type":"ContainerStarted","Data":"e610f7bfbbe44e18ea073ff52e2af0fe278ff8a4fbfa71b419aa8099b9d79ab6"} Jan 09 01:02:55 crc kubenswrapper[4945]: I0109 01:02:55.582851 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:02:55 crc kubenswrapper[4945]: I0109 01:02:55.583236 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:03:01 crc kubenswrapper[4945]: I0109 01:03:01.250309 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-hvf5r" event={"ID":"702288b9-1bd9-476d-bf96-6bf8f3e73b7c","Type":"ContainerStarted","Data":"85c5685ce3d0f743a22885a94a9631fdb92a45d6cde0a3eee906955b1d3c09b7"} Jan 09 01:03:01 crc kubenswrapper[4945]: I0109 01:03:01.276300 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-hvf5r" podStartSLOduration=1.66292616 podStartE2EDuration="9.276273457s" podCreationTimestamp="2026-01-09 01:02:52 +0000 UTC" firstStartedPulling="2026-01-09 01:02:53.230704674 +0000 UTC m=+6443.541863640" lastFinishedPulling="2026-01-09 01:03:00.844051991 +0000 UTC m=+6451.155210937" observedRunningTime="2026-01-09 01:03:01.268537606 +0000 UTC m=+6451.579696552" watchObservedRunningTime="2026-01-09 01:03:01.276273457 +0000 UTC m=+6451.587432403" Jan 09 01:03:02 crc kubenswrapper[4945]: I0109 01:03:02.866683 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7787b6b4f7-q4bq4" podUID="e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.115:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.115:8080: connect: connection refused" Jan 09 01:03:02 crc kubenswrapper[4945]: I0109 01:03:02.867265 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:03:03 crc kubenswrapper[4945]: I0109 01:03:03.288951 4945 generic.go:334] "Generic (PLEG): container finished" 
podID="702288b9-1bd9-476d-bf96-6bf8f3e73b7c" containerID="85c5685ce3d0f743a22885a94a9631fdb92a45d6cde0a3eee906955b1d3c09b7" exitCode=0 Jan 09 01:03:03 crc kubenswrapper[4945]: I0109 01:03:03.289021 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-hvf5r" event={"ID":"702288b9-1bd9-476d-bf96-6bf8f3e73b7c","Type":"ContainerDied","Data":"85c5685ce3d0f743a22885a94a9631fdb92a45d6cde0a3eee906955b1d3c09b7"} Jan 09 01:03:04 crc kubenswrapper[4945]: I0109 01:03:04.653377 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-hvf5r" Jan 09 01:03:04 crc kubenswrapper[4945]: I0109 01:03:04.786379 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r24kq\" (UniqueName: \"kubernetes.io/projected/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-kube-api-access-r24kq\") pod \"702288b9-1bd9-476d-bf96-6bf8f3e73b7c\" (UID: \"702288b9-1bd9-476d-bf96-6bf8f3e73b7c\") " Jan 09 01:03:04 crc kubenswrapper[4945]: I0109 01:03:04.786614 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-config-data\") pod \"702288b9-1bd9-476d-bf96-6bf8f3e73b7c\" (UID: \"702288b9-1bd9-476d-bf96-6bf8f3e73b7c\") " Jan 09 01:03:04 crc kubenswrapper[4945]: I0109 01:03:04.786741 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-combined-ca-bundle\") pod \"702288b9-1bd9-476d-bf96-6bf8f3e73b7c\" (UID: \"702288b9-1bd9-476d-bf96-6bf8f3e73b7c\") " Jan 09 01:03:04 crc kubenswrapper[4945]: I0109 01:03:04.791589 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-kube-api-access-r24kq" (OuterVolumeSpecName: "kube-api-access-r24kq") pod "702288b9-1bd9-476d-bf96-6bf8f3e73b7c" (UID: "702288b9-1bd9-476d-bf96-6bf8f3e73b7c"). InnerVolumeSpecName "kube-api-access-r24kq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:03:04 crc kubenswrapper[4945]: I0109 01:03:04.815126 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "702288b9-1bd9-476d-bf96-6bf8f3e73b7c" (UID: "702288b9-1bd9-476d-bf96-6bf8f3e73b7c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:03:04 crc kubenswrapper[4945]: I0109 01:03:04.856233 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-config-data" (OuterVolumeSpecName: "config-data") pod "702288b9-1bd9-476d-bf96-6bf8f3e73b7c" (UID: "702288b9-1bd9-476d-bf96-6bf8f3e73b7c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:03:04 crc kubenswrapper[4945]: I0109 01:03:04.889586 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r24kq\" (UniqueName: \"kubernetes.io/projected/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-kube-api-access-r24kq\") on node \"crc\" DevicePath \"\"" Jan 09 01:03:04 crc kubenswrapper[4945]: I0109 01:03:04.889640 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:03:04 crc kubenswrapper[4945]: I0109 01:03:04.889651 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/702288b9-1bd9-476d-bf96-6bf8f3e73b7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:03:05 crc kubenswrapper[4945]: I0109 01:03:05.307445 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-hvf5r" event={"ID":"702288b9-1bd9-476d-bf96-6bf8f3e73b7c","Type":"ContainerDied","Data":"e610f7bfbbe44e18ea073ff52e2af0fe278ff8a4fbfa71b419aa8099b9d79ab6"} Jan 09 01:03:05 crc kubenswrapper[4945]: I0109 01:03:05.307494 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e610f7bfbbe44e18ea073ff52e2af0fe278ff8a4fbfa71b419aa8099b9d79ab6" Jan 09 01:03:05 crc kubenswrapper[4945]: I0109 01:03:05.307515 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-hvf5r" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.190345 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5dcc459d4-6rx9t"] Jan 09 01:03:06 crc kubenswrapper[4945]: E0109 01:03:06.191135 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="702288b9-1bd9-476d-bf96-6bf8f3e73b7c" containerName="heat-db-sync" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.191150 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="702288b9-1bd9-476d-bf96-6bf8f3e73b7c" containerName="heat-db-sync" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.191354 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="702288b9-1bd9-476d-bf96-6bf8f3e73b7c" containerName="heat-db-sync" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.192079 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5dcc459d4-6rx9t" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.196801 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-8zqzb" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.197067 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.197272 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.204754 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5dcc459d4-6rx9t"] Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.303074 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7d67bffdcd-94tvm"] Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.304702 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7d67bffdcd-94tvm" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.311385 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.324906 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7d67bffdcd-94tvm"] Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.332303 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef821d4-234b-4c1c-b45e-0b25e6d905c9-combined-ca-bundle\") pod \"heat-engine-5dcc459d4-6rx9t\" (UID: \"9ef821d4-234b-4c1c-b45e-0b25e6d905c9\") " pod="openstack/heat-engine-5dcc459d4-6rx9t" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.332426 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ef821d4-234b-4c1c-b45e-0b25e6d905c9-config-data-custom\") pod \"heat-engine-5dcc459d4-6rx9t\" (UID: \"9ef821d4-234b-4c1c-b45e-0b25e6d905c9\") " pod="openstack/heat-engine-5dcc459d4-6rx9t" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.332519 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-227cn\" (UniqueName: \"kubernetes.io/projected/9ef821d4-234b-4c1c-b45e-0b25e6d905c9-kube-api-access-227cn\") pod \"heat-engine-5dcc459d4-6rx9t\" (UID: \"9ef821d4-234b-4c1c-b45e-0b25e6d905c9\") " pod="openstack/heat-engine-5dcc459d4-6rx9t" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.332571 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ef821d4-234b-4c1c-b45e-0b25e6d905c9-config-data\") pod \"heat-engine-5dcc459d4-6rx9t\" (UID: \"9ef821d4-234b-4c1c-b45e-0b25e6d905c9\") " pod="openstack/heat-engine-5dcc459d4-6rx9t" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.408656 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-94594dc99-zmqzt"] Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.426523 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-94594dc99-zmqzt" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.438462 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.442138 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pmdm\" (UniqueName: \"kubernetes.io/projected/e5f3554f-eb28-47b5-8974-5d0811b2b49f-kube-api-access-7pmdm\") pod \"heat-api-7d67bffdcd-94tvm\" (UID: \"e5f3554f-eb28-47b5-8974-5d0811b2b49f\") " pod="openstack/heat-api-7d67bffdcd-94tvm" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.442198 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef821d4-234b-4c1c-b45e-0b25e6d905c9-combined-ca-bundle\") pod \"heat-engine-5dcc459d4-6rx9t\" (UID: \"9ef821d4-234b-4c1c-b45e-0b25e6d905c9\") " pod="openstack/heat-engine-5dcc459d4-6rx9t" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.442227 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f3554f-eb28-47b5-8974-5d0811b2b49f-config-data\") pod \"heat-api-7d67bffdcd-94tvm\" (UID: \"e5f3554f-eb28-47b5-8974-5d0811b2b49f\") " pod="openstack/heat-api-7d67bffdcd-94tvm" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.442250 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5f3554f-eb28-47b5-8974-5d0811b2b49f-config-data-custom\") pod \"heat-api-7d67bffdcd-94tvm\" (UID: \"e5f3554f-eb28-47b5-8974-5d0811b2b49f\") " pod="openstack/heat-api-7d67bffdcd-94tvm" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.442286 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ef821d4-234b-4c1c-b45e-0b25e6d905c9-config-data-custom\") pod \"heat-engine-5dcc459d4-6rx9t\" (UID: \"9ef821d4-234b-4c1c-b45e-0b25e6d905c9\") " pod="openstack/heat-engine-5dcc459d4-6rx9t" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.442344 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-227cn\" (UniqueName: \"kubernetes.io/projected/9ef821d4-234b-4c1c-b45e-0b25e6d905c9-kube-api-access-227cn\") pod \"heat-engine-5dcc459d4-6rx9t\" (UID: \"9ef821d4-234b-4c1c-b45e-0b25e6d905c9\") " pod="openstack/heat-engine-5dcc459d4-6rx9t" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.442375 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ef821d4-234b-4c1c-b45e-0b25e6d905c9-config-data\") pod \"heat-engine-5dcc459d4-6rx9t\" (UID: \"9ef821d4-234b-4c1c-b45e-0b25e6d905c9\") " pod="openstack/heat-engine-5dcc459d4-6rx9t" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.442397 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f3554f-eb28-47b5-8974-5d0811b2b49f-combined-ca-bundle\") pod \"heat-api-7d67bffdcd-94tvm\" (UID: \"e5f3554f-eb28-47b5-8974-5d0811b2b49f\") " pod="openstack/heat-api-7d67bffdcd-94tvm" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.446433 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-cfnapi-94594dc99-zmqzt"] Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.459592 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ef821d4-234b-4c1c-b45e-0b25e6d905c9-config-data\") pod \"heat-engine-5dcc459d4-6rx9t\" (UID: \"9ef821d4-234b-4c1c-b45e-0b25e6d905c9\") " pod="openstack/heat-engine-5dcc459d4-6rx9t" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.480195 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ef821d4-234b-4c1c-b45e-0b25e6d905c9-combined-ca-bundle\") pod \"heat-engine-5dcc459d4-6rx9t\" (UID: \"9ef821d4-234b-4c1c-b45e-0b25e6d905c9\") " pod="openstack/heat-engine-5dcc459d4-6rx9t" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.496982 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ef821d4-234b-4c1c-b45e-0b25e6d905c9-config-data-custom\") pod \"heat-engine-5dcc459d4-6rx9t\" (UID: \"9ef821d4-234b-4c1c-b45e-0b25e6d905c9\") " pod="openstack/heat-engine-5dcc459d4-6rx9t" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.497766 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-227cn\" (UniqueName: \"kubernetes.io/projected/9ef821d4-234b-4c1c-b45e-0b25e6d905c9-kube-api-access-227cn\") pod \"heat-engine-5dcc459d4-6rx9t\" (UID: \"9ef821d4-234b-4c1c-b45e-0b25e6d905c9\") " pod="openstack/heat-engine-5dcc459d4-6rx9t" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.545143 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd6a97e-5d89-4301-ad12-96fe5b1ae27e-combined-ca-bundle\") pod \"heat-cfnapi-94594dc99-zmqzt\" (UID: \"cbd6a97e-5d89-4301-ad12-96fe5b1ae27e\") " pod="openstack/heat-cfnapi-94594dc99-zmqzt" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.545194 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd6a97e-5d89-4301-ad12-96fe5b1ae27e-config-data\") pod \"heat-cfnapi-94594dc99-zmqzt\" (UID: \"cbd6a97e-5d89-4301-ad12-96fe5b1ae27e\") " pod="openstack/heat-cfnapi-94594dc99-zmqzt" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.545235 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pmdm\" (UniqueName: \"kubernetes.io/projected/e5f3554f-eb28-47b5-8974-5d0811b2b49f-kube-api-access-7pmdm\") pod \"heat-api-7d67bffdcd-94tvm\" (UID: \"e5f3554f-eb28-47b5-8974-5d0811b2b49f\") " pod="openstack/heat-api-7d67bffdcd-94tvm" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.545256 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk9zp\" (UniqueName: \"kubernetes.io/projected/cbd6a97e-5d89-4301-ad12-96fe5b1ae27e-kube-api-access-nk9zp\") pod \"heat-cfnapi-94594dc99-zmqzt\" (UID: \"cbd6a97e-5d89-4301-ad12-96fe5b1ae27e\") " pod="openstack/heat-cfnapi-94594dc99-zmqzt" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.545289 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f3554f-eb28-47b5-8974-5d0811b2b49f-config-data\") pod \"heat-api-7d67bffdcd-94tvm\" (UID: \"e5f3554f-eb28-47b5-8974-5d0811b2b49f\") " 
pod="openstack/heat-api-7d67bffdcd-94tvm" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.545305 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5f3554f-eb28-47b5-8974-5d0811b2b49f-config-data-custom\") pod \"heat-api-7d67bffdcd-94tvm\" (UID: \"e5f3554f-eb28-47b5-8974-5d0811b2b49f\") " pod="openstack/heat-api-7d67bffdcd-94tvm" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.545385 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbd6a97e-5d89-4301-ad12-96fe5b1ae27e-config-data-custom\") pod \"heat-cfnapi-94594dc99-zmqzt\" (UID: \"cbd6a97e-5d89-4301-ad12-96fe5b1ae27e\") " pod="openstack/heat-cfnapi-94594dc99-zmqzt" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.545419 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f3554f-eb28-47b5-8974-5d0811b2b49f-combined-ca-bundle\") pod \"heat-api-7d67bffdcd-94tvm\" (UID: \"e5f3554f-eb28-47b5-8974-5d0811b2b49f\") " pod="openstack/heat-api-7d67bffdcd-94tvm" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.548678 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5dcc459d4-6rx9t" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.563894 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5f3554f-eb28-47b5-8974-5d0811b2b49f-combined-ca-bundle\") pod \"heat-api-7d67bffdcd-94tvm\" (UID: \"e5f3554f-eb28-47b5-8974-5d0811b2b49f\") " pod="openstack/heat-api-7d67bffdcd-94tvm" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.579269 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5f3554f-eb28-47b5-8974-5d0811b2b49f-config-data\") pod \"heat-api-7d67bffdcd-94tvm\" (UID: \"e5f3554f-eb28-47b5-8974-5d0811b2b49f\") " pod="openstack/heat-api-7d67bffdcd-94tvm" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.590019 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5f3554f-eb28-47b5-8974-5d0811b2b49f-config-data-custom\") pod \"heat-api-7d67bffdcd-94tvm\" (UID: \"e5f3554f-eb28-47b5-8974-5d0811b2b49f\") " pod="openstack/heat-api-7d67bffdcd-94tvm" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.619759 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pmdm\" (UniqueName: \"kubernetes.io/projected/e5f3554f-eb28-47b5-8974-5d0811b2b49f-kube-api-access-7pmdm\") pod \"heat-api-7d67bffdcd-94tvm\" (UID: \"e5f3554f-eb28-47b5-8974-5d0811b2b49f\") " pod="openstack/heat-api-7d67bffdcd-94tvm" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.646905 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk9zp\" (UniqueName: \"kubernetes.io/projected/cbd6a97e-5d89-4301-ad12-96fe5b1ae27e-kube-api-access-nk9zp\") pod \"heat-cfnapi-94594dc99-zmqzt\" (UID: \"cbd6a97e-5d89-4301-ad12-96fe5b1ae27e\") " pod="openstack/heat-cfnapi-94594dc99-zmqzt" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.647090 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/cbd6a97e-5d89-4301-ad12-96fe5b1ae27e-config-data-custom\") pod \"heat-cfnapi-94594dc99-zmqzt\" (UID: \"cbd6a97e-5d89-4301-ad12-96fe5b1ae27e\") " pod="openstack/heat-cfnapi-94594dc99-zmqzt" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.647167 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd6a97e-5d89-4301-ad12-96fe5b1ae27e-combined-ca-bundle\") pod \"heat-cfnapi-94594dc99-zmqzt\" (UID: \"cbd6a97e-5d89-4301-ad12-96fe5b1ae27e\") " pod="openstack/heat-cfnapi-94594dc99-zmqzt" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.647186 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd6a97e-5d89-4301-ad12-96fe5b1ae27e-config-data\") pod \"heat-cfnapi-94594dc99-zmqzt\" (UID: \"cbd6a97e-5d89-4301-ad12-96fe5b1ae27e\") " pod="openstack/heat-cfnapi-94594dc99-zmqzt" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.653503 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7d67bffdcd-94tvm" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.673974 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbd6a97e-5d89-4301-ad12-96fe5b1ae27e-config-data-custom\") pod \"heat-cfnapi-94594dc99-zmqzt\" (UID: \"cbd6a97e-5d89-4301-ad12-96fe5b1ae27e\") " pod="openstack/heat-cfnapi-94594dc99-zmqzt" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.674110 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbd6a97e-5d89-4301-ad12-96fe5b1ae27e-config-data\") pod \"heat-cfnapi-94594dc99-zmqzt\" (UID: \"cbd6a97e-5d89-4301-ad12-96fe5b1ae27e\") " pod="openstack/heat-cfnapi-94594dc99-zmqzt" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.674899 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbd6a97e-5d89-4301-ad12-96fe5b1ae27e-combined-ca-bundle\") pod \"heat-cfnapi-94594dc99-zmqzt\" (UID: \"cbd6a97e-5d89-4301-ad12-96fe5b1ae27e\") " pod="openstack/heat-cfnapi-94594dc99-zmqzt" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.681591 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk9zp\" (UniqueName: \"kubernetes.io/projected/cbd6a97e-5d89-4301-ad12-96fe5b1ae27e-kube-api-access-nk9zp\") pod \"heat-cfnapi-94594dc99-zmqzt\" (UID: \"cbd6a97e-5d89-4301-ad12-96fe5b1ae27e\") " pod="openstack/heat-cfnapi-94594dc99-zmqzt" Jan 09 01:03:06 crc kubenswrapper[4945]: I0109 01:03:06.775086 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-94594dc99-zmqzt" Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.231469 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5dcc459d4-6rx9t"] Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.254168 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7d67bffdcd-94tvm"] Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.334425 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5dcc459d4-6rx9t" event={"ID":"9ef821d4-234b-4c1c-b45e-0b25e6d905c9","Type":"ContainerStarted","Data":"491629006e93f53d02930ea6d268e2a62003d46a5a95e39f54c494efae80e7b8"} Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.336554 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d67bffdcd-94tvm" event={"ID":"e5f3554f-eb28-47b5-8974-5d0811b2b49f","Type":"ContainerStarted","Data":"6665984ae63d616d571bff8b2de172b7fa87f53474a6ec853465c10bbf8520fe"} Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.362938 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-94594dc99-zmqzt"] Jan 09 01:03:07 crc kubenswrapper[4945]: W0109 01:03:07.371456 4945 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod702288b9_1bd9_476d_bf96_6bf8f3e73b7c.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod702288b9_1bd9_476d_bf96_6bf8f3e73b7c.slice: no such file or directory Jan 09 01:03:07 crc kubenswrapper[4945]: E0109 01:03:07.640665 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e15f948_cd94_4e42_9b32_e7d3e77920a0.slice/crio-conmon-2b427fbe98d03533682547f241f5eafd820bb2b12a59e1892efaebe2d8173550.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e15f948_cd94_4e42_9b32_e7d3e77920a0.slice/crio-fe8ca679b84908158d77321bf3cd4db7b2b2825fc8f06c55c2c8f59544d30e61\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e15f948_cd94_4e42_9b32_e7d3e77920a0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode152a1d4_b6dc_4626_b45e_e2a29dcb10b0.slice/crio-f9b09e578dc934071b4eca8e83fbafab83e1daaf334fce344e13e21d357fa37a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf7d2aa4_f30f_4cf4_a7a5_f7f7c708de8d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode152a1d4_b6dc_4626_b45e_e2a29dcb10b0.slice/crio-conmon-f9b09e578dc934071b4eca8e83fbafab83e1daaf334fce344e13e21d357fa37a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf7d2aa4_f30f_4cf4_a7a5_f7f7c708de8d.slice/crio-e431d5619a23cb556dd6291c374816e83f91fe803fc70b27213b5432ba039be7\": RecentStats: unable to find data in memory cache]" Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.773009 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.880294 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-horizon-secret-key\") pod \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.880451 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-logs\") pod \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.880692 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-scripts\") pod \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.880730 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5hcl\" (UniqueName: \"kubernetes.io/projected/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-kube-api-access-w5hcl\") pod \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.880813 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-config-data\") pod \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\" (UID: \"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0\") " Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.880830 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-logs" (OuterVolumeSpecName: "logs") pod "e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" (UID: "e152a1d4-b6dc-4626-b45e-e2a29dcb10b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.881226 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-logs\") on node \"crc\" DevicePath \"\"" Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.888897 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" (UID: "e152a1d4-b6dc-4626-b45e-e2a29dcb10b0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.894854 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-kube-api-access-w5hcl" (OuterVolumeSpecName: "kube-api-access-w5hcl") pod "e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" (UID: "e152a1d4-b6dc-4626-b45e-e2a29dcb10b0"). InnerVolumeSpecName "kube-api-access-w5hcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.905449 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-config-data" (OuterVolumeSpecName: "config-data") pod "e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" (UID: "e152a1d4-b6dc-4626-b45e-e2a29dcb10b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.907943 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-scripts" (OuterVolumeSpecName: "scripts") pod "e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" (UID: "e152a1d4-b6dc-4626-b45e-e2a29dcb10b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.986599 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.986637 4945 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.986650 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 01:03:07 crc kubenswrapper[4945]: I0109 01:03:07.986667 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5hcl\" (UniqueName: \"kubernetes.io/projected/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0-kube-api-access-w5hcl\") on node \"crc\" DevicePath \"\"" Jan 09 01:03:08 crc kubenswrapper[4945]: I0109 01:03:08.000414 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:03:08 crc kubenswrapper[4945]: E0109 01:03:08.000741 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:03:08 crc kubenswrapper[4945]: I0109 01:03:08.035063 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:03:08 crc kubenswrapper[4945]: I0109 01:03:08.358233 4945 generic.go:334] "Generic (PLEG): container finished" podID="e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" containerID="f9b09e578dc934071b4eca8e83fbafab83e1daaf334fce344e13e21d357fa37a" exitCode=137 Jan 09 01:03:08 crc kubenswrapper[4945]: I0109 01:03:08.358604 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7787b6b4f7-q4bq4" event={"ID":"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0","Type":"ContainerDied","Data":"f9b09e578dc934071b4eca8e83fbafab83e1daaf334fce344e13e21d357fa37a"} Jan 09 01:03:08 crc kubenswrapper[4945]: I0109 01:03:08.358638 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7787b6b4f7-q4bq4" 
event={"ID":"e152a1d4-b6dc-4626-b45e-e2a29dcb10b0","Type":"ContainerDied","Data":"9d1cbf19cd2d63c0dd0cf099c87e0d5ec9e2d0e656ce06581f6215dd559ea27b"} Jan 09 01:03:08 crc kubenswrapper[4945]: I0109 01:03:08.358658 4945 scope.go:117] "RemoveContainer" containerID="e8a244f776d29f0207d1668e5aeb2c8d2e7ae9b8e98d9bfa336ac0ba9d99bb32" Jan 09 01:03:08 crc kubenswrapper[4945]: I0109 01:03:08.358795 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7787b6b4f7-q4bq4" Jan 09 01:03:08 crc kubenswrapper[4945]: I0109 01:03:08.366807 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-94594dc99-zmqzt" event={"ID":"cbd6a97e-5d89-4301-ad12-96fe5b1ae27e","Type":"ContainerStarted","Data":"2e17b7520c3341f8c4453d66bc19e3cc2a877bfe5ae5f3f125e81365f48ed35d"} Jan 09 01:03:08 crc kubenswrapper[4945]: I0109 01:03:08.370175 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5dcc459d4-6rx9t" event={"ID":"9ef821d4-234b-4c1c-b45e-0b25e6d905c9","Type":"ContainerStarted","Data":"bce0e3abcae71b90ef872a031ae7dc9fc53c2ce9e93631b8786e579d9ecfbac9"} Jan 09 01:03:08 crc kubenswrapper[4945]: I0109 01:03:08.370892 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5dcc459d4-6rx9t" Jan 09 01:03:08 crc kubenswrapper[4945]: I0109 01:03:08.399356 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5dcc459d4-6rx9t" podStartSLOduration=2.399328088 podStartE2EDuration="2.399328088s" podCreationTimestamp="2026-01-09 01:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 01:03:08.384571155 +0000 UTC m=+6458.695730101" watchObservedRunningTime="2026-01-09 01:03:08.399328088 +0000 UTC m=+6458.710487034" Jan 09 01:03:08 crc kubenswrapper[4945]: I0109 01:03:08.414073 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7787b6b4f7-q4bq4"] Jan 09 01:03:08 crc kubenswrapper[4945]: I0109 01:03:08.427258 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7787b6b4f7-q4bq4"] Jan 09 01:03:08 crc kubenswrapper[4945]: I0109 01:03:08.590787 4945 scope.go:117] "RemoveContainer" containerID="f9b09e578dc934071b4eca8e83fbafab83e1daaf334fce344e13e21d357fa37a" Jan 09 01:03:10 crc kubenswrapper[4945]: I0109 01:03:10.025505 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" path="/var/lib/kubelet/pods/e152a1d4-b6dc-4626-b45e-e2a29dcb10b0/volumes" Jan 09 01:03:10 crc kubenswrapper[4945]: I0109 01:03:10.049757 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-9b6dcf455-m4j6c" Jan 09 01:03:10 crc kubenswrapper[4945]: I0109 01:03:10.113622 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8694df6bfc-2hqjt"] Jan 09 01:03:10 crc kubenswrapper[4945]: I0109 01:03:10.114190 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8694df6bfc-2hqjt" podUID="5107d597-feb6-4d70-9587-1b0f23041c5d" containerName="horizon-log" containerID="cri-o://a3f0a8720461444488136834e9a70eed05845e2f94239024313cd7ee11747354" gracePeriod=30 Jan 09 01:03:10 crc kubenswrapper[4945]: I0109 01:03:10.114589 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8694df6bfc-2hqjt" podUID="5107d597-feb6-4d70-9587-1b0f23041c5d" 
containerName="horizon" containerID="cri-o://44e6aef6594ccdbd226b3da459730305a11634b8c5dd7667fbb2a708ae7f5286" gracePeriod=30 Jan 09 01:03:10 crc kubenswrapper[4945]: I0109 01:03:10.242648 4945 scope.go:117] "RemoveContainer" containerID="e8a244f776d29f0207d1668e5aeb2c8d2e7ae9b8e98d9bfa336ac0ba9d99bb32" Jan 09 01:03:10 crc kubenswrapper[4945]: E0109 01:03:10.243675 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8a244f776d29f0207d1668e5aeb2c8d2e7ae9b8e98d9bfa336ac0ba9d99bb32\": container with ID starting with e8a244f776d29f0207d1668e5aeb2c8d2e7ae9b8e98d9bfa336ac0ba9d99bb32 not found: ID does not exist" containerID="e8a244f776d29f0207d1668e5aeb2c8d2e7ae9b8e98d9bfa336ac0ba9d99bb32" Jan 09 01:03:10 crc kubenswrapper[4945]: I0109 01:03:10.243712 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8a244f776d29f0207d1668e5aeb2c8d2e7ae9b8e98d9bfa336ac0ba9d99bb32"} err="failed to get container status \"e8a244f776d29f0207d1668e5aeb2c8d2e7ae9b8e98d9bfa336ac0ba9d99bb32\": rpc error: code = NotFound desc = could not find container \"e8a244f776d29f0207d1668e5aeb2c8d2e7ae9b8e98d9bfa336ac0ba9d99bb32\": container with ID starting with e8a244f776d29f0207d1668e5aeb2c8d2e7ae9b8e98d9bfa336ac0ba9d99bb32 not found: ID does not exist" Jan 09 01:03:10 crc kubenswrapper[4945]: I0109 01:03:10.243735 4945 scope.go:117] "RemoveContainer" containerID="f9b09e578dc934071b4eca8e83fbafab83e1daaf334fce344e13e21d357fa37a" Jan 09 01:03:10 crc kubenswrapper[4945]: E0109 01:03:10.244274 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9b09e578dc934071b4eca8e83fbafab83e1daaf334fce344e13e21d357fa37a\": container with ID starting with f9b09e578dc934071b4eca8e83fbafab83e1daaf334fce344e13e21d357fa37a not found: ID does not exist" containerID="f9b09e578dc934071b4eca8e83fbafab83e1daaf334fce344e13e21d357fa37a" Jan 09 01:03:10 crc kubenswrapper[4945]: I0109 01:03:10.244301 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b09e578dc934071b4eca8e83fbafab83e1daaf334fce344e13e21d357fa37a"} err="failed to get container status \"f9b09e578dc934071b4eca8e83fbafab83e1daaf334fce344e13e21d357fa37a\": rpc error: code = NotFound desc = could not find container \"f9b09e578dc934071b4eca8e83fbafab83e1daaf334fce344e13e21d357fa37a\": container with ID starting with f9b09e578dc934071b4eca8e83fbafab83e1daaf334fce344e13e21d357fa37a not found: ID does not exist" Jan 09 01:03:11 crc kubenswrapper[4945]: I0109 01:03:11.429839 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d67bffdcd-94tvm" event={"ID":"e5f3554f-eb28-47b5-8974-5d0811b2b49f","Type":"ContainerStarted","Data":"0a9d9f2997ee64b4848de1ffe893864e7fbfc20d25819187b827e736b6a20e3a"} Jan 09 01:03:11 crc kubenswrapper[4945]: I0109 01:03:11.430254 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7d67bffdcd-94tvm" Jan 09 01:03:11 crc kubenswrapper[4945]: I0109 01:03:11.433162 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-94594dc99-zmqzt" event={"ID":"cbd6a97e-5d89-4301-ad12-96fe5b1ae27e","Type":"ContainerStarted","Data":"496c8a777ece4182442159d6e18225a195529977a6a8cf334c2f7acbcbd05304"} Jan 09 01:03:11 crc kubenswrapper[4945]: I0109 01:03:11.433978 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/heat-cfnapi-94594dc99-zmqzt" Jan 09 01:03:11 crc kubenswrapper[4945]: I0109 01:03:11.467810 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7d67bffdcd-94tvm" podStartSLOduration=2.451544383 podStartE2EDuration="5.467790266s" podCreationTimestamp="2026-01-09 01:03:06 +0000 UTC" firstStartedPulling="2026-01-09 01:03:07.244832638 +0000 UTC m=+6457.555991584" lastFinishedPulling="2026-01-09 01:03:10.261078521 +0000 UTC m=+6460.572237467" observedRunningTime="2026-01-09 01:03:11.466666158 +0000 UTC m=+6461.777825114" watchObservedRunningTime="2026-01-09 01:03:11.467790266 +0000 UTC m=+6461.778949212" Jan 09 01:03:11 crc kubenswrapper[4945]: I0109 01:03:11.485266 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-94594dc99-zmqzt" podStartSLOduration=2.6423714289999998 podStartE2EDuration="5.485246545s" podCreationTimestamp="2026-01-09 01:03:06 +0000 UTC" firstStartedPulling="2026-01-09 01:03:07.420406529 +0000 UTC m=+6457.731565465" lastFinishedPulling="2026-01-09 01:03:10.263281625 +0000 UTC m=+6460.574440581" observedRunningTime="2026-01-09 01:03:11.484774334 +0000 UTC m=+6461.795933280" watchObservedRunningTime="2026-01-09 01:03:11.485246545 +0000 UTC m=+6461.796405491" Jan 09 01:03:13 crc kubenswrapper[4945]: I0109 01:03:13.451109 4945 generic.go:334] "Generic (PLEG): container finished" podID="5107d597-feb6-4d70-9587-1b0f23041c5d" containerID="44e6aef6594ccdbd226b3da459730305a11634b8c5dd7667fbb2a708ae7f5286" exitCode=0 Jan 09 01:03:13 crc kubenswrapper[4945]: I0109 01:03:13.451190 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8694df6bfc-2hqjt" event={"ID":"5107d597-feb6-4d70-9587-1b0f23041c5d","Type":"ContainerDied","Data":"44e6aef6594ccdbd226b3da459730305a11634b8c5dd7667fbb2a708ae7f5286"} Jan 09 01:03:13 crc kubenswrapper[4945]: I0109 01:03:13.683771 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8694df6bfc-2hqjt" podUID="5107d597-feb6-4d70-9587-1b0f23041c5d" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.117:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.117:8080: connect: connection refused" Jan 09 01:03:18 crc kubenswrapper[4945]: I0109 01:03:18.013437 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7d67bffdcd-94tvm" Jan 09 01:03:18 crc kubenswrapper[4945]: I0109 01:03:18.132542 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-94594dc99-zmqzt" Jan 09 01:03:23 crc kubenswrapper[4945]: I0109 01:03:23.003825 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:03:23 crc kubenswrapper[4945]: E0109 01:03:23.007968 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:03:23 crc kubenswrapper[4945]: I0109 01:03:23.684144 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8694df6bfc-2hqjt" podUID="5107d597-feb6-4d70-9587-1b0f23041c5d" containerName="horizon" probeResult="failure" output="Get 
\"http://10.217.1.117:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.117:8080: connect: connection refused" Jan 09 01:03:24 crc kubenswrapper[4945]: I0109 01:03:24.045967 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-bd87j"] Jan 09 01:03:24 crc kubenswrapper[4945]: I0109 01:03:24.057791 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0ba5-account-create-update-vsb75"] Jan 09 01:03:24 crc kubenswrapper[4945]: I0109 01:03:24.066113 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0ba5-account-create-update-vsb75"] Jan 09 01:03:24 crc kubenswrapper[4945]: I0109 01:03:24.073640 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-bd87j"] Jan 09 01:03:26 crc kubenswrapper[4945]: I0109 01:03:26.014458 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3be6f33d-bbf7-4211-bb12-47972c8b03b3" path="/var/lib/kubelet/pods/3be6f33d-bbf7-4211-bb12-47972c8b03b3/volumes" Jan 09 01:03:26 crc kubenswrapper[4945]: I0109 01:03:26.016504 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30" path="/var/lib/kubelet/pods/9a0f38d2-f7c4-4be9-a7cf-3cc1b3f97a30/volumes" Jan 09 01:03:26 crc kubenswrapper[4945]: I0109 01:03:26.589209 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5dcc459d4-6rx9t" Jan 09 01:03:33 crc kubenswrapper[4945]: I0109 01:03:33.053439 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-4jxxq"] Jan 09 01:03:33 crc kubenswrapper[4945]: I0109 01:03:33.067102 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-4jxxq"] Jan 09 01:03:33 crc kubenswrapper[4945]: I0109 01:03:33.685822 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8694df6bfc-2hqjt" podUID="5107d597-feb6-4d70-9587-1b0f23041c5d" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.117:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.117:8080: connect: connection refused" Jan 09 01:03:33 crc kubenswrapper[4945]: I0109 01:03:33.686637 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:03:34 crc kubenswrapper[4945]: I0109 01:03:34.019931 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6d8258a-52f0-455d-bdf8-2c039786ca6d" path="/var/lib/kubelet/pods/a6d8258a-52f0-455d-bdf8-2c039786ca6d/volumes" Jan 09 01:03:35 crc kubenswrapper[4945]: I0109 01:03:35.001683 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:03:35 crc kubenswrapper[4945]: E0109 01:03:35.002021 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:03:38 crc kubenswrapper[4945]: I0109 01:03:38.196643 4945 scope.go:117] "RemoveContainer" containerID="e5b420d4242cf12ffea08b34758342b1335f8b9056370de4c5cfd1decea5a462" Jan 09 01:03:38 crc kubenswrapper[4945]: I0109 01:03:38.225320 4945 scope.go:117] "RemoveContainer" 
containerID="9d92e41a0deac043add83594d3f3c29fcde9aac4f2081b0baa89f19a44518003" Jan 09 01:03:38 crc kubenswrapper[4945]: I0109 01:03:38.297220 4945 scope.go:117] "RemoveContainer" containerID="a75b276aeabb8e5bcf22cb4cf573b2a43289614e80ba9b39f68b04fa2801fabe" Jan 09 01:03:38 crc kubenswrapper[4945]: I0109 01:03:38.324730 4945 scope.go:117] "RemoveContainer" containerID="061f8769d87c721138e1e67fde4108c40a890c9a5bc792da70f0b50296777404" Jan 09 01:03:39 crc kubenswrapper[4945]: I0109 01:03:39.861735 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp"] Jan 09 01:03:39 crc kubenswrapper[4945]: E0109 01:03:39.862592 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" containerName="horizon" Jan 09 01:03:39 crc kubenswrapper[4945]: I0109 01:03:39.862609 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" containerName="horizon" Jan 09 01:03:39 crc kubenswrapper[4945]: E0109 01:03:39.862630 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" containerName="horizon-log" Jan 09 01:03:39 crc kubenswrapper[4945]: I0109 01:03:39.862639 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" containerName="horizon-log" Jan 09 01:03:39 crc kubenswrapper[4945]: I0109 01:03:39.862912 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" containerName="horizon" Jan 09 01:03:39 crc kubenswrapper[4945]: I0109 01:03:39.862936 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e152a1d4-b6dc-4626-b45e-e2a29dcb10b0" containerName="horizon-log" Jan 09 01:03:39 crc kubenswrapper[4945]: I0109 01:03:39.865037 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" Jan 09 01:03:39 crc kubenswrapper[4945]: I0109 01:03:39.868182 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 09 01:03:39 crc kubenswrapper[4945]: I0109 01:03:39.888453 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp"] Jan 09 01:03:39 crc kubenswrapper[4945]: I0109 01:03:39.920417 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/674b6d94-cd48-43a3-a15b-748ef00b6579-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp\" (UID: \"674b6d94-cd48-43a3-a15b-748ef00b6579\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" Jan 09 01:03:39 crc kubenswrapper[4945]: I0109 01:03:39.920485 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/674b6d94-cd48-43a3-a15b-748ef00b6579-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp\" (UID: \"674b6d94-cd48-43a3-a15b-748ef00b6579\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" Jan 09 01:03:39 crc kubenswrapper[4945]: I0109 01:03:39.921057 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4d5h\" (UniqueName: \"kubernetes.io/projected/674b6d94-cd48-43a3-a15b-748ef00b6579-kube-api-access-z4d5h\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp\" (UID: \"674b6d94-cd48-43a3-a15b-748ef00b6579\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.022574 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/674b6d94-cd48-43a3-a15b-748ef00b6579-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp\" (UID: \"674b6d94-cd48-43a3-a15b-748ef00b6579\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.022692 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4d5h\" (UniqueName: \"kubernetes.io/projected/674b6d94-cd48-43a3-a15b-748ef00b6579-kube-api-access-z4d5h\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp\" (UID: \"674b6d94-cd48-43a3-a15b-748ef00b6579\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.022763 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/674b6d94-cd48-43a3-a15b-748ef00b6579-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp\" (UID: \"674b6d94-cd48-43a3-a15b-748ef00b6579\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.023276 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/674b6d94-cd48-43a3-a15b-748ef00b6579-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp\" (UID: \"674b6d94-cd48-43a3-a15b-748ef00b6579\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.023298 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/674b6d94-cd48-43a3-a15b-748ef00b6579-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp\" (UID: \"674b6d94-cd48-43a3-a15b-748ef00b6579\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.050172 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4d5h\" (UniqueName: \"kubernetes.io/projected/674b6d94-cd48-43a3-a15b-748ef00b6579-kube-api-access-z4d5h\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp\" (UID: \"674b6d94-cd48-43a3-a15b-748ef00b6579\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.196022 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.678312 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.727030 4945 generic.go:334] "Generic (PLEG): container finished" podID="5107d597-feb6-4d70-9587-1b0f23041c5d" containerID="a3f0a8720461444488136834e9a70eed05845e2f94239024313cd7ee11747354" exitCode=137 Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.727077 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8694df6bfc-2hqjt" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.727113 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8694df6bfc-2hqjt" event={"ID":"5107d597-feb6-4d70-9587-1b0f23041c5d","Type":"ContainerDied","Data":"a3f0a8720461444488136834e9a70eed05845e2f94239024313cd7ee11747354"} Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.727438 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8694df6bfc-2hqjt" event={"ID":"5107d597-feb6-4d70-9587-1b0f23041c5d","Type":"ContainerDied","Data":"b1006e864328107cc77f2f27b6a58e63e15f507bb864e4846fbc4837471b94fe"} Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.727464 4945 scope.go:117] "RemoveContainer" containerID="44e6aef6594ccdbd226b3da459730305a11634b8c5dd7667fbb2a708ae7f5286" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.780431 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp"] Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.841402 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5107d597-feb6-4d70-9587-1b0f23041c5d-config-data\") pod \"5107d597-feb6-4d70-9587-1b0f23041c5d\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.841550 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5107d597-feb6-4d70-9587-1b0f23041c5d-logs\") pod \"5107d597-feb6-4d70-9587-1b0f23041c5d\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.841722 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5107d597-feb6-4d70-9587-1b0f23041c5d-horizon-secret-key\") pod \"5107d597-feb6-4d70-9587-1b0f23041c5d\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.841903 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5d27\" (UniqueName: \"kubernetes.io/projected/5107d597-feb6-4d70-9587-1b0f23041c5d-kube-api-access-p5d27\") pod \"5107d597-feb6-4d70-9587-1b0f23041c5d\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.842019 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5107d597-feb6-4d70-9587-1b0f23041c5d-scripts\") pod \"5107d597-feb6-4d70-9587-1b0f23041c5d\" (UID: \"5107d597-feb6-4d70-9587-1b0f23041c5d\") " Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.842390 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5107d597-feb6-4d70-9587-1b0f23041c5d-logs" (OuterVolumeSpecName: "logs") pod "5107d597-feb6-4d70-9587-1b0f23041c5d" (UID: "5107d597-feb6-4d70-9587-1b0f23041c5d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.847959 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5107d597-feb6-4d70-9587-1b0f23041c5d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5107d597-feb6-4d70-9587-1b0f23041c5d" (UID: "5107d597-feb6-4d70-9587-1b0f23041c5d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.848140 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5107d597-feb6-4d70-9587-1b0f23041c5d-kube-api-access-p5d27" (OuterVolumeSpecName: "kube-api-access-p5d27") pod "5107d597-feb6-4d70-9587-1b0f23041c5d" (UID: "5107d597-feb6-4d70-9587-1b0f23041c5d"). InnerVolumeSpecName "kube-api-access-p5d27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.866570 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5107d597-feb6-4d70-9587-1b0f23041c5d-config-data" (OuterVolumeSpecName: "config-data") pod "5107d597-feb6-4d70-9587-1b0f23041c5d" (UID: "5107d597-feb6-4d70-9587-1b0f23041c5d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.867538 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5107d597-feb6-4d70-9587-1b0f23041c5d-scripts" (OuterVolumeSpecName: "scripts") pod "5107d597-feb6-4d70-9587-1b0f23041c5d" (UID: "5107d597-feb6-4d70-9587-1b0f23041c5d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.883379 4945 scope.go:117] "RemoveContainer" containerID="a3f0a8720461444488136834e9a70eed05845e2f94239024313cd7ee11747354" Jan 09 01:03:40 crc kubenswrapper[4945]: W0109 01:03:40.889590 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod674b6d94_cd48_43a3_a15b_748ef00b6579.slice/crio-2116bf2d96c23a32c01a8b3f0b07f8198fbe20e9fa603efb94a50f043c9cca72 WatchSource:0}: Error finding container 2116bf2d96c23a32c01a8b3f0b07f8198fbe20e9fa603efb94a50f043c9cca72: Status 404 returned error can't find the container with id 2116bf2d96c23a32c01a8b3f0b07f8198fbe20e9fa603efb94a50f043c9cca72 Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.900734 4945 scope.go:117] "RemoveContainer" containerID="44e6aef6594ccdbd226b3da459730305a11634b8c5dd7667fbb2a708ae7f5286" Jan 09 01:03:40 crc kubenswrapper[4945]: E0109 01:03:40.901656 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44e6aef6594ccdbd226b3da459730305a11634b8c5dd7667fbb2a708ae7f5286\": container with ID starting with 44e6aef6594ccdbd226b3da459730305a11634b8c5dd7667fbb2a708ae7f5286 not found: ID does not exist" containerID="44e6aef6594ccdbd226b3da459730305a11634b8c5dd7667fbb2a708ae7f5286" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.901741 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44e6aef6594ccdbd226b3da459730305a11634b8c5dd7667fbb2a708ae7f5286"} err="failed to get container status \"44e6aef6594ccdbd226b3da459730305a11634b8c5dd7667fbb2a708ae7f5286\": rpc error: code = NotFound desc = 
could not find container \"44e6aef6594ccdbd226b3da459730305a11634b8c5dd7667fbb2a708ae7f5286\": container with ID starting with 44e6aef6594ccdbd226b3da459730305a11634b8c5dd7667fbb2a708ae7f5286 not found: ID does not exist" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.901766 4945 scope.go:117] "RemoveContainer" containerID="a3f0a8720461444488136834e9a70eed05845e2f94239024313cd7ee11747354" Jan 09 01:03:40 crc kubenswrapper[4945]: E0109 01:03:40.901970 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3f0a8720461444488136834e9a70eed05845e2f94239024313cd7ee11747354\": container with ID starting with a3f0a8720461444488136834e9a70eed05845e2f94239024313cd7ee11747354 not found: ID does not exist" containerID="a3f0a8720461444488136834e9a70eed05845e2f94239024313cd7ee11747354" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.902000 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f0a8720461444488136834e9a70eed05845e2f94239024313cd7ee11747354"} err="failed to get container status \"a3f0a8720461444488136834e9a70eed05845e2f94239024313cd7ee11747354\": rpc error: code = NotFound desc = could not find container \"a3f0a8720461444488136834e9a70eed05845e2f94239024313cd7ee11747354\": container with ID starting with a3f0a8720461444488136834e9a70eed05845e2f94239024313cd7ee11747354 not found: ID does not exist" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.944372 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5107d597-feb6-4d70-9587-1b0f23041c5d-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.944413 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5107d597-feb6-4d70-9587-1b0f23041c5d-logs\") on node \"crc\" DevicePath \"\"" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.944423 4945 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5107d597-feb6-4d70-9587-1b0f23041c5d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.944435 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5d27\" (UniqueName: \"kubernetes.io/projected/5107d597-feb6-4d70-9587-1b0f23041c5d-kube-api-access-p5d27\") on node \"crc\" DevicePath \"\"" Jan 09 01:03:40 crc kubenswrapper[4945]: I0109 01:03:40.944445 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5107d597-feb6-4d70-9587-1b0f23041c5d-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 01:03:41 crc kubenswrapper[4945]: I0109 01:03:41.074309 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8694df6bfc-2hqjt"] Jan 09 01:03:41 crc kubenswrapper[4945]: I0109 01:03:41.085722 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-8694df6bfc-2hqjt"] Jan 09 01:03:41 crc kubenswrapper[4945]: I0109 01:03:41.758774 4945 generic.go:334] "Generic (PLEG): container finished" podID="674b6d94-cd48-43a3-a15b-748ef00b6579" containerID="d4b1b777c450e911b5dc5cf7ad1bcf83b15da7cf1da55f263b51194b899c19f4" exitCode=0 Jan 09 01:03:41 crc kubenswrapper[4945]: I0109 01:03:41.759128 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" event={"ID":"674b6d94-cd48-43a3-a15b-748ef00b6579","Type":"ContainerDied","Data":"d4b1b777c450e911b5dc5cf7ad1bcf83b15da7cf1da55f263b51194b899c19f4"} Jan 09 01:03:41 crc kubenswrapper[4945]: I0109 01:03:41.759161 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" event={"ID":"674b6d94-cd48-43a3-a15b-748ef00b6579","Type":"ContainerStarted","Data":"2116bf2d96c23a32c01a8b3f0b07f8198fbe20e9fa603efb94a50f043c9cca72"} Jan 09 01:03:42 crc kubenswrapper[4945]: I0109 01:03:42.012214 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5107d597-feb6-4d70-9587-1b0f23041c5d" path="/var/lib/kubelet/pods/5107d597-feb6-4d70-9587-1b0f23041c5d/volumes" Jan 09 01:03:43 crc kubenswrapper[4945]: I0109 01:03:43.781520 4945 generic.go:334] "Generic (PLEG): container finished" podID="674b6d94-cd48-43a3-a15b-748ef00b6579" containerID="44ba010b8e78335b3c0bba7f1ffea90497c8a0c2980483b2815dce0984ec31bf" exitCode=0 Jan 09 01:03:43 crc kubenswrapper[4945]: I0109 01:03:43.781779 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" event={"ID":"674b6d94-cd48-43a3-a15b-748ef00b6579","Type":"ContainerDied","Data":"44ba010b8e78335b3c0bba7f1ffea90497c8a0c2980483b2815dce0984ec31bf"} Jan 09 01:03:44 crc kubenswrapper[4945]: I0109 01:03:44.794175 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" event={"ID":"674b6d94-cd48-43a3-a15b-748ef00b6579","Type":"ContainerStarted","Data":"bac4f522b1d3feaec020edef828b83dcd9af5a9ac2daae23f08235a77cef882b"} Jan 09 01:03:44 crc kubenswrapper[4945]: I0109 01:03:44.817294 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" podStartSLOduration=4.488626182 podStartE2EDuration="5.817263867s" podCreationTimestamp="2026-01-09 01:03:39 +0000 UTC" firstStartedPulling="2026-01-09 01:03:41.764591257 +0000 UTC m=+6492.075750203" lastFinishedPulling="2026-01-09 01:03:43.093228942 +0000 UTC m=+6493.404387888" observedRunningTime="2026-01-09 01:03:44.813684079 +0000 UTC m=+6495.124843035" watchObservedRunningTime="2026-01-09 01:03:44.817263867 +0000 UTC m=+6495.128422813" Jan 09 01:03:45 crc kubenswrapper[4945]: I0109 01:03:45.809732 4945 generic.go:334] "Generic (PLEG): container finished" podID="674b6d94-cd48-43a3-a15b-748ef00b6579" containerID="bac4f522b1d3feaec020edef828b83dcd9af5a9ac2daae23f08235a77cef882b" exitCode=0 Jan 09 01:03:45 crc kubenswrapper[4945]: I0109 01:03:45.809811 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" event={"ID":"674b6d94-cd48-43a3-a15b-748ef00b6579","Type":"ContainerDied","Data":"bac4f522b1d3feaec020edef828b83dcd9af5a9ac2daae23f08235a77cef882b"} Jan 09 01:03:47 crc kubenswrapper[4945]: I0109 01:03:47.000922 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:03:47 crc kubenswrapper[4945]: E0109 01:03:47.001609 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:03:47 crc kubenswrapper[4945]: I0109 01:03:47.261489 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" Jan 09 01:03:47 crc kubenswrapper[4945]: I0109 01:03:47.357485 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4d5h\" (UniqueName: \"kubernetes.io/projected/674b6d94-cd48-43a3-a15b-748ef00b6579-kube-api-access-z4d5h\") pod \"674b6d94-cd48-43a3-a15b-748ef00b6579\" (UID: \"674b6d94-cd48-43a3-a15b-748ef00b6579\") " Jan 09 01:03:47 crc kubenswrapper[4945]: I0109 01:03:47.357556 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/674b6d94-cd48-43a3-a15b-748ef00b6579-bundle\") pod \"674b6d94-cd48-43a3-a15b-748ef00b6579\" (UID: \"674b6d94-cd48-43a3-a15b-748ef00b6579\") " Jan 09 01:03:47 crc kubenswrapper[4945]: I0109 01:03:47.357628 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/674b6d94-cd48-43a3-a15b-748ef00b6579-util\") pod \"674b6d94-cd48-43a3-a15b-748ef00b6579\" (UID: \"674b6d94-cd48-43a3-a15b-748ef00b6579\") " Jan 09 01:03:47 crc kubenswrapper[4945]: I0109 01:03:47.360118 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/674b6d94-cd48-43a3-a15b-748ef00b6579-bundle" (OuterVolumeSpecName: "bundle") pod "674b6d94-cd48-43a3-a15b-748ef00b6579" (UID: "674b6d94-cd48-43a3-a15b-748ef00b6579"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:03:47 crc kubenswrapper[4945]: I0109 01:03:47.362889 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/674b6d94-cd48-43a3-a15b-748ef00b6579-kube-api-access-z4d5h" (OuterVolumeSpecName: "kube-api-access-z4d5h") pod "674b6d94-cd48-43a3-a15b-748ef00b6579" (UID: "674b6d94-cd48-43a3-a15b-748ef00b6579"). InnerVolumeSpecName "kube-api-access-z4d5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:03:47 crc kubenswrapper[4945]: I0109 01:03:47.366512 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/674b6d94-cd48-43a3-a15b-748ef00b6579-util" (OuterVolumeSpecName: "util") pod "674b6d94-cd48-43a3-a15b-748ef00b6579" (UID: "674b6d94-cd48-43a3-a15b-748ef00b6579"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:03:47 crc kubenswrapper[4945]: I0109 01:03:47.460094 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4d5h\" (UniqueName: \"kubernetes.io/projected/674b6d94-cd48-43a3-a15b-748ef00b6579-kube-api-access-z4d5h\") on node \"crc\" DevicePath \"\"" Jan 09 01:03:47 crc kubenswrapper[4945]: I0109 01:03:47.460136 4945 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/674b6d94-cd48-43a3-a15b-748ef00b6579-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:03:47 crc kubenswrapper[4945]: I0109 01:03:47.460149 4945 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/674b6d94-cd48-43a3-a15b-748ef00b6579-util\") on node \"crc\" DevicePath \"\"" Jan 09 01:03:47 crc kubenswrapper[4945]: I0109 01:03:47.831762 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" event={"ID":"674b6d94-cd48-43a3-a15b-748ef00b6579","Type":"ContainerDied","Data":"2116bf2d96c23a32c01a8b3f0b07f8198fbe20e9fa603efb94a50f043c9cca72"} Jan 09 01:03:47 crc kubenswrapper[4945]: I0109 01:03:47.831926 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2116bf2d96c23a32c01a8b3f0b07f8198fbe20e9fa603efb94a50f043c9cca72" Jan 09 01:03:47 crc kubenswrapper[4945]: I0109 01:03:47.832085 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.433469 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8t2dg"] Jan 09 01:03:56 crc kubenswrapper[4945]: E0109 01:03:56.434370 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674b6d94-cd48-43a3-a15b-748ef00b6579" containerName="extract" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.434382 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="674b6d94-cd48-43a3-a15b-748ef00b6579" containerName="extract" Jan 09 01:03:56 crc kubenswrapper[4945]: E0109 01:03:56.434403 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674b6d94-cd48-43a3-a15b-748ef00b6579" containerName="util" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.434408 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="674b6d94-cd48-43a3-a15b-748ef00b6579" containerName="util" Jan 09 01:03:56 crc kubenswrapper[4945]: E0109 01:03:56.434434 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5107d597-feb6-4d70-9587-1b0f23041c5d" containerName="horizon" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.434440 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5107d597-feb6-4d70-9587-1b0f23041c5d" containerName="horizon" Jan 09 01:03:56 crc kubenswrapper[4945]: E0109 01:03:56.434450 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674b6d94-cd48-43a3-a15b-748ef00b6579" containerName="pull" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.434456 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="674b6d94-cd48-43a3-a15b-748ef00b6579" containerName="pull" Jan 09 01:03:56 crc kubenswrapper[4945]: E0109 01:03:56.434476 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5107d597-feb6-4d70-9587-1b0f23041c5d" containerName="horizon-log" Jan 09 01:03:56 crc 
kubenswrapper[4945]: I0109 01:03:56.434481 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5107d597-feb6-4d70-9587-1b0f23041c5d" containerName="horizon-log" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.434662 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="5107d597-feb6-4d70-9587-1b0f23041c5d" containerName="horizon-log" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.434683 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="5107d597-feb6-4d70-9587-1b0f23041c5d" containerName="horizon" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.434694 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="674b6d94-cd48-43a3-a15b-748ef00b6579" containerName="extract" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.436050 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8t2dg" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.445301 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8t2dg"] Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.543132 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61c160d2-8041-4fec-aa06-1d1fc66438cf-utilities\") pod \"community-operators-8t2dg\" (UID: \"61c160d2-8041-4fec-aa06-1d1fc66438cf\") " pod="openshift-marketplace/community-operators-8t2dg" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.543433 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95gvb\" (UniqueName: \"kubernetes.io/projected/61c160d2-8041-4fec-aa06-1d1fc66438cf-kube-api-access-95gvb\") pod \"community-operators-8t2dg\" (UID: \"61c160d2-8041-4fec-aa06-1d1fc66438cf\") " pod="openshift-marketplace/community-operators-8t2dg" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.543599 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61c160d2-8041-4fec-aa06-1d1fc66438cf-catalog-content\") pod \"community-operators-8t2dg\" (UID: \"61c160d2-8041-4fec-aa06-1d1fc66438cf\") " pod="openshift-marketplace/community-operators-8t2dg" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.646170 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61c160d2-8041-4fec-aa06-1d1fc66438cf-utilities\") pod \"community-operators-8t2dg\" (UID: \"61c160d2-8041-4fec-aa06-1d1fc66438cf\") " pod="openshift-marketplace/community-operators-8t2dg" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.646290 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95gvb\" (UniqueName: \"kubernetes.io/projected/61c160d2-8041-4fec-aa06-1d1fc66438cf-kube-api-access-95gvb\") pod \"community-operators-8t2dg\" (UID: \"61c160d2-8041-4fec-aa06-1d1fc66438cf\") " pod="openshift-marketplace/community-operators-8t2dg" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.646360 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61c160d2-8041-4fec-aa06-1d1fc66438cf-catalog-content\") pod \"community-operators-8t2dg\" (UID: \"61c160d2-8041-4fec-aa06-1d1fc66438cf\") " 
pod="openshift-marketplace/community-operators-8t2dg" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.646831 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61c160d2-8041-4fec-aa06-1d1fc66438cf-utilities\") pod \"community-operators-8t2dg\" (UID: \"61c160d2-8041-4fec-aa06-1d1fc66438cf\") " pod="openshift-marketplace/community-operators-8t2dg" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.646892 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61c160d2-8041-4fec-aa06-1d1fc66438cf-catalog-content\") pod \"community-operators-8t2dg\" (UID: \"61c160d2-8041-4fec-aa06-1d1fc66438cf\") " pod="openshift-marketplace/community-operators-8t2dg" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.668395 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95gvb\" (UniqueName: \"kubernetes.io/projected/61c160d2-8041-4fec-aa06-1d1fc66438cf-kube-api-access-95gvb\") pod \"community-operators-8t2dg\" (UID: \"61c160d2-8041-4fec-aa06-1d1fc66438cf\") " pod="openshift-marketplace/community-operators-8t2dg" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.760102 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8t2dg" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.813704 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5bzq8"] Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.815079 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5bzq8" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.824594 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-hbdjw" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.824659 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.824594 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.834655 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5bzq8"] Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.966871 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qm98\" (UniqueName: \"kubernetes.io/projected/be31d881-d239-450c-8a45-622a6645072f-kube-api-access-9qm98\") pod \"obo-prometheus-operator-68bc856cb9-5bzq8\" (UID: \"be31d881-d239-450c-8a45-622a6645072f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5bzq8" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.977596 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t"] Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.979165 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.985960 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-pqwwm" Jan 09 01:03:56 crc kubenswrapper[4945]: I0109 01:03:56.986193 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.020026 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t"] Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.060189 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh"] Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.062129 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.069850 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b3d5e1a-f84a-4813-83d1-91e6ca00f5bf-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t\" (UID: \"0b3d5e1a-f84a-4813-83d1-91e6ca00f5bf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.069905 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b3d5e1a-f84a-4813-83d1-91e6ca00f5bf-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t\" (UID: \"0b3d5e1a-f84a-4813-83d1-91e6ca00f5bf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.070028 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qm98\" (UniqueName: \"kubernetes.io/projected/be31d881-d239-450c-8a45-622a6645072f-kube-api-access-9qm98\") pod \"obo-prometheus-operator-68bc856cb9-5bzq8\" (UID: \"be31d881-d239-450c-8a45-622a6645072f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5bzq8" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.084243 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh"] Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.109840 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qm98\" (UniqueName: \"kubernetes.io/projected/be31d881-d239-450c-8a45-622a6645072f-kube-api-access-9qm98\") pod \"obo-prometheus-operator-68bc856cb9-5bzq8\" (UID: \"be31d881-d239-450c-8a45-622a6645072f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5bzq8" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.172187 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/72c8983c-93b8-44f7-bbe1-9e8d048f6b3f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh\" (UID: 
\"72c8983c-93b8-44f7-bbe1-9e8d048f6b3f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.172245 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/72c8983c-93b8-44f7-bbe1-9e8d048f6b3f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh\" (UID: \"72c8983c-93b8-44f7-bbe1-9e8d048f6b3f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.172342 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b3d5e1a-f84a-4813-83d1-91e6ca00f5bf-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t\" (UID: \"0b3d5e1a-f84a-4813-83d1-91e6ca00f5bf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.172368 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b3d5e1a-f84a-4813-83d1-91e6ca00f5bf-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t\" (UID: \"0b3d5e1a-f84a-4813-83d1-91e6ca00f5bf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.173288 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-bc94x"] Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.176325 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0b3d5e1a-f84a-4813-83d1-91e6ca00f5bf-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t\" (UID: \"0b3d5e1a-f84a-4813-83d1-91e6ca00f5bf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.179541 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0b3d5e1a-f84a-4813-83d1-91e6ca00f5bf-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t\" (UID: \"0b3d5e1a-f84a-4813-83d1-91e6ca00f5bf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.181174 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-bc94x" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.185696 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.185926 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-vclds" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.188858 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-bc94x"] Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.203266 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5bzq8" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.274051 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6dsb\" (UniqueName: \"kubernetes.io/projected/77ae07e8-12da-477b-86be-05e24de9edf7-kube-api-access-s6dsb\") pod \"observability-operator-59bdc8b94-bc94x\" (UID: \"77ae07e8-12da-477b-86be-05e24de9edf7\") " pod="openshift-operators/observability-operator-59bdc8b94-bc94x" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.274124 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/72c8983c-93b8-44f7-bbe1-9e8d048f6b3f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh\" (UID: \"72c8983c-93b8-44f7-bbe1-9e8d048f6b3f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.274156 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/72c8983c-93b8-44f7-bbe1-9e8d048f6b3f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh\" (UID: \"72c8983c-93b8-44f7-bbe1-9e8d048f6b3f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.274238 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/77ae07e8-12da-477b-86be-05e24de9edf7-observability-operator-tls\") pod \"observability-operator-59bdc8b94-bc94x\" (UID: \"77ae07e8-12da-477b-86be-05e24de9edf7\") " pod="openshift-operators/observability-operator-59bdc8b94-bc94x" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.282611 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/72c8983c-93b8-44f7-bbe1-9e8d048f6b3f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh\" (UID: \"72c8983c-93b8-44f7-bbe1-9e8d048f6b3f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.282792 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/72c8983c-93b8-44f7-bbe1-9e8d048f6b3f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh\" (UID: \"72c8983c-93b8-44f7-bbe1-9e8d048f6b3f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.331494 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.334311 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-w65s2"] Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.344978 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-w65s2" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.349267 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-9s5g9" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.375694 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-w65s2"] Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.376968 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/77ae07e8-12da-477b-86be-05e24de9edf7-observability-operator-tls\") pod \"observability-operator-59bdc8b94-bc94x\" (UID: \"77ae07e8-12da-477b-86be-05e24de9edf7\") " pod="openshift-operators/observability-operator-59bdc8b94-bc94x" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.377149 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6dsb\" (UniqueName: \"kubernetes.io/projected/77ae07e8-12da-477b-86be-05e24de9edf7-kube-api-access-s6dsb\") pod \"observability-operator-59bdc8b94-bc94x\" (UID: \"77ae07e8-12da-477b-86be-05e24de9edf7\") " pod="openshift-operators/observability-operator-59bdc8b94-bc94x" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.383379 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/77ae07e8-12da-477b-86be-05e24de9edf7-observability-operator-tls\") pod \"observability-operator-59bdc8b94-bc94x\" (UID: \"77ae07e8-12da-477b-86be-05e24de9edf7\") " pod="openshift-operators/observability-operator-59bdc8b94-bc94x" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.397566 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.398797 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6dsb\" (UniqueName: \"kubernetes.io/projected/77ae07e8-12da-477b-86be-05e24de9edf7-kube-api-access-s6dsb\") pod \"observability-operator-59bdc8b94-bc94x\" (UID: \"77ae07e8-12da-477b-86be-05e24de9edf7\") " pod="openshift-operators/observability-operator-59bdc8b94-bc94x" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.463281 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8t2dg"] Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.480074 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkw94\" (UniqueName: \"kubernetes.io/projected/bca46932-b26c-40a7-a51f-9008f7e153ab-kube-api-access-xkw94\") pod \"perses-operator-5bf474d74f-w65s2\" (UID: \"bca46932-b26c-40a7-a51f-9008f7e153ab\") " pod="openshift-operators/perses-operator-5bf474d74f-w65s2" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.480151 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bca46932-b26c-40a7-a51f-9008f7e153ab-openshift-service-ca\") pod \"perses-operator-5bf474d74f-w65s2\" (UID: \"bca46932-b26c-40a7-a51f-9008f7e153ab\") " pod="openshift-operators/perses-operator-5bf474d74f-w65s2" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.561528 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-bc94x" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.583275 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkw94\" (UniqueName: \"kubernetes.io/projected/bca46932-b26c-40a7-a51f-9008f7e153ab-kube-api-access-xkw94\") pod \"perses-operator-5bf474d74f-w65s2\" (UID: \"bca46932-b26c-40a7-a51f-9008f7e153ab\") " pod="openshift-operators/perses-operator-5bf474d74f-w65s2" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.583385 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bca46932-b26c-40a7-a51f-9008f7e153ab-openshift-service-ca\") pod \"perses-operator-5bf474d74f-w65s2\" (UID: \"bca46932-b26c-40a7-a51f-9008f7e153ab\") " pod="openshift-operators/perses-operator-5bf474d74f-w65s2" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.584572 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bca46932-b26c-40a7-a51f-9008f7e153ab-openshift-service-ca\") pod \"perses-operator-5bf474d74f-w65s2\" (UID: \"bca46932-b26c-40a7-a51f-9008f7e153ab\") " pod="openshift-operators/perses-operator-5bf474d74f-w65s2" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.608685 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkw94\" (UniqueName: \"kubernetes.io/projected/bca46932-b26c-40a7-a51f-9008f7e153ab-kube-api-access-xkw94\") pod \"perses-operator-5bf474d74f-w65s2\" (UID: \"bca46932-b26c-40a7-a51f-9008f7e153ab\") " pod="openshift-operators/perses-operator-5bf474d74f-w65s2" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.666491 4945 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-w65s2" Jan 09 01:03:57 crc kubenswrapper[4945]: I0109 01:03:57.886132 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5bzq8"] Jan 09 01:03:57 crc kubenswrapper[4945]: W0109 01:03:57.939767 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe31d881_d239_450c_8a45_622a6645072f.slice/crio-a1f347b70e35325c60acaccef3dab397b2a3275682c57ff391ad514123344153 WatchSource:0}: Error finding container a1f347b70e35325c60acaccef3dab397b2a3275682c57ff391ad514123344153: Status 404 returned error can't find the container with id a1f347b70e35325c60acaccef3dab397b2a3275682c57ff391ad514123344153 Jan 09 01:03:58 crc kubenswrapper[4945]: I0109 01:03:58.009956 4945 generic.go:334] "Generic (PLEG): container finished" podID="61c160d2-8041-4fec-aa06-1d1fc66438cf" containerID="a4316600bda564bc10d77506bdb7ea699ac488a0f4f1dcb94242809860355927" exitCode=0 Jan 09 01:03:58 crc kubenswrapper[4945]: I0109 01:03:58.042595 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5bzq8" event={"ID":"be31d881-d239-450c-8a45-622a6645072f","Type":"ContainerStarted","Data":"a1f347b70e35325c60acaccef3dab397b2a3275682c57ff391ad514123344153"} Jan 09 01:03:58 crc kubenswrapper[4945]: I0109 01:03:58.042649 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t"] Jan 09 01:03:58 crc kubenswrapper[4945]: I0109 01:03:58.042666 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8t2dg" event={"ID":"61c160d2-8041-4fec-aa06-1d1fc66438cf","Type":"ContainerDied","Data":"a4316600bda564bc10d77506bdb7ea699ac488a0f4f1dcb94242809860355927"} Jan 09 01:03:58 crc kubenswrapper[4945]: I0109 01:03:58.042679 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8t2dg" event={"ID":"61c160d2-8041-4fec-aa06-1d1fc66438cf","Type":"ContainerStarted","Data":"c8cda3f2671e88c995d471f90ca3a4cb569a853ce127338c4ee4b2df86634c58"} Jan 09 01:03:58 crc kubenswrapper[4945]: I0109 01:03:58.292586 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh"] Jan 09 01:03:58 crc kubenswrapper[4945]: I0109 01:03:58.332780 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-bc94x"] Jan 09 01:03:58 crc kubenswrapper[4945]: W0109 01:03:58.336950 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77ae07e8_12da_477b_86be_05e24de9edf7.slice/crio-6092d86caa341e82654842296e22f744951a5cd179d1423947c05f457536be5e WatchSource:0}: Error finding container 6092d86caa341e82654842296e22f744951a5cd179d1423947c05f457536be5e: Status 404 returned error can't find the container with id 6092d86caa341e82654842296e22f744951a5cd179d1423947c05f457536be5e Jan 09 01:03:58 crc kubenswrapper[4945]: I0109 01:03:58.497544 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-w65s2"] Jan 09 01:03:59 crc kubenswrapper[4945]: I0109 01:03:59.025710 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t" event={"ID":"0b3d5e1a-f84a-4813-83d1-91e6ca00f5bf","Type":"ContainerStarted","Data":"552fae93aa0e3e5a9eaaaa289849130ba07fe3fd350101a80f6cbbd4644c3d3e"} Jan 09 01:03:59 crc kubenswrapper[4945]: I0109 01:03:59.031781 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-w65s2" event={"ID":"bca46932-b26c-40a7-a51f-9008f7e153ab","Type":"ContainerStarted","Data":"e8ed420c804a7c422763f773bfb3bb9e00dea40c291bf78e87b1618ef42a512d"} Jan 09 01:03:59 crc kubenswrapper[4945]: I0109 01:03:59.035651 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh" event={"ID":"72c8983c-93b8-44f7-bbe1-9e8d048f6b3f","Type":"ContainerStarted","Data":"e345d4c89279f59a0814fe67e4066bb63b423c1753c48ec8fb5fde8c33dc57ff"} Jan 09 01:03:59 crc kubenswrapper[4945]: I0109 01:03:59.040853 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-bc94x" event={"ID":"77ae07e8-12da-477b-86be-05e24de9edf7","Type":"ContainerStarted","Data":"6092d86caa341e82654842296e22f744951a5cd179d1423947c05f457536be5e"} Jan 09 01:04:01 crc kubenswrapper[4945]: I0109 01:04:01.002967 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:04:01 crc kubenswrapper[4945]: E0109 01:04:01.003725 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:04:05 crc kubenswrapper[4945]: I0109 01:04:05.034662 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2f1d-account-create-update-vm58z"] Jan 09 01:04:05 crc kubenswrapper[4945]: I0109 01:04:05.047147 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-6sr9b"] Jan 09 01:04:05 crc kubenswrapper[4945]: I0109 01:04:05.058067 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-6sr9b"] Jan 09 01:04:05 crc kubenswrapper[4945]: I0109 01:04:05.068804 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-2f1d-account-create-update-vm58z"] Jan 09 01:04:05 crc kubenswrapper[4945]: I0109 01:04:05.901409 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mm8rl"] Jan 09 01:04:05 crc kubenswrapper[4945]: I0109 01:04:05.903778 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mm8rl" Jan 09 01:04:05 crc kubenswrapper[4945]: I0109 01:04:05.912693 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm8rl"] Jan 09 01:04:06 crc kubenswrapper[4945]: I0109 01:04:06.012782 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="777ee79a-fbcc-49ed-8e95-287c51727cee" path="/var/lib/kubelet/pods/777ee79a-fbcc-49ed-8e95-287c51727cee/volumes" Jan 09 01:04:06 crc kubenswrapper[4945]: I0109 01:04:06.013828 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa3cb9ec-890b-44a1-8ef2-149b421860cd" path="/var/lib/kubelet/pods/fa3cb9ec-890b-44a1-8ef2-149b421860cd/volumes" Jan 09 01:04:06 crc kubenswrapper[4945]: I0109 01:04:06.014438 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f00338-3a6d-49f9-9d51-5137156838e3-catalog-content\") pod \"redhat-marketplace-mm8rl\" (UID: \"e2f00338-3a6d-49f9-9d51-5137156838e3\") " pod="openshift-marketplace/redhat-marketplace-mm8rl" Jan 09 01:04:06 crc kubenswrapper[4945]: I0109 01:04:06.014509 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmh68\" (UniqueName: \"kubernetes.io/projected/e2f00338-3a6d-49f9-9d51-5137156838e3-kube-api-access-zmh68\") pod \"redhat-marketplace-mm8rl\" (UID: \"e2f00338-3a6d-49f9-9d51-5137156838e3\") " pod="openshift-marketplace/redhat-marketplace-mm8rl" Jan 09 01:04:06 crc kubenswrapper[4945]: I0109 01:04:06.014681 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f00338-3a6d-49f9-9d51-5137156838e3-utilities\") pod \"redhat-marketplace-mm8rl\" (UID: \"e2f00338-3a6d-49f9-9d51-5137156838e3\") " pod="openshift-marketplace/redhat-marketplace-mm8rl" Jan 09 01:04:06 crc kubenswrapper[4945]: I0109 01:04:06.115983 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f00338-3a6d-49f9-9d51-5137156838e3-catalog-content\") pod \"redhat-marketplace-mm8rl\" (UID: \"e2f00338-3a6d-49f9-9d51-5137156838e3\") " pod="openshift-marketplace/redhat-marketplace-mm8rl" Jan 09 01:04:06 crc kubenswrapper[4945]: I0109 01:04:06.116067 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmh68\" (UniqueName: \"kubernetes.io/projected/e2f00338-3a6d-49f9-9d51-5137156838e3-kube-api-access-zmh68\") pod \"redhat-marketplace-mm8rl\" (UID: \"e2f00338-3a6d-49f9-9d51-5137156838e3\") " pod="openshift-marketplace/redhat-marketplace-mm8rl" Jan 09 01:04:06 crc kubenswrapper[4945]: I0109 01:04:06.116250 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f00338-3a6d-49f9-9d51-5137156838e3-utilities\") pod \"redhat-marketplace-mm8rl\" (UID: \"e2f00338-3a6d-49f9-9d51-5137156838e3\") " pod="openshift-marketplace/redhat-marketplace-mm8rl" Jan 09 01:04:06 crc kubenswrapper[4945]: I0109 01:04:06.116825 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f00338-3a6d-49f9-9d51-5137156838e3-utilities\") pod \"redhat-marketplace-mm8rl\" (UID: \"e2f00338-3a6d-49f9-9d51-5137156838e3\") " pod="openshift-marketplace/redhat-marketplace-mm8rl" 
Jan 09 01:04:06 crc kubenswrapper[4945]: I0109 01:04:06.117456 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f00338-3a6d-49f9-9d51-5137156838e3-catalog-content\") pod \"redhat-marketplace-mm8rl\" (UID: \"e2f00338-3a6d-49f9-9d51-5137156838e3\") " pod="openshift-marketplace/redhat-marketplace-mm8rl"
Jan 09 01:04:06 crc kubenswrapper[4945]: I0109 01:04:06.137607 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmh68\" (UniqueName: \"kubernetes.io/projected/e2f00338-3a6d-49f9-9d51-5137156838e3-kube-api-access-zmh68\") pod \"redhat-marketplace-mm8rl\" (UID: \"e2f00338-3a6d-49f9-9d51-5137156838e3\") " pod="openshift-marketplace/redhat-marketplace-mm8rl"
Jan 09 01:04:06 crc kubenswrapper[4945]: I0109 01:04:06.226833 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mm8rl"
Jan 09 01:04:10 crc kubenswrapper[4945]: I0109 01:04:10.042683 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-n99ps"]
Jan 09 01:04:10 crc kubenswrapper[4945]: I0109 01:04:10.056839 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-n99ps"]
Jan 09 01:04:12 crc kubenswrapper[4945]: I0109 01:04:12.003569 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31"
Jan 09 01:04:12 crc kubenswrapper[4945]: E0109 01:04:12.005843 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:04:12 crc kubenswrapper[4945]: I0109 01:04:12.025139 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="604969b3-7c1f-429e-ae71-a5ad0c8b9729" path="/var/lib/kubelet/pods/604969b3-7c1f-429e-ae71-a5ad0c8b9729/volumes"
Jan 09 01:04:12 crc kubenswrapper[4945]: I0109 01:04:12.360161 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm8rl"]
Jan 09 01:04:12 crc kubenswrapper[4945]: W0109 01:04:12.371826 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2f00338_3a6d_49f9_9d51_5137156838e3.slice/crio-5bafb91d619248f7b927f0ba3ed1e9091c4bd1883813b76354e2d2bc8e6c839e WatchSource:0}: Error finding container 5bafb91d619248f7b927f0ba3ed1e9091c4bd1883813b76354e2d2bc8e6c839e: Status 404 returned error can't find the container with id 5bafb91d619248f7b927f0ba3ed1e9091c4bd1883813b76354e2d2bc8e6c839e
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.237271 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh" event={"ID":"72c8983c-93b8-44f7-bbe1-9e8d048f6b3f","Type":"ContainerStarted","Data":"fa40a49981495d6091e29df71357629b470f08d37adb706b6055fbf205366663"}
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.240291 4945 generic.go:334] "Generic (PLEG): container finished" podID="e2f00338-3a6d-49f9-9d51-5137156838e3" containerID="1b0c43f0c70851341aa8daae510867bbebcc45e3f331f8ae4f6e0fc3475f1252" exitCode=0
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.240376 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm8rl" event={"ID":"e2f00338-3a6d-49f9-9d51-5137156838e3","Type":"ContainerDied","Data":"1b0c43f0c70851341aa8daae510867bbebcc45e3f331f8ae4f6e0fc3475f1252"}
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.240404 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm8rl" event={"ID":"e2f00338-3a6d-49f9-9d51-5137156838e3","Type":"ContainerStarted","Data":"5bafb91d619248f7b927f0ba3ed1e9091c4bd1883813b76354e2d2bc8e6c839e"}
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.245853 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-bc94x" event={"ID":"77ae07e8-12da-477b-86be-05e24de9edf7","Type":"ContainerStarted","Data":"8ff841ad115a6ccdf386f7f4e8aa48d9430dec06db1df400706d3961bb268213"}
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.246576 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-bc94x"
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.250934 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-bc94x"
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.252406 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5bzq8" event={"ID":"be31d881-d239-450c-8a45-622a6645072f","Type":"ContainerStarted","Data":"042cc3f40a1d620225d47720f2909e6cebf269b81a047b3eb6e3b195c31d089c"}
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.255438 4945 generic.go:334] "Generic (PLEG): container finished" podID="61c160d2-8041-4fec-aa06-1d1fc66438cf" containerID="ff205755517c685c895338c8e35ce0a57138619cda60513022be3a6ed4947285" exitCode=0
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.255509 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8t2dg" event={"ID":"61c160d2-8041-4fec-aa06-1d1fc66438cf","Type":"ContainerDied","Data":"ff205755517c685c895338c8e35ce0a57138619cda60513022be3a6ed4947285"}
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.257938 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t" event={"ID":"0b3d5e1a-f84a-4813-83d1-91e6ca00f5bf","Type":"ContainerStarted","Data":"ac4b86d176c6da298a703800c8b75273a39321b81334390d111465b4c4bd44f2"}
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.260457 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-w65s2" event={"ID":"bca46932-b26c-40a7-a51f-9008f7e153ab","Type":"ContainerStarted","Data":"055a1aeb5ea763c5cb4a9f8905d5fb0d83d1edd391e38e4efc64e60558785196"}
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.261192 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-w65s2"
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.275210 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh" podStartSLOduration=3.884524196 podStartE2EDuration="17.275192068s" podCreationTimestamp="2026-01-09 01:03:56 +0000 UTC" firstStartedPulling="2026-01-09 01:03:58.300282611 +0000 UTC m=+6508.611441557" lastFinishedPulling="2026-01-09 01:04:11.690950473 +0000 UTC m=+6522.002109429" observedRunningTime="2026-01-09 01:04:13.272928782 +0000 UTC m=+6523.584087748" watchObservedRunningTime="2026-01-09 01:04:13.275192068 +0000 UTC m=+6523.586351014"
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.307314 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5bzq8" podStartSLOduration=3.513128417 podStartE2EDuration="17.307292398s" podCreationTimestamp="2026-01-09 01:03:56 +0000 UTC" firstStartedPulling="2026-01-09 01:03:57.952723439 +0000 UTC m=+6508.263882385" lastFinishedPulling="2026-01-09 01:04:11.74688742 +0000 UTC m=+6522.058046366" observedRunningTime="2026-01-09 01:04:13.301457014 +0000 UTC m=+6523.612615980" watchObservedRunningTime="2026-01-09 01:04:13.307292398 +0000 UTC m=+6523.618451344"
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.341688 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-w65s2" podStartSLOduration=3.100786768 podStartE2EDuration="16.341667034s" podCreationTimestamp="2026-01-09 01:03:57 +0000 UTC" firstStartedPulling="2026-01-09 01:03:58.506167968 +0000 UTC m=+6508.817326904" lastFinishedPulling="2026-01-09 01:04:11.747048224 +0000 UTC m=+6522.058207170" observedRunningTime="2026-01-09 01:04:13.326583663 +0000 UTC m=+6523.637742619" watchObservedRunningTime="2026-01-09 01:04:13.341667034 +0000 UTC m=+6523.652825980"
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.507949 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t" podStartSLOduration=3.883737906 podStartE2EDuration="17.507917945s" podCreationTimestamp="2026-01-09 01:03:56 +0000 UTC" firstStartedPulling="2026-01-09 01:03:58.066624451 +0000 UTC m=+6508.377783397" lastFinishedPulling="2026-01-09 01:04:11.6908045 +0000 UTC m=+6522.001963436" observedRunningTime="2026-01-09 01:04:13.468577967 +0000 UTC m=+6523.779736913" watchObservedRunningTime="2026-01-09 01:04:13.507917945 +0000 UTC m=+6523.819076891"
Jan 09 01:04:13 crc kubenswrapper[4945]: I0109 01:04:13.600551 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-bc94x" podStartSLOduration=3.079218066 podStartE2EDuration="16.600526423s" podCreationTimestamp="2026-01-09 01:03:57 +0000 UTC" firstStartedPulling="2026-01-09 01:03:58.338748358 +0000 UTC m=+6508.649907304" lastFinishedPulling="2026-01-09 01:04:11.860056715 +0000 UTC m=+6522.171215661" observedRunningTime="2026-01-09 01:04:13.569631213 +0000 UTC m=+6523.880790159" watchObservedRunningTime="2026-01-09 01:04:13.600526423 +0000 UTC m=+6523.911685369"
Jan 09 01:04:15 crc kubenswrapper[4945]: I0109 01:04:15.281819 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8t2dg" event={"ID":"61c160d2-8041-4fec-aa06-1d1fc66438cf","Type":"ContainerStarted","Data":"89c932a02f33809bf7e735049a86ef9b2bcb7659f08c53402649e43eb30dcc44"}
Jan 09 01:04:15 crc kubenswrapper[4945]: I0109 01:04:15.286141 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm8rl" event={"ID":"e2f00338-3a6d-49f9-9d51-5137156838e3","Type":"ContainerStarted","Data":"9841b177f6690795cdc277b8f99a05b5036fffc2d9e07f6198395e56429fef31"}
Jan 09 01:04:15 crc kubenswrapper[4945]: I0109 01:04:15.297212 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8t2dg" podStartSLOduration=2.8827001340000002 podStartE2EDuration="19.297188974s" podCreationTimestamp="2026-01-09 01:03:56 +0000 UTC" firstStartedPulling="2026-01-09 01:03:58.024934086 +0000 UTC m=+6508.336093032" lastFinishedPulling="2026-01-09 01:04:14.439422926 +0000 UTC m=+6524.750581872" observedRunningTime="2026-01-09 01:04:15.295967534 +0000 UTC m=+6525.607126490" watchObservedRunningTime="2026-01-09 01:04:15.297188974 +0000 UTC m=+6525.608347920"
Jan 09 01:04:16 crc kubenswrapper[4945]: I0109 01:04:16.299425 4945 generic.go:334] "Generic (PLEG): container finished" podID="e2f00338-3a6d-49f9-9d51-5137156838e3" containerID="9841b177f6690795cdc277b8f99a05b5036fffc2d9e07f6198395e56429fef31" exitCode=0
Jan 09 01:04:16 crc kubenswrapper[4945]: I0109 01:04:16.299520 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm8rl" event={"ID":"e2f00338-3a6d-49f9-9d51-5137156838e3","Type":"ContainerDied","Data":"9841b177f6690795cdc277b8f99a05b5036fffc2d9e07f6198395e56429fef31"}
Jan 09 01:04:16 crc kubenswrapper[4945]: I0109 01:04:16.760751 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8t2dg"
Jan 09 01:04:16 crc kubenswrapper[4945]: I0109 01:04:16.760828 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8t2dg"
Jan 09 01:04:17 crc kubenswrapper[4945]: I0109 01:04:17.311287 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm8rl" event={"ID":"e2f00338-3a6d-49f9-9d51-5137156838e3","Type":"ContainerStarted","Data":"fc9aab54542b881d3f137a0c316de59b0ae67fe7c1d646c57d1194dc2e6ffcd4"}
Jan 09 01:04:17 crc kubenswrapper[4945]: I0109 01:04:17.332934 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mm8rl" podStartSLOduration=8.806611695 podStartE2EDuration="12.332912378s" podCreationTimestamp="2026-01-09 01:04:05 +0000 UTC" firstStartedPulling="2026-01-09 01:04:13.242620537 +0000 UTC m=+6523.553779483" lastFinishedPulling="2026-01-09 01:04:16.76892122 +0000 UTC m=+6527.080080166" observedRunningTime="2026-01-09 01:04:17.330797116 +0000 UTC m=+6527.641956062" watchObservedRunningTime="2026-01-09 01:04:17.332912378 +0000 UTC m=+6527.644071324"
Jan 09 01:04:17 crc kubenswrapper[4945]: I0109 01:04:17.669977 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-w65s2"
Jan 09 01:04:17 crc kubenswrapper[4945]: I0109 01:04:17.817580 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8t2dg" podUID="61c160d2-8041-4fec-aa06-1d1fc66438cf" containerName="registry-server" probeResult="failure" output=<
Jan 09 01:04:17 crc kubenswrapper[4945]: timeout: failed to connect service ":50051" within 1s
Jan 09 01:04:17 crc kubenswrapper[4945]: >
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.321073 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.321593 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="90ccb708-384f-414c-b03b-676f19656e35" containerName="openstackclient" containerID="cri-o://09c862520ef1555654e18fac009e4f5512b9a7c1d5dd268896f167c4e7e2e45e" gracePeriod=2
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.338417 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.375623 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 09 01:04:20 crc kubenswrapper[4945]: E0109 01:04:20.396259 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ccb708-384f-414c-b03b-676f19656e35" containerName="openstackclient"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.396299 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ccb708-384f-414c-b03b-676f19656e35" containerName="openstackclient"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.396613 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="90ccb708-384f-414c-b03b-676f19656e35" containerName="openstackclient"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.397484 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.402888 4945 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="90ccb708-384f-414c-b03b-676f19656e35" podUID="f161a9b2-86ed-4cd1-9def-3a9c8736b302"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.426206 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.438445 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f161a9b2-86ed-4cd1-9def-3a9c8736b302-openstack-config\") pod \"openstackclient\" (UID: \"f161a9b2-86ed-4cd1-9def-3a9c8736b302\") " pod="openstack/openstackclient"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.438754 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxvtb\" (UniqueName: \"kubernetes.io/projected/f161a9b2-86ed-4cd1-9def-3a9c8736b302-kube-api-access-fxvtb\") pod \"openstackclient\" (UID: \"f161a9b2-86ed-4cd1-9def-3a9c8736b302\") " pod="openstack/openstackclient"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.438880 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f161a9b2-86ed-4cd1-9def-3a9c8736b302-openstack-config-secret\") pod \"openstackclient\" (UID: \"f161a9b2-86ed-4cd1-9def-3a9c8736b302\") " pod="openstack/openstackclient"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.482381 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.483965 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.498184 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-s8gvj"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.537574 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.542949 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxvtb\" (UniqueName: \"kubernetes.io/projected/f161a9b2-86ed-4cd1-9def-3a9c8736b302-kube-api-access-fxvtb\") pod \"openstackclient\" (UID: \"f161a9b2-86ed-4cd1-9def-3a9c8736b302\") " pod="openstack/openstackclient"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.543043 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f161a9b2-86ed-4cd1-9def-3a9c8736b302-openstack-config-secret\") pod \"openstackclient\" (UID: \"f161a9b2-86ed-4cd1-9def-3a9c8736b302\") " pod="openstack/openstackclient"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.543125 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh8r4\" (UniqueName: \"kubernetes.io/projected/77b91420-00bf-4b20-9999-52325501b237-kube-api-access-xh8r4\") pod \"kube-state-metrics-0\" (UID: \"77b91420-00bf-4b20-9999-52325501b237\") " pod="openstack/kube-state-metrics-0"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.543187 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f161a9b2-86ed-4cd1-9def-3a9c8736b302-openstack-config\") pod \"openstackclient\" (UID: \"f161a9b2-86ed-4cd1-9def-3a9c8736b302\") " pod="openstack/openstackclient"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.544062 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f161a9b2-86ed-4cd1-9def-3a9c8736b302-openstack-config\") pod \"openstackclient\" (UID: \"f161a9b2-86ed-4cd1-9def-3a9c8736b302\") " pod="openstack/openstackclient"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.561497 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f161a9b2-86ed-4cd1-9def-3a9c8736b302-openstack-config-secret\") pod \"openstackclient\" (UID: \"f161a9b2-86ed-4cd1-9def-3a9c8736b302\") " pod="openstack/openstackclient"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.596172 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxvtb\" (UniqueName: \"kubernetes.io/projected/f161a9b2-86ed-4cd1-9def-3a9c8736b302-kube-api-access-fxvtb\") pod \"openstackclient\" (UID: \"f161a9b2-86ed-4cd1-9def-3a9c8736b302\") " pod="openstack/openstackclient"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.651828 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh8r4\" (UniqueName: \"kubernetes.io/projected/77b91420-00bf-4b20-9999-52325501b237-kube-api-access-xh8r4\") pod \"kube-state-metrics-0\" (UID: \"77b91420-00bf-4b20-9999-52325501b237\") " pod="openstack/kube-state-metrics-0"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.727585 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh8r4\" (UniqueName: \"kubernetes.io/projected/77b91420-00bf-4b20-9999-52325501b237-kube-api-access-xh8r4\") pod \"kube-state-metrics-0\" (UID: \"77b91420-00bf-4b20-9999-52325501b237\") " pod="openstack/kube-state-metrics-0"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.738389 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 09 01:04:20 crc kubenswrapper[4945]: I0109 01:04:20.814475 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.372781 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.412013 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.412122 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0"
Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.421027 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config"
Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.421238 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config"
Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.421319 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0"
Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.421380 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated"
Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.422786 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-bp48m"
Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.578192 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/515f90ea-9ad3-4d93-82c0-ccb39b893643-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0"
Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.578524 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/515f90ea-9ad3-4d93-82c0-ccb39b893643-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0"
Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.578583 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/515f90ea-9ad3-4d93-82c0-ccb39b893643-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0"
Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.578606 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/515f90ea-9ad3-4d93-82c0-ccb39b893643-alertmanager-metric-storage-db\") pod
\"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.578663 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv7wt\" (UniqueName: \"kubernetes.io/projected/515f90ea-9ad3-4d93-82c0-ccb39b893643-kube-api-access-fv7wt\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.578689 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/515f90ea-9ad3-4d93-82c0-ccb39b893643-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.578723 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/515f90ea-9ad3-4d93-82c0-ccb39b893643-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.686214 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv7wt\" (UniqueName: \"kubernetes.io/projected/515f90ea-9ad3-4d93-82c0-ccb39b893643-kube-api-access-fv7wt\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.686276 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/515f90ea-9ad3-4d93-82c0-ccb39b893643-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.686298 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/515f90ea-9ad3-4d93-82c0-ccb39b893643-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.686403 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/515f90ea-9ad3-4d93-82c0-ccb39b893643-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.686437 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/515f90ea-9ad3-4d93-82c0-ccb39b893643-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.686468 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/515f90ea-9ad3-4d93-82c0-ccb39b893643-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.686490 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/515f90ea-9ad3-4d93-82c0-ccb39b893643-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.687285 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/515f90ea-9ad3-4d93-82c0-ccb39b893643-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.718847 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/515f90ea-9ad3-4d93-82c0-ccb39b893643-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.718984 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/515f90ea-9ad3-4d93-82c0-ccb39b893643-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.720949 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/515f90ea-9ad3-4d93-82c0-ccb39b893643-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.727800 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/515f90ea-9ad3-4d93-82c0-ccb39b893643-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.736270 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/515f90ea-9ad3-4d93-82c0-ccb39b893643-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.736616 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv7wt\" (UniqueName: \"kubernetes.io/projected/515f90ea-9ad3-4d93-82c0-ccb39b893643-kube-api-access-fv7wt\") pod \"alertmanager-metric-storage-0\" (UID: \"515f90ea-9ad3-4d93-82c0-ccb39b893643\") " pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.738970 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.782907 
4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.868455 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.872290 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.878918 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.879184 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.879200 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.879324 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.879440 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.879552 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.879587 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-j2qhp" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.885393 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 09 01:04:21 crc kubenswrapper[4945]: I0109 01:04:21.927058 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.002471 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.006778 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-27cad2ec-fa77-4116-8309-4d1504445bb9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27cad2ec-fa77-4116-8309-4d1504445bb9\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.006831 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/124ecf7f-2df8-4d30-82e9-b393785c7786-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.006874 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmmcj\" (UniqueName: \"kubernetes.io/projected/124ecf7f-2df8-4d30-82e9-b393785c7786-kube-api-access-hmmcj\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " 
pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.006893 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/124ecf7f-2df8-4d30-82e9-b393785c7786-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.006931 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/124ecf7f-2df8-4d30-82e9-b393785c7786-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.006949 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/124ecf7f-2df8-4d30-82e9-b393785c7786-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.012065 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/124ecf7f-2df8-4d30-82e9-b393785c7786-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.012248 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/124ecf7f-2df8-4d30-82e9-b393785c7786-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.012299 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/124ecf7f-2df8-4d30-82e9-b393785c7786-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.012333 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/124ecf7f-2df8-4d30-82e9-b393785c7786-config\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.120290 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/124ecf7f-2df8-4d30-82e9-b393785c7786-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.120366 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/124ecf7f-2df8-4d30-82e9-b393785c7786-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.120460 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/124ecf7f-2df8-4d30-82e9-b393785c7786-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.120532 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/124ecf7f-2df8-4d30-82e9-b393785c7786-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.120558 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/124ecf7f-2df8-4d30-82e9-b393785c7786-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.120588 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/124ecf7f-2df8-4d30-82e9-b393785c7786-config\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.120670 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-27cad2ec-fa77-4116-8309-4d1504445bb9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27cad2ec-fa77-4116-8309-4d1504445bb9\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.120719 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/124ecf7f-2df8-4d30-82e9-b393785c7786-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.120794 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmmcj\" (UniqueName: \"kubernetes.io/projected/124ecf7f-2df8-4d30-82e9-b393785c7786-kube-api-access-hmmcj\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.120814 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/124ecf7f-2df8-4d30-82e9-b393785c7786-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0" Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.124521 4945 
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.139779 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/124ecf7f-2df8-4d30-82e9-b393785c7786-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0"
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.144736 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/124ecf7f-2df8-4d30-82e9-b393785c7786-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0"
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.145248 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/124ecf7f-2df8-4d30-82e9-b393785c7786-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0"
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.146073 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/124ecf7f-2df8-4d30-82e9-b393785c7786-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0"
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.149210 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/124ecf7f-2df8-4d30-82e9-b393785c7786-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0"
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.153626 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/124ecf7f-2df8-4d30-82e9-b393785c7786-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0"
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.159164 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/124ecf7f-2df8-4d30-82e9-b393785c7786-config\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0"
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.202831 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmmcj\" (UniqueName: \"kubernetes.io/projected/124ecf7f-2df8-4d30-82e9-b393785c7786-kube-api-access-hmmcj\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0"
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.203336 4945 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.203377 4945 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-27cad2ec-fa77-4116-8309-4d1504445bb9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27cad2ec-fa77-4116-8309-4d1504445bb9\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d2d9197e56e739181f8e8a10f94a1a0d9a3273f44e6d5eb217afb09626ac6e2f/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.257110 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-27cad2ec-fa77-4116-8309-4d1504445bb9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-27cad2ec-fa77-4116-8309-4d1504445bb9\") pod \"prometheus-metric-storage-0\" (UID: \"124ecf7f-2df8-4d30-82e9-b393785c7786\") " pod="openstack/prometheus-metric-storage-0"
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.407489 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"77b91420-00bf-4b20-9999-52325501b237","Type":"ContainerStarted","Data":"700cc64bb22928a42d6c782954a728348248de41169989b438ede937ff48ad74"}
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.420601 4945 generic.go:334] "Generic (PLEG): container finished" podID="90ccb708-384f-414c-b03b-676f19656e35" containerID="09c862520ef1555654e18fac009e4f5512b9a7c1d5dd268896f167c4e7e2e45e" exitCode=137
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.423186 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f161a9b2-86ed-4cd1-9def-3a9c8736b302","Type":"ContainerStarted","Data":"154391cf921b10f2a26a9d12b215c4b2a7ed92fb46241ba5d637ad79e77813e4"}
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.423224 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f161a9b2-86ed-4cd1-9def-3a9c8736b302","Type":"ContainerStarted","Data":"1975ef91fdf0a4dd9d351cfde4090ac86a2696e04a78539dad605e4c064f0c03"}
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.455497 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.455471992 podStartE2EDuration="2.455471992s" podCreationTimestamp="2026-01-09 01:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 01:04:22.444281557 +0000 UTC m=+6532.755440503" watchObservedRunningTime="2026-01-09 01:04:22.455471992 +0000 UTC m=+6532.766630938"
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.515232 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Jan 09 01:04:22 crc kubenswrapper[4945]: I0109 01:04:22.539190 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.086770 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
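The exitCode=137 reported above for the old openstackclient container is the standard wait-status encoding of a signal death, 128 + 9 (SIGKILL), which is exactly what the gracePeriod=2 kill issued at 01:04:20 should produce once the grace period expires. A trivial decoder sketch:

```python
# Sketch: decode wait-status style container exit codes; values above 128
# mean "killed by signal (code - 128)", so 137 = 128 + SIGKILL, matching
# the gracePeriod=2 kill sequence in the records above.
import signal

def describe(exit_code: int) -> str:
    if exit_code > 128:
        return f"terminated by {signal.Signals(exit_code - 128).name}"
    return f"exited with status {exit_code}"

print(describe(137))  # terminated by SIGKILL
print(describe(0))    # exited with status 0
```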
Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.163756 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90ccb708-384f-414c-b03b-676f19656e35-openstack-config-secret\") pod \"90ccb708-384f-414c-b03b-676f19656e35\" (UID: \"90ccb708-384f-414c-b03b-676f19656e35\") "
Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.163887 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m52k9\" (UniqueName: \"kubernetes.io/projected/90ccb708-384f-414c-b03b-676f19656e35-kube-api-access-m52k9\") pod \"90ccb708-384f-414c-b03b-676f19656e35\" (UID: \"90ccb708-384f-414c-b03b-676f19656e35\") "
Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.164041 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90ccb708-384f-414c-b03b-676f19656e35-openstack-config\") pod \"90ccb708-384f-414c-b03b-676f19656e35\" (UID: \"90ccb708-384f-414c-b03b-676f19656e35\") "
Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.170357 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ccb708-384f-414c-b03b-676f19656e35-kube-api-access-m52k9" (OuterVolumeSpecName: "kube-api-access-m52k9") pod "90ccb708-384f-414c-b03b-676f19656e35" (UID: "90ccb708-384f-414c-b03b-676f19656e35"). InnerVolumeSpecName "kube-api-access-m52k9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.205574 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90ccb708-384f-414c-b03b-676f19656e35-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "90ccb708-384f-414c-b03b-676f19656e35" (UID: "90ccb708-384f-414c-b03b-676f19656e35"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.216136 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ccb708-384f-414c-b03b-676f19656e35-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "90ccb708-384f-414c-b03b-676f19656e35" (UID: "90ccb708-384f-414c-b03b-676f19656e35"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.259079 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.266576 4945 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/90ccb708-384f-414c-b03b-676f19656e35-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.266603 4945 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/90ccb708-384f-414c-b03b-676f19656e35-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.266613 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m52k9\" (UniqueName: \"kubernetes.io/projected/90ccb708-384f-414c-b03b-676f19656e35-kube-api-access-m52k9\") on node \"crc\" DevicePath \"\"" Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.449973 4945 scope.go:117] "RemoveContainer" containerID="09c862520ef1555654e18fac009e4f5512b9a7c1d5dd268896f167c4e7e2e45e" Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.451221 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.457962 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"77b91420-00bf-4b20-9999-52325501b237","Type":"ContainerStarted","Data":"844df87fba1e3819ac228ba7d1f3f8f258b106988ef8797ace5fff1369d22eb8"} Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.458369 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.462839 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"515f90ea-9ad3-4d93-82c0-ccb39b893643","Type":"ContainerStarted","Data":"0110eab9658021315bd96e3ea652113f4371cbeac2b43e2fa1cba2a9d4acc7ef"} Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.467884 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"124ecf7f-2df8-4d30-82e9-b393785c7786","Type":"ContainerStarted","Data":"19f490f2843ce7820e1f929dd0560cfefe6fc64ef6c6edc6989d221dca7326ca"} Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.477565 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.889521743 podStartE2EDuration="3.477542783s" podCreationTimestamp="2026-01-09 01:04:20 +0000 UTC" firstStartedPulling="2026-01-09 01:04:21.730879392 +0000 UTC m=+6532.042038338" lastFinishedPulling="2026-01-09 01:04:22.318900432 +0000 UTC m=+6532.630059378" observedRunningTime="2026-01-09 01:04:23.473845932 +0000 UTC m=+6533.785004898" watchObservedRunningTime="2026-01-09 01:04:23.477542783 +0000 UTC m=+6533.788701729" Jan 09 01:04:23 crc kubenswrapper[4945]: I0109 01:04:23.482264 4945 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="90ccb708-384f-414c-b03b-676f19656e35" podUID="f161a9b2-86ed-4cd1-9def-3a9c8736b302" Jan 09 01:04:24 crc kubenswrapper[4945]: I0109 01:04:24.001547 4945 scope.go:117] "RemoveContainer" 
containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:04:24 crc kubenswrapper[4945]: E0109 01:04:24.002242 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:04:24 crc kubenswrapper[4945]: I0109 01:04:24.014814 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90ccb708-384f-414c-b03b-676f19656e35" path="/var/lib/kubelet/pods/90ccb708-384f-414c-b03b-676f19656e35/volumes" Jan 09 01:04:26 crc kubenswrapper[4945]: I0109 01:04:26.227739 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mm8rl" Jan 09 01:04:26 crc kubenswrapper[4945]: I0109 01:04:26.228395 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mm8rl" Jan 09 01:04:26 crc kubenswrapper[4945]: I0109 01:04:26.299439 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mm8rl" Jan 09 01:04:26 crc kubenswrapper[4945]: I0109 01:04:26.554517 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mm8rl" Jan 09 01:04:27 crc kubenswrapper[4945]: I0109 01:04:27.534997 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm8rl"] Jan 09 01:04:27 crc kubenswrapper[4945]: I0109 01:04:27.799271 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8t2dg" Jan 09 01:04:27 crc kubenswrapper[4945]: I0109 01:04:27.853801 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8t2dg" Jan 09 01:04:28 crc kubenswrapper[4945]: I0109 01:04:28.524750 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"124ecf7f-2df8-4d30-82e9-b393785c7786","Type":"ContainerStarted","Data":"007b90e59511c7f59281e813cf1b985d7eef7c59a951fe252cad5bd65262f625"} Jan 09 01:04:28 crc kubenswrapper[4945]: I0109 01:04:28.526978 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"515f90ea-9ad3-4d93-82c0-ccb39b893643","Type":"ContainerStarted","Data":"e6988e0a2fa47da6cd0d44877e7fa49f3d14431943e20bb01695f8f5257af1bb"} Jan 09 01:04:28 crc kubenswrapper[4945]: I0109 01:04:28.527537 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mm8rl" podUID="e2f00338-3a6d-49f9-9d51-5137156838e3" containerName="registry-server" containerID="cri-o://fc9aab54542b881d3f137a0c316de59b0ae67fe7c1d646c57d1194dc2e6ffcd4" gracePeriod=2 Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.159074 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mm8rl" Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.289826 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmh68\" (UniqueName: \"kubernetes.io/projected/e2f00338-3a6d-49f9-9d51-5137156838e3-kube-api-access-zmh68\") pod \"e2f00338-3a6d-49f9-9d51-5137156838e3\" (UID: \"e2f00338-3a6d-49f9-9d51-5137156838e3\") " Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.289982 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f00338-3a6d-49f9-9d51-5137156838e3-catalog-content\") pod \"e2f00338-3a6d-49f9-9d51-5137156838e3\" (UID: \"e2f00338-3a6d-49f9-9d51-5137156838e3\") " Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.290132 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f00338-3a6d-49f9-9d51-5137156838e3-utilities\") pod \"e2f00338-3a6d-49f9-9d51-5137156838e3\" (UID: \"e2f00338-3a6d-49f9-9d51-5137156838e3\") " Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.290743 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2f00338-3a6d-49f9-9d51-5137156838e3-utilities" (OuterVolumeSpecName: "utilities") pod "e2f00338-3a6d-49f9-9d51-5137156838e3" (UID: "e2f00338-3a6d-49f9-9d51-5137156838e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.294782 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2f00338-3a6d-49f9-9d51-5137156838e3-kube-api-access-zmh68" (OuterVolumeSpecName: "kube-api-access-zmh68") pod "e2f00338-3a6d-49f9-9d51-5137156838e3" (UID: "e2f00338-3a6d-49f9-9d51-5137156838e3"). InnerVolumeSpecName "kube-api-access-zmh68". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.309045 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2f00338-3a6d-49f9-9d51-5137156838e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2f00338-3a6d-49f9-9d51-5137156838e3" (UID: "e2f00338-3a6d-49f9-9d51-5137156838e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.393367 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmh68\" (UniqueName: \"kubernetes.io/projected/e2f00338-3a6d-49f9-9d51-5137156838e3-kube-api-access-zmh68\") on node \"crc\" DevicePath \"\"" Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.393587 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f00338-3a6d-49f9-9d51-5137156838e3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.393645 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f00338-3a6d-49f9-9d51-5137156838e3-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.539224 4945 generic.go:334] "Generic (PLEG): container finished" podID="e2f00338-3a6d-49f9-9d51-5137156838e3" containerID="fc9aab54542b881d3f137a0c316de59b0ae67fe7c1d646c57d1194dc2e6ffcd4" exitCode=0 Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.539322 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mm8rl" Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.539549 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm8rl" event={"ID":"e2f00338-3a6d-49f9-9d51-5137156838e3","Type":"ContainerDied","Data":"fc9aab54542b881d3f137a0c316de59b0ae67fe7c1d646c57d1194dc2e6ffcd4"} Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.539657 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm8rl" event={"ID":"e2f00338-3a6d-49f9-9d51-5137156838e3","Type":"ContainerDied","Data":"5bafb91d619248f7b927f0ba3ed1e9091c4bd1883813b76354e2d2bc8e6c839e"} Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.539688 4945 scope.go:117] "RemoveContainer" containerID="fc9aab54542b881d3f137a0c316de59b0ae67fe7c1d646c57d1194dc2e6ffcd4" Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.562191 4945 scope.go:117] "RemoveContainer" containerID="9841b177f6690795cdc277b8f99a05b5036fffc2d9e07f6198395e56429fef31" Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.577268 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm8rl"] Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.590331 4945 scope.go:117] "RemoveContainer" containerID="1b0c43f0c70851341aa8daae510867bbebcc45e3f331f8ae4f6e0fc3475f1252" Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.590717 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm8rl"] Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.660700 4945 scope.go:117] "RemoveContainer" containerID="fc9aab54542b881d3f137a0c316de59b0ae67fe7c1d646c57d1194dc2e6ffcd4" Jan 09 01:04:29 crc kubenswrapper[4945]: E0109 01:04:29.661248 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc9aab54542b881d3f137a0c316de59b0ae67fe7c1d646c57d1194dc2e6ffcd4\": container with ID starting with fc9aab54542b881d3f137a0c316de59b0ae67fe7c1d646c57d1194dc2e6ffcd4 not found: ID does not exist" containerID="fc9aab54542b881d3f137a0c316de59b0ae67fe7c1d646c57d1194dc2e6ffcd4" Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.661282 4945 
Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.661319 4945 scope.go:117] "RemoveContainer" containerID="9841b177f6690795cdc277b8f99a05b5036fffc2d9e07f6198395e56429fef31"
Jan 09 01:04:29 crc kubenswrapper[4945]: E0109 01:04:29.661909 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9841b177f6690795cdc277b8f99a05b5036fffc2d9e07f6198395e56429fef31\": container with ID starting with 9841b177f6690795cdc277b8f99a05b5036fffc2d9e07f6198395e56429fef31 not found: ID does not exist" containerID="9841b177f6690795cdc277b8f99a05b5036fffc2d9e07f6198395e56429fef31"
Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.662111 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9841b177f6690795cdc277b8f99a05b5036fffc2d9e07f6198395e56429fef31"} err="failed to get container status \"9841b177f6690795cdc277b8f99a05b5036fffc2d9e07f6198395e56429fef31\": rpc error: code = NotFound desc = could not find container \"9841b177f6690795cdc277b8f99a05b5036fffc2d9e07f6198395e56429fef31\": container with ID starting with 9841b177f6690795cdc277b8f99a05b5036fffc2d9e07f6198395e56429fef31 not found: ID does not exist"
Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.662199 4945 scope.go:117] "RemoveContainer" containerID="1b0c43f0c70851341aa8daae510867bbebcc45e3f331f8ae4f6e0fc3475f1252"
Jan 09 01:04:29 crc kubenswrapper[4945]: E0109 01:04:29.662733 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b0c43f0c70851341aa8daae510867bbebcc45e3f331f8ae4f6e0fc3475f1252\": container with ID starting with 1b0c43f0c70851341aa8daae510867bbebcc45e3f331f8ae4f6e0fc3475f1252 not found: ID does not exist" containerID="1b0c43f0c70851341aa8daae510867bbebcc45e3f331f8ae4f6e0fc3475f1252"
Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.662753 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b0c43f0c70851341aa8daae510867bbebcc45e3f331f8ae4f6e0fc3475f1252"} err="failed to get container status \"1b0c43f0c70851341aa8daae510867bbebcc45e3f331f8ae4f6e0fc3475f1252\": rpc error: code = NotFound desc = could not find container \"1b0c43f0c70851341aa8daae510867bbebcc45e3f331f8ae4f6e0fc3475f1252\": container with ID starting with 1b0c43f0c70851341aa8daae510867bbebcc45e3f331f8ae4f6e0fc3475f1252 not found: ID does not exist"
Jan 09 01:04:29 crc kubenswrapper[4945]: I0109 01:04:29.766413 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8t2dg"]
Jan 09 01:04:30 crc kubenswrapper[4945]: I0109 01:04:30.020941 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2f00338-3a6d-49f9-9d51-5137156838e3" path="/var/lib/kubelet/pods/e2f00338-3a6d-49f9-9d51-5137156838e3/volumes"
Jan 09 01:04:30 crc kubenswrapper[4945]: I0109 01:04:30.136694 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-tzfrf"] Jan 09 01:04:30 crc kubenswrapper[4945]: I0109 01:04:30.137025 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tzfrf" podUID="f1caf04d-dc9f-4207-a8d2-a47faa2620f5" containerName="registry-server" containerID="cri-o://b334092ef1c769fdfda0923d4214e75aa87efb9a18e85fa2b915be7bab9536ba" gracePeriod=2 Jan 09 01:04:30 crc kubenswrapper[4945]: I0109 01:04:30.559224 4945 generic.go:334] "Generic (PLEG): container finished" podID="f1caf04d-dc9f-4207-a8d2-a47faa2620f5" containerID="b334092ef1c769fdfda0923d4214e75aa87efb9a18e85fa2b915be7bab9536ba" exitCode=0 Jan 09 01:04:30 crc kubenswrapper[4945]: I0109 01:04:30.559430 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzfrf" event={"ID":"f1caf04d-dc9f-4207-a8d2-a47faa2620f5","Type":"ContainerDied","Data":"b334092ef1c769fdfda0923d4214e75aa87efb9a18e85fa2b915be7bab9536ba"} Jan 09 01:04:30 crc kubenswrapper[4945]: I0109 01:04:30.709790 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tzfrf" Jan 09 01:04:30 crc kubenswrapper[4945]: I0109 01:04:30.819032 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 09 01:04:30 crc kubenswrapper[4945]: I0109 01:04:30.823757 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-utilities\") pod \"f1caf04d-dc9f-4207-a8d2-a47faa2620f5\" (UID: \"f1caf04d-dc9f-4207-a8d2-a47faa2620f5\") " Jan 09 01:04:30 crc kubenswrapper[4945]: I0109 01:04:30.823854 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-catalog-content\") pod \"f1caf04d-dc9f-4207-a8d2-a47faa2620f5\" (UID: \"f1caf04d-dc9f-4207-a8d2-a47faa2620f5\") " Jan 09 01:04:30 crc kubenswrapper[4945]: I0109 01:04:30.823916 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcg7v\" (UniqueName: \"kubernetes.io/projected/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-kube-api-access-dcg7v\") pod \"f1caf04d-dc9f-4207-a8d2-a47faa2620f5\" (UID: \"f1caf04d-dc9f-4207-a8d2-a47faa2620f5\") " Jan 09 01:04:30 crc kubenswrapper[4945]: I0109 01:04:30.824465 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-utilities" (OuterVolumeSpecName: "utilities") pod "f1caf04d-dc9f-4207-a8d2-a47faa2620f5" (UID: "f1caf04d-dc9f-4207-a8d2-a47faa2620f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:04:30 crc kubenswrapper[4945]: I0109 01:04:30.840465 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-kube-api-access-dcg7v" (OuterVolumeSpecName: "kube-api-access-dcg7v") pod "f1caf04d-dc9f-4207-a8d2-a47faa2620f5" (UID: "f1caf04d-dc9f-4207-a8d2-a47faa2620f5"). InnerVolumeSpecName "kube-api-access-dcg7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:04:30 crc kubenswrapper[4945]: I0109 01:04:30.874631 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1caf04d-dc9f-4207-a8d2-a47faa2620f5" (UID: "f1caf04d-dc9f-4207-a8d2-a47faa2620f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:04:30 crc kubenswrapper[4945]: I0109 01:04:30.925975 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 01:04:30 crc kubenswrapper[4945]: I0109 01:04:30.926489 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 01:04:30 crc kubenswrapper[4945]: I0109 01:04:30.926578 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcg7v\" (UniqueName: \"kubernetes.io/projected/f1caf04d-dc9f-4207-a8d2-a47faa2620f5-kube-api-access-dcg7v\") on node \"crc\" DevicePath \"\"" Jan 09 01:04:31 crc kubenswrapper[4945]: I0109 01:04:31.578650 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tzfrf" event={"ID":"f1caf04d-dc9f-4207-a8d2-a47faa2620f5","Type":"ContainerDied","Data":"b4380f75c9fcfc0a9a848bdcc847b06bccf61f8fac95293032d242d5227396c3"} Jan 09 01:04:31 crc kubenswrapper[4945]: I0109 01:04:31.578760 4945 scope.go:117] "RemoveContainer" containerID="b334092ef1c769fdfda0923d4214e75aa87efb9a18e85fa2b915be7bab9536ba" Jan 09 01:04:31 crc kubenswrapper[4945]: I0109 01:04:31.578768 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tzfrf"
Jan 09 01:04:31 crc kubenswrapper[4945]: I0109 01:04:31.608012 4945 scope.go:117] "RemoveContainer" containerID="012c8f653fb4b8fcf419427629140512850aa94825a0cbd7066df7e3fdb552ea"
Jan 09 01:04:31 crc kubenswrapper[4945]: I0109 01:04:31.624263 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tzfrf"]
Jan 09 01:04:31 crc kubenswrapper[4945]: I0109 01:04:31.632895 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tzfrf"]
Jan 09 01:04:31 crc kubenswrapper[4945]: I0109 01:04:31.639695 4945 scope.go:117] "RemoveContainer" containerID="ac17086a999cb4af2db9baf60869dc360d4b761bae9b4b1ca4a5a6e623c65268"
Jan 09 01:04:32 crc kubenswrapper[4945]: I0109 01:04:32.011060 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1caf04d-dc9f-4207-a8d2-a47faa2620f5" path="/var/lib/kubelet/pods/f1caf04d-dc9f-4207-a8d2-a47faa2620f5/volumes"
Jan 09 01:04:34 crc kubenswrapper[4945]: I0109 01:04:34.627553 4945 generic.go:334] "Generic (PLEG): container finished" podID="515f90ea-9ad3-4d93-82c0-ccb39b893643" containerID="e6988e0a2fa47da6cd0d44877e7fa49f3d14431943e20bb01695f8f5257af1bb" exitCode=0
Jan 09 01:04:34 crc kubenswrapper[4945]: I0109 01:04:34.627652 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"515f90ea-9ad3-4d93-82c0-ccb39b893643","Type":"ContainerDied","Data":"e6988e0a2fa47da6cd0d44877e7fa49f3d14431943e20bb01695f8f5257af1bb"}
Jan 09 01:04:34 crc kubenswrapper[4945]: I0109 01:04:34.639440 4945 generic.go:334] "Generic (PLEG): container finished" podID="124ecf7f-2df8-4d30-82e9-b393785c7786" containerID="007b90e59511c7f59281e813cf1b985d7eef7c59a951fe252cad5bd65262f625" exitCode=0
Jan 09 01:04:34 crc kubenswrapper[4945]: I0109 01:04:34.639579 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"124ecf7f-2df8-4d30-82e9-b393785c7786","Type":"ContainerDied","Data":"007b90e59511c7f59281e813cf1b985d7eef7c59a951fe252cad5bd65262f625"}
Jan 09 01:04:37 crc kubenswrapper[4945]: I0109 01:04:37.001161 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31"
Jan 09 01:04:37 crc kubenswrapper[4945]: E0109 01:04:37.001904 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:04:37 crc kubenswrapper[4945]: I0109 01:04:37.676709 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"515f90ea-9ad3-4d93-82c0-ccb39b893643","Type":"ContainerStarted","Data":"cd061fb7fd62ee7e50f71f2aba9db1125c4e0194c85a504af4755d2d0a8e4c30"}
Jan 09 01:04:38 crc kubenswrapper[4945]: I0109 01:04:38.510629 4945 scope.go:117] "RemoveContainer" containerID="dd17725e34c943b0ae4e752c797d784ce9dbf0fbccc78c403bbc91128fe5fe12"
Jan 09 01:04:40 crc kubenswrapper[4945]: I0109 01:04:40.532706 4945 scope.go:117] "RemoveContainer" containerID="624f452d4e92899034642e58f4ecaf0aed7b9155c18780fe176dcd1eb87e91f6"
Jan 09 01:04:40 crc kubenswrapper[4945]: I0109 01:04:40.596892 4945 scope.go:117] "RemoveContainer" containerID="159f6bb2662aace3edae8bda41df45a946d818d9cd93fac5c5ea32338ef5ab85"
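The recurring machine-config-daemon errors here ("back-off 5m0s restarting failed container") show the crash-loop backoff already pinned at its ceiling: by default the kubelet delays each restart starting at 10s, doubling per failure, capped at 5m, and resets the backoff after the container runs cleanly for a while. A sketch of that schedule (the constants are the documented kubelet defaults, not values parsed from this log):

```python
# Sketch: kubelet's default CrashLoopBackOff delays (10s initial, doubling
# per failure, capped at 5m). By the sixth failure every retry waits 5m0s,
# which is the "back-off 5m0s" in the machine-config-daemon records above.
def backoff_delays(initial=10.0, factor=2.0, cap=300.0, restarts=8):
    delay = initial
    for n in range(1, restarts + 1):
        yield n, min(delay, cap)
        delay *= factor

for n, d in backoff_delays():
    print(f"restart {n}: wait {d:.0f}s")
# 10, 20, 40, 80, 160, 300, 300, 300 seconds
```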
Jan 09 01:04:40 crc kubenswrapper[4945]: I0109 01:04:40.749870 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"515f90ea-9ad3-4d93-82c0-ccb39b893643","Type":"ContainerStarted","Data":"6431d9d09b04e88490e9b4d0759ae57d6cd042ded0977659a815153a98a1b969"}
Jan 09 01:04:40 crc kubenswrapper[4945]: I0109 01:04:40.751210 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0"
Jan 09 01:04:40 crc kubenswrapper[4945]: I0109 01:04:40.754822 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0"
Jan 09 01:04:40 crc kubenswrapper[4945]: I0109 01:04:40.800092 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=5.536513408 podStartE2EDuration="19.800041919s" podCreationTimestamp="2026-01-09 01:04:21 +0000 UTC" firstStartedPulling="2026-01-09 01:04:22.581259918 +0000 UTC m=+6532.892418864" lastFinishedPulling="2026-01-09 01:04:36.844788429 +0000 UTC m=+6547.155947375" observedRunningTime="2026-01-09 01:04:40.776549621 +0000 UTC m=+6551.087708577" watchObservedRunningTime="2026-01-09 01:04:40.800041919 +0000 UTC m=+6551.111200865"
Jan 09 01:04:41 crc kubenswrapper[4945]: I0109 01:04:41.761675 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"124ecf7f-2df8-4d30-82e9-b393785c7786","Type":"ContainerStarted","Data":"7d6fcb8d4ea890ee340a23dedf9144921ed3762cff9e59e58fbfdc5c47c16ef8"}
Jan 09 01:04:44 crc kubenswrapper[4945]: I0109 01:04:44.791140 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"124ecf7f-2df8-4d30-82e9-b393785c7786","Type":"ContainerStarted","Data":"b7684613c19ebce28c20b8bf14a392adba490912d7937611c834bc12182705e0"}
Jan 09 01:04:49 crc kubenswrapper[4945]: I0109 01:04:49.851312 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"124ecf7f-2df8-4d30-82e9-b393785c7786","Type":"ContainerStarted","Data":"bafd619fc27b461ae42bc31ef894bb949f826b5cc77d1aa07f019d1ed9fa58d7"}
Jan 09 01:04:51 crc kubenswrapper[4945]: I0109 01:04:51.000313 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31"
Jan 09 01:04:51 crc kubenswrapper[4945]: E0109 01:04:51.000933 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:04:52 crc kubenswrapper[4945]: I0109 01:04:52.540575 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Jan 09 01:04:52 crc kubenswrapper[4945]: I0109 01:04:52.540860 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Jan 09 01:04:52 crc kubenswrapper[4945]: I0109 01:04:52.542307 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Jan 09 01:04:52 crc kubenswrapper[4945]: I0109 01:04:52.570903 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=6.595877787 podStartE2EDuration="32.570881771s" podCreationTimestamp="2026-01-09 01:04:20 +0000 UTC" firstStartedPulling="2026-01-09 01:04:23.262572863 +0000 UTC m=+6533.573731809" lastFinishedPulling="2026-01-09 01:04:49.237576847 +0000 UTC m=+6559.548735793" observedRunningTime="2026-01-09 01:04:49.926255013 +0000 UTC m=+6560.237413969" watchObservedRunningTime="2026-01-09 01:04:52.570881771 +0000 UTC m=+6562.882040727"
Jan 09 01:04:52 crc kubenswrapper[4945]: I0109 01:04:52.886286 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.551880 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 09 01:04:54 crc kubenswrapper[4945]: E0109 01:04:54.552734 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1caf04d-dc9f-4207-a8d2-a47faa2620f5" containerName="extract-utilities"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.552748 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1caf04d-dc9f-4207-a8d2-a47faa2620f5" containerName="extract-utilities"
Jan 09 01:04:54 crc kubenswrapper[4945]: E0109 01:04:54.552793 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1caf04d-dc9f-4207-a8d2-a47faa2620f5" containerName="registry-server"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.552800 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1caf04d-dc9f-4207-a8d2-a47faa2620f5" containerName="registry-server"
Jan 09 01:04:54 crc kubenswrapper[4945]: E0109 01:04:54.552807 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1caf04d-dc9f-4207-a8d2-a47faa2620f5" containerName="extract-content"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.552812 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1caf04d-dc9f-4207-a8d2-a47faa2620f5" containerName="extract-content"
Jan 09 01:04:54 crc kubenswrapper[4945]: E0109 01:04:54.552823 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f00338-3a6d-49f9-9d51-5137156838e3" containerName="extract-content"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.552831 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f00338-3a6d-49f9-9d51-5137156838e3" containerName="extract-content"
Jan 09 01:04:54 crc kubenswrapper[4945]: E0109 01:04:54.552863 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f00338-3a6d-49f9-9d51-5137156838e3" containerName="registry-server"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.552869 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f00338-3a6d-49f9-9d51-5137156838e3" containerName="registry-server"
Jan 09 01:04:54 crc kubenswrapper[4945]: E0109 01:04:54.552879 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f00338-3a6d-49f9-9d51-5137156838e3" containerName="extract-utilities"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.552885 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f00338-3a6d-49f9-9d51-5137156838e3" containerName="extract-utilities"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.553114 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1caf04d-dc9f-4207-a8d2-a47faa2620f5" containerName="registry-server"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.553139 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f00338-3a6d-49f9-9d51-5137156838e3" containerName="registry-server"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.555351 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.558181 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.559109 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.579351 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.679799 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.679869 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-config-data\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.679895 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.680090 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-scripts\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.680263 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-log-httpd\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.680383 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m95nr\" (UniqueName: \"kubernetes.io/projected/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-kube-api-access-m95nr\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.680436 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-run-httpd\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.782140 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-scripts\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.782249 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-log-httpd\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.782302 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m95nr\" (UniqueName: \"kubernetes.io/projected/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-kube-api-access-m95nr\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.782343 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-run-httpd\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.782393 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.782435 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-config-data\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.782460 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.782970 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-run-httpd\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.783442 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-log-httpd\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.788531 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.793502 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.799387 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m95nr\" (UniqueName: \"kubernetes.io/projected/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-kube-api-access-m95nr\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.799665 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-scripts\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.804980 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-config-data\") pod \"ceilometer-0\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") " pod="openstack/ceilometer-0"
Jan 09 01:04:54 crc kubenswrapper[4945]: I0109 01:04:54.883895 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 09 01:04:55 crc kubenswrapper[4945]: I0109 01:04:55.426879 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 09 01:04:55 crc kubenswrapper[4945]: I0109 01:04:55.916974 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f676ce4-e116-4b87-af86-4f6f15b4d1a7","Type":"ContainerStarted","Data":"cddf4c530c79d4b2c7febab41e341bb79ded42fc14928d8c9c4fa39368c690d4"}
Jan 09 01:04:56 crc kubenswrapper[4945]: I0109 01:04:56.928824 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f676ce4-e116-4b87-af86-4f6f15b4d1a7","Type":"ContainerStarted","Data":"118d7d8ac2fe6d40895a81b7d48ef5cdeacdf69c8ab3e850658c160435bdd80d"}
Jan 09 01:04:57 crc kubenswrapper[4945]: I0109 01:04:57.943496 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f676ce4-e116-4b87-af86-4f6f15b4d1a7","Type":"ContainerStarted","Data":"7f25ebf9f5fd2c152172b51c15c9714c4c92db591d646b8066759f439b65d585"}
Jan 09 01:04:57 crc kubenswrapper[4945]: I0109 01:04:57.943849 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f676ce4-e116-4b87-af86-4f6f15b4d1a7","Type":"ContainerStarted","Data":"7231370f72f6275dee22622f7f4047a218db57cc6a8b664e14a0f26dc28816ae"}
Jan 09 01:05:01 crc kubenswrapper[4945]: I0109 01:05:01.988476 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f676ce4-e116-4b87-af86-4f6f15b4d1a7","Type":"ContainerStarted","Data":"a0fcb246e05927318cc38aa7691171d0a1838495d01b55511664f768fba2b932"}
Jan 09 01:05:01 crc kubenswrapper[4945]: I0109 01:05:01.990386 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 09 01:05:02 crc kubenswrapper[4945]: I0109 01:05:02.026160 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.570330938 podStartE2EDuration="8.025847084s" podCreationTimestamp="2026-01-09 01:04:54 +0000 UTC" firstStartedPulling="2026-01-09 01:04:55.429031833 +0000 UTC m=+6565.740190779" lastFinishedPulling="2026-01-09 01:05:00.884547979 +0000 UTC m=+6571.195706925" observedRunningTime="2026-01-09 01:05:02.016153396 +0000 UTC m=+6572.327312352" watchObservedRunningTime="2026-01-09 01:05:02.025847084 +0000 UTC m=+6572.337006030"
Jan 09 01:05:04 crc kubenswrapper[4945]: I0109 01:05:04.002570 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31"
Jan 09 01:05:04 crc kubenswrapper[4945]: E0109 01:05:04.003780 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.542570 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-xpp5z"]
Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.545086 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-xpp5z"
Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.557805 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-xpp5z"]
Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.650969 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-f55d-account-create-update-8bjjl"]
Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.652429 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-f55d-account-create-update-8bjjl"
Need to start a new one" pod="openstack/aodh-f55d-account-create-update-8bjjl" Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.654455 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7wft\" (UniqueName: \"kubernetes.io/projected/2379e260-5c37-4fb1-9216-a8b2037dcdc4-kube-api-access-q7wft\") pod \"aodh-db-create-xpp5z\" (UID: \"2379e260-5c37-4fb1-9216-a8b2037dcdc4\") " pod="openstack/aodh-db-create-xpp5z" Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.654501 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2379e260-5c37-4fb1-9216-a8b2037dcdc4-operator-scripts\") pod \"aodh-db-create-xpp5z\" (UID: \"2379e260-5c37-4fb1-9216-a8b2037dcdc4\") " pod="openstack/aodh-db-create-xpp5z" Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.654786 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.660320 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-f55d-account-create-update-8bjjl"] Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.756911 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7wft\" (UniqueName: \"kubernetes.io/projected/2379e260-5c37-4fb1-9216-a8b2037dcdc4-kube-api-access-q7wft\") pod \"aodh-db-create-xpp5z\" (UID: \"2379e260-5c37-4fb1-9216-a8b2037dcdc4\") " pod="openstack/aodh-db-create-xpp5z" Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.757003 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2379e260-5c37-4fb1-9216-a8b2037dcdc4-operator-scripts\") pod \"aodh-db-create-xpp5z\" (UID: \"2379e260-5c37-4fb1-9216-a8b2037dcdc4\") " pod="openstack/aodh-db-create-xpp5z" Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.757039 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93b4bbab-8905-41f8-8226-da26a5d644aa-operator-scripts\") pod \"aodh-f55d-account-create-update-8bjjl\" (UID: \"93b4bbab-8905-41f8-8226-da26a5d644aa\") " pod="openstack/aodh-f55d-account-create-update-8bjjl" Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.757136 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msltf\" (UniqueName: \"kubernetes.io/projected/93b4bbab-8905-41f8-8226-da26a5d644aa-kube-api-access-msltf\") pod \"aodh-f55d-account-create-update-8bjjl\" (UID: \"93b4bbab-8905-41f8-8226-da26a5d644aa\") " pod="openstack/aodh-f55d-account-create-update-8bjjl" Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.757922 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2379e260-5c37-4fb1-9216-a8b2037dcdc4-operator-scripts\") pod \"aodh-db-create-xpp5z\" (UID: \"2379e260-5c37-4fb1-9216-a8b2037dcdc4\") " pod="openstack/aodh-db-create-xpp5z" Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.785873 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7wft\" (UniqueName: \"kubernetes.io/projected/2379e260-5c37-4fb1-9216-a8b2037dcdc4-kube-api-access-q7wft\") pod \"aodh-db-create-xpp5z\" (UID: 
\"2379e260-5c37-4fb1-9216-a8b2037dcdc4\") " pod="openstack/aodh-db-create-xpp5z" Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.858883 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93b4bbab-8905-41f8-8226-da26a5d644aa-operator-scripts\") pod \"aodh-f55d-account-create-update-8bjjl\" (UID: \"93b4bbab-8905-41f8-8226-da26a5d644aa\") " pod="openstack/aodh-f55d-account-create-update-8bjjl" Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.859006 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msltf\" (UniqueName: \"kubernetes.io/projected/93b4bbab-8905-41f8-8226-da26a5d644aa-kube-api-access-msltf\") pod \"aodh-f55d-account-create-update-8bjjl\" (UID: \"93b4bbab-8905-41f8-8226-da26a5d644aa\") " pod="openstack/aodh-f55d-account-create-update-8bjjl" Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.860025 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93b4bbab-8905-41f8-8226-da26a5d644aa-operator-scripts\") pod \"aodh-f55d-account-create-update-8bjjl\" (UID: \"93b4bbab-8905-41f8-8226-da26a5d644aa\") " pod="openstack/aodh-f55d-account-create-update-8bjjl" Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.873915 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-xpp5z" Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.878753 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msltf\" (UniqueName: \"kubernetes.io/projected/93b4bbab-8905-41f8-8226-da26a5d644aa-kube-api-access-msltf\") pod \"aodh-f55d-account-create-update-8bjjl\" (UID: \"93b4bbab-8905-41f8-8226-da26a5d644aa\") " pod="openstack/aodh-f55d-account-create-update-8bjjl" Jan 09 01:05:06 crc kubenswrapper[4945]: I0109 01:05:06.979594 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-f55d-account-create-update-8bjjl" Jan 09 01:05:07 crc kubenswrapper[4945]: I0109 01:05:07.426210 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-xpp5z"] Jan 09 01:05:07 crc kubenswrapper[4945]: I0109 01:05:07.571619 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-f55d-account-create-update-8bjjl"] Jan 09 01:05:07 crc kubenswrapper[4945]: W0109 01:05:07.581440 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93b4bbab_8905_41f8_8226_da26a5d644aa.slice/crio-7702c68ac6dfcd55bbad350c762c4136c3db5f5007922ea500e7a376d96103df WatchSource:0}: Error finding container 7702c68ac6dfcd55bbad350c762c4136c3db5f5007922ea500e7a376d96103df: Status 404 returned error can't find the container with id 7702c68ac6dfcd55bbad350c762c4136c3db5f5007922ea500e7a376d96103df Jan 09 01:05:08 crc kubenswrapper[4945]: I0109 01:05:08.068027 4945 generic.go:334] "Generic (PLEG): container finished" podID="93b4bbab-8905-41f8-8226-da26a5d644aa" containerID="36e7722651311a55865afe553a6d99e8b531d87ed3d2178adb9552161d49fc25" exitCode=0 Jan 09 01:05:08 crc kubenswrapper[4945]: I0109 01:05:08.068296 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-f55d-account-create-update-8bjjl" event={"ID":"93b4bbab-8905-41f8-8226-da26a5d644aa","Type":"ContainerDied","Data":"36e7722651311a55865afe553a6d99e8b531d87ed3d2178adb9552161d49fc25"} Jan 09 01:05:08 crc kubenswrapper[4945]: I0109 01:05:08.068351 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-f55d-account-create-update-8bjjl" event={"ID":"93b4bbab-8905-41f8-8226-da26a5d644aa","Type":"ContainerStarted","Data":"7702c68ac6dfcd55bbad350c762c4136c3db5f5007922ea500e7a376d96103df"} Jan 09 01:05:08 crc kubenswrapper[4945]: I0109 01:05:08.070255 4945 generic.go:334] "Generic (PLEG): container finished" podID="2379e260-5c37-4fb1-9216-a8b2037dcdc4" containerID="99f28ad374f8b7277676403e8b10e27f1bd49668e13e2c6c7d1c80f32399f1e3" exitCode=0 Jan 09 01:05:08 crc kubenswrapper[4945]: I0109 01:05:08.070299 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-xpp5z" event={"ID":"2379e260-5c37-4fb1-9216-a8b2037dcdc4","Type":"ContainerDied","Data":"99f28ad374f8b7277676403e8b10e27f1bd49668e13e2c6c7d1c80f32399f1e3"} Jan 09 01:05:08 crc kubenswrapper[4945]: I0109 01:05:08.070318 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-xpp5z" event={"ID":"2379e260-5c37-4fb1-9216-a8b2037dcdc4","Type":"ContainerStarted","Data":"57e62958e9cafdc4e4e15dedd2bbb5047376da29bd7fbc20d1f1a4c25e28fa0a"} Jan 09 01:05:09 crc kubenswrapper[4945]: I0109 01:05:09.604416 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-f55d-account-create-update-8bjjl" Jan 09 01:05:09 crc kubenswrapper[4945]: I0109 01:05:09.617141 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-xpp5z" Jan 09 01:05:09 crc kubenswrapper[4945]: I0109 01:05:09.729629 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93b4bbab-8905-41f8-8226-da26a5d644aa-operator-scripts\") pod \"93b4bbab-8905-41f8-8226-da26a5d644aa\" (UID: \"93b4bbab-8905-41f8-8226-da26a5d644aa\") " Jan 09 01:05:09 crc kubenswrapper[4945]: I0109 01:05:09.729958 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msltf\" (UniqueName: \"kubernetes.io/projected/93b4bbab-8905-41f8-8226-da26a5d644aa-kube-api-access-msltf\") pod \"93b4bbab-8905-41f8-8226-da26a5d644aa\" (UID: \"93b4bbab-8905-41f8-8226-da26a5d644aa\") " Jan 09 01:05:09 crc kubenswrapper[4945]: I0109 01:05:09.730128 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2379e260-5c37-4fb1-9216-a8b2037dcdc4-operator-scripts\") pod \"2379e260-5c37-4fb1-9216-a8b2037dcdc4\" (UID: \"2379e260-5c37-4fb1-9216-a8b2037dcdc4\") " Jan 09 01:05:09 crc kubenswrapper[4945]: I0109 01:05:09.730129 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93b4bbab-8905-41f8-8226-da26a5d644aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93b4bbab-8905-41f8-8226-da26a5d644aa" (UID: "93b4bbab-8905-41f8-8226-da26a5d644aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:05:09 crc kubenswrapper[4945]: I0109 01:05:09.730373 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7wft\" (UniqueName: \"kubernetes.io/projected/2379e260-5c37-4fb1-9216-a8b2037dcdc4-kube-api-access-q7wft\") pod \"2379e260-5c37-4fb1-9216-a8b2037dcdc4\" (UID: \"2379e260-5c37-4fb1-9216-a8b2037dcdc4\") " Jan 09 01:05:09 crc kubenswrapper[4945]: I0109 01:05:09.730901 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93b4bbab-8905-41f8-8226-da26a5d644aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:09 crc kubenswrapper[4945]: I0109 01:05:09.731117 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2379e260-5c37-4fb1-9216-a8b2037dcdc4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2379e260-5c37-4fb1-9216-a8b2037dcdc4" (UID: "2379e260-5c37-4fb1-9216-a8b2037dcdc4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:05:09 crc kubenswrapper[4945]: I0109 01:05:09.735666 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93b4bbab-8905-41f8-8226-da26a5d644aa-kube-api-access-msltf" (OuterVolumeSpecName: "kube-api-access-msltf") pod "93b4bbab-8905-41f8-8226-da26a5d644aa" (UID: "93b4bbab-8905-41f8-8226-da26a5d644aa"). InnerVolumeSpecName "kube-api-access-msltf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:05:09 crc kubenswrapper[4945]: I0109 01:05:09.735975 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2379e260-5c37-4fb1-9216-a8b2037dcdc4-kube-api-access-q7wft" (OuterVolumeSpecName: "kube-api-access-q7wft") pod "2379e260-5c37-4fb1-9216-a8b2037dcdc4" (UID: "2379e260-5c37-4fb1-9216-a8b2037dcdc4"). InnerVolumeSpecName "kube-api-access-q7wft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:05:09 crc kubenswrapper[4945]: I0109 01:05:09.832701 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msltf\" (UniqueName: \"kubernetes.io/projected/93b4bbab-8905-41f8-8226-da26a5d644aa-kube-api-access-msltf\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:09 crc kubenswrapper[4945]: I0109 01:05:09.832741 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2379e260-5c37-4fb1-9216-a8b2037dcdc4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:09 crc kubenswrapper[4945]: I0109 01:05:09.832752 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7wft\" (UniqueName: \"kubernetes.io/projected/2379e260-5c37-4fb1-9216-a8b2037dcdc4-kube-api-access-q7wft\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:10 crc kubenswrapper[4945]: I0109 01:05:10.067804 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7dp9w"] Jan 09 01:05:10 crc kubenswrapper[4945]: I0109 01:05:10.086767 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3127-account-create-update-tnrnz"] Jan 09 01:05:10 crc kubenswrapper[4945]: I0109 01:05:10.089167 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-f55d-account-create-update-8bjjl" event={"ID":"93b4bbab-8905-41f8-8226-da26a5d644aa","Type":"ContainerDied","Data":"7702c68ac6dfcd55bbad350c762c4136c3db5f5007922ea500e7a376d96103df"} Jan 09 01:05:10 crc kubenswrapper[4945]: I0109 01:05:10.089202 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-f55d-account-create-update-8bjjl" Jan 09 01:05:10 crc kubenswrapper[4945]: I0109 01:05:10.089212 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7702c68ac6dfcd55bbad350c762c4136c3db5f5007922ea500e7a376d96103df" Jan 09 01:05:10 crc kubenswrapper[4945]: I0109 01:05:10.090573 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-xpp5z" event={"ID":"2379e260-5c37-4fb1-9216-a8b2037dcdc4","Type":"ContainerDied","Data":"57e62958e9cafdc4e4e15dedd2bbb5047376da29bd7fbc20d1f1a4c25e28fa0a"} Jan 09 01:05:10 crc kubenswrapper[4945]: I0109 01:05:10.090600 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57e62958e9cafdc4e4e15dedd2bbb5047376da29bd7fbc20d1f1a4c25e28fa0a" Jan 09 01:05:10 crc kubenswrapper[4945]: I0109 01:05:10.090668 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-xpp5z" Jan 09 01:05:10 crc kubenswrapper[4945]: I0109 01:05:10.104683 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-nvm8x"] Jan 09 01:05:10 crc kubenswrapper[4945]: I0109 01:05:10.118322 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-79e3-account-create-update-9rgmt"] Jan 09 01:05:10 crc kubenswrapper[4945]: I0109 01:05:10.127463 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-79e3-account-create-update-9rgmt"] Jan 09 01:05:10 crc kubenswrapper[4945]: I0109 01:05:10.135882 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7dp9w"] Jan 09 01:05:10 crc kubenswrapper[4945]: I0109 01:05:10.144666 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-nvm8x"] Jan 09 01:05:10 crc kubenswrapper[4945]: I0109 01:05:10.161281 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3127-account-create-update-tnrnz"] Jan 09 01:05:11 crc kubenswrapper[4945]: I0109 01:05:11.037873 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-vsm4l"] Jan 09 01:05:11 crc kubenswrapper[4945]: I0109 01:05:11.050182 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-73e8-account-create-update-tz4jl"] Jan 09 01:05:11 crc kubenswrapper[4945]: I0109 01:05:11.060620 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-vsm4l"] Jan 09 01:05:11 crc kubenswrapper[4945]: I0109 01:05:11.073267 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-73e8-account-create-update-tz4jl"] Jan 09 01:05:11 crc kubenswrapper[4945]: I0109 01:05:11.982827 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-cp4nf"] Jan 09 01:05:11 crc kubenswrapper[4945]: E0109 01:05:11.983591 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2379e260-5c37-4fb1-9216-a8b2037dcdc4" containerName="mariadb-database-create" Jan 09 01:05:11 crc kubenswrapper[4945]: I0109 01:05:11.983606 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2379e260-5c37-4fb1-9216-a8b2037dcdc4" containerName="mariadb-database-create" Jan 09 01:05:11 crc kubenswrapper[4945]: E0109 01:05:11.983616 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93b4bbab-8905-41f8-8226-da26a5d644aa" containerName="mariadb-account-create-update" Jan 09 01:05:11 crc kubenswrapper[4945]: I0109 01:05:11.983622 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="93b4bbab-8905-41f8-8226-da26a5d644aa" containerName="mariadb-account-create-update" Jan 09 01:05:11 crc kubenswrapper[4945]: I0109 01:05:11.983814 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="93b4bbab-8905-41f8-8226-da26a5d644aa" containerName="mariadb-account-create-update" Jan 09 01:05:11 crc kubenswrapper[4945]: I0109 01:05:11.983828 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="2379e260-5c37-4fb1-9216-a8b2037dcdc4" containerName="mariadb-database-create" Jan 09 01:05:11 crc kubenswrapper[4945]: I0109 01:05:11.984844 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-cp4nf" Jan 09 01:05:11 crc kubenswrapper[4945]: I0109 01:05:11.987770 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-2h7dg" Jan 09 01:05:11 crc kubenswrapper[4945]: I0109 01:05:11.987911 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 09 01:05:11 crc kubenswrapper[4945]: I0109 01:05:11.987942 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 09 01:05:11 crc kubenswrapper[4945]: I0109 01:05:11.989598 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 09 01:05:11 crc kubenswrapper[4945]: I0109 01:05:11.994724 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-cp4nf"] Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.020782 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10" path="/var/lib/kubelet/pods/35bfe4a6-a7b0-407a-b5a4-9c1f69b2bb10/volumes" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.021843 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e45dcc7-78b7-462c-9137-48181b6c114a" path="/var/lib/kubelet/pods/4e45dcc7-78b7-462c-9137-48181b6c114a/volumes" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.022501 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e4d7a36-fe46-48ef-835d-9ceef2b01dd3" path="/var/lib/kubelet/pods/8e4d7a36-fe46-48ef-835d-9ceef2b01dd3/volumes" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.023192 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a34718dd-0f79-40b2-afeb-e0c2f4b05d74" path="/var/lib/kubelet/pods/a34718dd-0f79-40b2-afeb-e0c2f4b05d74/volumes" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.025790 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1" path="/var/lib/kubelet/pods/b60bef7f-d7cd-4be3-b1ab-3a451ad1c7d1/volumes" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.026517 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1a19223-51f2-407d-92a3-8bea7f37f1fb" path="/var/lib/kubelet/pods/d1a19223-51f2-407d-92a3-8bea7f37f1fb/volumes" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.093050 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjms6\" (UniqueName: \"kubernetes.io/projected/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-kube-api-access-jjms6\") pod \"aodh-db-sync-cp4nf\" (UID: \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\") " pod="openstack/aodh-db-sync-cp4nf" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.093204 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-scripts\") pod \"aodh-db-sync-cp4nf\" (UID: \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\") " pod="openstack/aodh-db-sync-cp4nf" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.093275 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-config-data\") pod \"aodh-db-sync-cp4nf\" (UID: \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\") " pod="openstack/aodh-db-sync-cp4nf" Jan 09 01:05:12 crc 
kubenswrapper[4945]: I0109 01:05:12.093608 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-combined-ca-bundle\") pod \"aodh-db-sync-cp4nf\" (UID: \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\") " pod="openstack/aodh-db-sync-cp4nf" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.195391 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-combined-ca-bundle\") pod \"aodh-db-sync-cp4nf\" (UID: \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\") " pod="openstack/aodh-db-sync-cp4nf" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.195520 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjms6\" (UniqueName: \"kubernetes.io/projected/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-kube-api-access-jjms6\") pod \"aodh-db-sync-cp4nf\" (UID: \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\") " pod="openstack/aodh-db-sync-cp4nf" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.195604 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-scripts\") pod \"aodh-db-sync-cp4nf\" (UID: \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\") " pod="openstack/aodh-db-sync-cp4nf" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.195638 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-config-data\") pod \"aodh-db-sync-cp4nf\" (UID: \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\") " pod="openstack/aodh-db-sync-cp4nf" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.202617 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-config-data\") pod \"aodh-db-sync-cp4nf\" (UID: \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\") " pod="openstack/aodh-db-sync-cp4nf" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.211315 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-scripts\") pod \"aodh-db-sync-cp4nf\" (UID: \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\") " pod="openstack/aodh-db-sync-cp4nf" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.212786 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-combined-ca-bundle\") pod \"aodh-db-sync-cp4nf\" (UID: \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\") " pod="openstack/aodh-db-sync-cp4nf" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.216650 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjms6\" (UniqueName: \"kubernetes.io/projected/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-kube-api-access-jjms6\") pod \"aodh-db-sync-cp4nf\" (UID: \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\") " pod="openstack/aodh-db-sync-cp4nf" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.321492 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-cp4nf" Jan 09 01:05:12 crc kubenswrapper[4945]: I0109 01:05:12.769067 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-cp4nf"] Jan 09 01:05:13 crc kubenswrapper[4945]: I0109 01:05:13.130073 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-cp4nf" event={"ID":"afe16e63-4938-4e6e-86d6-96b8ccfc95cb","Type":"ContainerStarted","Data":"bc4cebc340688039cc63bd4f5066a695eaac09f41a619f1ebe7e2adab34d182a"} Jan 09 01:05:18 crc kubenswrapper[4945]: I0109 01:05:18.206449 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-cp4nf" event={"ID":"afe16e63-4938-4e6e-86d6-96b8ccfc95cb","Type":"ContainerStarted","Data":"9d758a92d2249cf72cd4fa253393cd510675c16432039506ef4f6552ab30507c"} Jan 09 01:05:18 crc kubenswrapper[4945]: I0109 01:05:18.227116 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-cp4nf" podStartSLOduration=2.966349961 podStartE2EDuration="7.227092898s" podCreationTimestamp="2026-01-09 01:05:11 +0000 UTC" firstStartedPulling="2026-01-09 01:05:12.771868079 +0000 UTC m=+6583.083027025" lastFinishedPulling="2026-01-09 01:05:17.032611016 +0000 UTC m=+6587.343769962" observedRunningTime="2026-01-09 01:05:18.224404722 +0000 UTC m=+6588.535563668" watchObservedRunningTime="2026-01-09 01:05:18.227092898 +0000 UTC m=+6588.538251844" Jan 09 01:05:19 crc kubenswrapper[4945]: I0109 01:05:19.001340 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:05:19 crc kubenswrapper[4945]: E0109 01:05:19.001650 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:05:20 crc kubenswrapper[4945]: I0109 01:05:20.042214 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2vhh7"] Jan 09 01:05:20 crc kubenswrapper[4945]: I0109 01:05:20.057224 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2vhh7"] Jan 09 01:05:20 crc kubenswrapper[4945]: I0109 01:05:20.226550 4945 generic.go:334] "Generic (PLEG): container finished" podID="afe16e63-4938-4e6e-86d6-96b8ccfc95cb" containerID="9d758a92d2249cf72cd4fa253393cd510675c16432039506ef4f6552ab30507c" exitCode=0 Jan 09 01:05:20 crc kubenswrapper[4945]: I0109 01:05:20.226604 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-cp4nf" event={"ID":"afe16e63-4938-4e6e-86d6-96b8ccfc95cb","Type":"ContainerDied","Data":"9d758a92d2249cf72cd4fa253393cd510675c16432039506ef4f6552ab30507c"} Jan 09 01:05:21 crc kubenswrapper[4945]: I0109 01:05:21.641351 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-cp4nf" Jan 09 01:05:21 crc kubenswrapper[4945]: I0109 01:05:21.813190 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-config-data\") pod \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\" (UID: \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\") " Jan 09 01:05:21 crc kubenswrapper[4945]: I0109 01:05:21.813301 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-combined-ca-bundle\") pod \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\" (UID: \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\") " Jan 09 01:05:21 crc kubenswrapper[4945]: I0109 01:05:21.813354 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-scripts\") pod \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\" (UID: \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\") " Jan 09 01:05:21 crc kubenswrapper[4945]: I0109 01:05:21.813526 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjms6\" (UniqueName: \"kubernetes.io/projected/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-kube-api-access-jjms6\") pod \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\" (UID: \"afe16e63-4938-4e6e-86d6-96b8ccfc95cb\") " Jan 09 01:05:21 crc kubenswrapper[4945]: I0109 01:05:21.819707 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-scripts" (OuterVolumeSpecName: "scripts") pod "afe16e63-4938-4e6e-86d6-96b8ccfc95cb" (UID: "afe16e63-4938-4e6e-86d6-96b8ccfc95cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:05:21 crc kubenswrapper[4945]: I0109 01:05:21.823674 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-kube-api-access-jjms6" (OuterVolumeSpecName: "kube-api-access-jjms6") pod "afe16e63-4938-4e6e-86d6-96b8ccfc95cb" (UID: "afe16e63-4938-4e6e-86d6-96b8ccfc95cb"). InnerVolumeSpecName "kube-api-access-jjms6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:05:21 crc kubenswrapper[4945]: I0109 01:05:21.842329 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afe16e63-4938-4e6e-86d6-96b8ccfc95cb" (UID: "afe16e63-4938-4e6e-86d6-96b8ccfc95cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:05:21 crc kubenswrapper[4945]: I0109 01:05:21.842681 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-config-data" (OuterVolumeSpecName: "config-data") pod "afe16e63-4938-4e6e-86d6-96b8ccfc95cb" (UID: "afe16e63-4938-4e6e-86d6-96b8ccfc95cb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:05:21 crc kubenswrapper[4945]: I0109 01:05:21.917566 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:21 crc kubenswrapper[4945]: I0109 01:05:21.917609 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:21 crc kubenswrapper[4945]: I0109 01:05:21.917624 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:21 crc kubenswrapper[4945]: I0109 01:05:21.917637 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjms6\" (UniqueName: \"kubernetes.io/projected/afe16e63-4938-4e6e-86d6-96b8ccfc95cb-kube-api-access-jjms6\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:22 crc kubenswrapper[4945]: I0109 01:05:22.019287 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f55bd014-f2d4-456a-9961-5f7db7ff79d2" path="/var/lib/kubelet/pods/f55bd014-f2d4-456a-9961-5f7db7ff79d2/volumes" Jan 09 01:05:22 crc kubenswrapper[4945]: I0109 01:05:22.268599 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-cp4nf" event={"ID":"afe16e63-4938-4e6e-86d6-96b8ccfc95cb","Type":"ContainerDied","Data":"bc4cebc340688039cc63bd4f5066a695eaac09f41a619f1ebe7e2adab34d182a"} Jan 09 01:05:22 crc kubenswrapper[4945]: I0109 01:05:22.268644 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc4cebc340688039cc63bd4f5066a695eaac09f41a619f1ebe7e2adab34d182a" Jan 09 01:05:22 crc kubenswrapper[4945]: I0109 01:05:22.268707 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-cp4nf" Jan 09 01:05:24 crc kubenswrapper[4945]: I0109 01:05:24.901494 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.645284 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 09 01:05:26 crc kubenswrapper[4945]: E0109 01:05:26.646367 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe16e63-4938-4e6e-86d6-96b8ccfc95cb" containerName="aodh-db-sync" Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.646394 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe16e63-4938-4e6e-86d6-96b8ccfc95cb" containerName="aodh-db-sync" Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.646819 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe16e63-4938-4e6e-86d6-96b8ccfc95cb" containerName="aodh-db-sync" Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.651178 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.663792 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-2h7dg" Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.664493 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.665883 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.668038 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.741524 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8f820e-0157-4bc4-b675-a96d5a704c07-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1d8f820e-0157-4bc4-b675-a96d5a704c07\") " pod="openstack/aodh-0" Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.741702 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rwmg\" (UniqueName: \"kubernetes.io/projected/1d8f820e-0157-4bc4-b675-a96d5a704c07-kube-api-access-4rwmg\") pod \"aodh-0\" (UID: \"1d8f820e-0157-4bc4-b675-a96d5a704c07\") " pod="openstack/aodh-0" Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.741845 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d8f820e-0157-4bc4-b675-a96d5a704c07-scripts\") pod \"aodh-0\" (UID: \"1d8f820e-0157-4bc4-b675-a96d5a704c07\") " pod="openstack/aodh-0" Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.741926 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8f820e-0157-4bc4-b675-a96d5a704c07-config-data\") pod \"aodh-0\" (UID: \"1d8f820e-0157-4bc4-b675-a96d5a704c07\") " pod="openstack/aodh-0" Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.843842 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8f820e-0157-4bc4-b675-a96d5a704c07-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1d8f820e-0157-4bc4-b675-a96d5a704c07\") " pod="openstack/aodh-0" Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.843971 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rwmg\" (UniqueName: \"kubernetes.io/projected/1d8f820e-0157-4bc4-b675-a96d5a704c07-kube-api-access-4rwmg\") pod \"aodh-0\" (UID: \"1d8f820e-0157-4bc4-b675-a96d5a704c07\") " pod="openstack/aodh-0" Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.844103 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d8f820e-0157-4bc4-b675-a96d5a704c07-scripts\") pod \"aodh-0\" (UID: \"1d8f820e-0157-4bc4-b675-a96d5a704c07\") " pod="openstack/aodh-0" Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.844159 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8f820e-0157-4bc4-b675-a96d5a704c07-config-data\") pod \"aodh-0\" (UID: \"1d8f820e-0157-4bc4-b675-a96d5a704c07\") " pod="openstack/aodh-0" Jan 09 01:05:26 crc kubenswrapper[4945]: 
Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.851730 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8f820e-0157-4bc4-b675-a96d5a704c07-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1d8f820e-0157-4bc4-b675-a96d5a704c07\") " pod="openstack/aodh-0"
Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.852033 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8f820e-0157-4bc4-b675-a96d5a704c07-config-data\") pod \"aodh-0\" (UID: \"1d8f820e-0157-4bc4-b675-a96d5a704c07\") " pod="openstack/aodh-0"
Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.858121 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d8f820e-0157-4bc4-b675-a96d5a704c07-scripts\") pod \"aodh-0\" (UID: \"1d8f820e-0157-4bc4-b675-a96d5a704c07\") " pod="openstack/aodh-0"
Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.865172 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rwmg\" (UniqueName: \"kubernetes.io/projected/1d8f820e-0157-4bc4-b675-a96d5a704c07-kube-api-access-4rwmg\") pod \"aodh-0\" (UID: \"1d8f820e-0157-4bc4-b675-a96d5a704c07\") " pod="openstack/aodh-0"
Jan 09 01:05:26 crc kubenswrapper[4945]: I0109 01:05:26.986088 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Jan 09 01:05:27 crc kubenswrapper[4945]: I0109 01:05:27.522294 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Jan 09 01:05:28 crc kubenswrapper[4945]: I0109 01:05:28.244791 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 09 01:05:28 crc kubenswrapper[4945]: I0109 01:05:28.245575 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerName="ceilometer-central-agent" containerID="cri-o://118d7d8ac2fe6d40895a81b7d48ef5cdeacdf69c8ab3e850658c160435bdd80d" gracePeriod=30
Jan 09 01:05:28 crc kubenswrapper[4945]: I0109 01:05:28.245610 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerName="proxy-httpd" containerID="cri-o://a0fcb246e05927318cc38aa7691171d0a1838495d01b55511664f768fba2b932" gracePeriod=30
Jan 09 01:05:28 crc kubenswrapper[4945]: I0109 01:05:28.245677 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerName="ceilometer-notification-agent" containerID="cri-o://7231370f72f6275dee22622f7f4047a218db57cc6a8b664e14a0f26dc28816ae" gracePeriod=30
Jan 09 01:05:28 crc kubenswrapper[4945]: I0109 01:05:28.245654 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerName="sg-core" containerID="cri-o://7f25ebf9f5fd2c152172b51c15c9714c4c92db591d646b8066759f439b65d585" gracePeriod=30
Jan 09 01:05:28 crc kubenswrapper[4945]: I0109 01:05:28.330085 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1d8f820e-0157-4bc4-b675-a96d5a704c07","Type":"ContainerStarted","Data":"654b7f45a778c78541ab854bbb2069743f3172054a0961395016850364b305a4"}
Jan 09 01:05:29 crc kubenswrapper[4945]: I0109 01:05:29.344863 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1d8f820e-0157-4bc4-b675-a96d5a704c07","Type":"ContainerStarted","Data":"523ecf3cc33d582e644747cce477af7900b0d981edebd58b4d97690e5ffdbe89"}
Jan 09 01:05:29 crc kubenswrapper[4945]: I0109 01:05:29.347833 4945 generic.go:334] "Generic (PLEG): container finished" podID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerID="a0fcb246e05927318cc38aa7691171d0a1838495d01b55511664f768fba2b932" exitCode=0
Jan 09 01:05:29 crc kubenswrapper[4945]: I0109 01:05:29.347869 4945 generic.go:334] "Generic (PLEG): container finished" podID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerID="7f25ebf9f5fd2c152172b51c15c9714c4c92db591d646b8066759f439b65d585" exitCode=2
Jan 09 01:05:29 crc kubenswrapper[4945]: I0109 01:05:29.347879 4945 generic.go:334] "Generic (PLEG): container finished" podID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerID="118d7d8ac2fe6d40895a81b7d48ef5cdeacdf69c8ab3e850658c160435bdd80d" exitCode=0
Jan 09 01:05:29 crc kubenswrapper[4945]: I0109 01:05:29.347891 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f676ce4-e116-4b87-af86-4f6f15b4d1a7","Type":"ContainerDied","Data":"a0fcb246e05927318cc38aa7691171d0a1838495d01b55511664f768fba2b932"}
Jan 09 01:05:29 crc kubenswrapper[4945]: I0109 01:05:29.347921 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f676ce4-e116-4b87-af86-4f6f15b4d1a7","Type":"ContainerDied","Data":"7f25ebf9f5fd2c152172b51c15c9714c4c92db591d646b8066759f439b65d585"}
Jan 09 01:05:29 crc kubenswrapper[4945]: I0109 01:05:29.347932 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f676ce4-e116-4b87-af86-4f6f15b4d1a7","Type":"ContainerDied","Data":"118d7d8ac2fe6d40895a81b7d48ef5cdeacdf69c8ab3e850658c160435bdd80d"}
Jan 09 01:05:30 crc kubenswrapper[4945]: I0109 01:05:30.035638 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31"
Jan 09 01:05:30 crc kubenswrapper[4945]: E0109 01:05:30.036729 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:05:30 crc kubenswrapper[4945]: I0109 01:05:30.358770 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1d8f820e-0157-4bc4-b675-a96d5a704c07","Type":"ContainerStarted","Data":"df9b57a62cf40fab7da50746141a30726aa1ea923363a153681415ac20ee6fec"}
Jan 09 01:05:32 crc kubenswrapper[4945]: I0109 01:05:32.382384 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1d8f820e-0157-4bc4-b675-a96d5a704c07","Type":"ContainerStarted","Data":"0a97cc39fb33a23356e5892ede923b60df0b2b47cf8ea217fe78f745e0d061ca"}
Jan 09 01:05:33 crc kubenswrapper[4945]: I0109 01:05:33.932049 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.011107 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m95nr\" (UniqueName: \"kubernetes.io/projected/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-kube-api-access-m95nr\") pod \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") "
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.011248 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-scripts\") pod \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") "
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.011279 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-run-httpd\") pod \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") "
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.011333 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-combined-ca-bundle\") pod \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") "
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.011384 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-sg-core-conf-yaml\") pod \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") "
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.011405 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-log-httpd\") pod \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") "
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.011496 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-config-data\") pod \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\" (UID: \"0f676ce4-e116-4b87-af86-4f6f15b4d1a7\") "
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.011703 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0f676ce4-e116-4b87-af86-4f6f15b4d1a7" (UID: "0f676ce4-e116-4b87-af86-4f6f15b4d1a7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.012190 4945 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.017138 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-kube-api-access-m95nr" (OuterVolumeSpecName: "kube-api-access-m95nr") pod "0f676ce4-e116-4b87-af86-4f6f15b4d1a7" (UID: "0f676ce4-e116-4b87-af86-4f6f15b4d1a7"). InnerVolumeSpecName "kube-api-access-m95nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.028391 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0f676ce4-e116-4b87-af86-4f6f15b4d1a7" (UID: "0f676ce4-e116-4b87-af86-4f6f15b4d1a7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.029681 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-scripts" (OuterVolumeSpecName: "scripts") pod "0f676ce4-e116-4b87-af86-4f6f15b4d1a7" (UID: "0f676ce4-e116-4b87-af86-4f6f15b4d1a7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.041944 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0f676ce4-e116-4b87-af86-4f6f15b4d1a7" (UID: "0f676ce4-e116-4b87-af86-4f6f15b4d1a7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.087665 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f676ce4-e116-4b87-af86-4f6f15b4d1a7" (UID: "0f676ce4-e116-4b87-af86-4f6f15b4d1a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.114618 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m95nr\" (UniqueName: \"kubernetes.io/projected/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-kube-api-access-m95nr\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.114644 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.114657 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.114667 4945 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.114678 4945 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.114690 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f676ce4-e116-4b87-af86-4f6f15b4d1a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.401640 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1d8f820e-0157-4bc4-b675-a96d5a704c07","Type":"ContainerStarted","Data":"517324acfb733ecb365806beb51c666524330fc181b0083d5964ef1bbd2170f0"} Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.404236 4945 generic.go:334] "Generic (PLEG): container finished" podID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerID="7231370f72f6275dee22622f7f4047a218db57cc6a8b664e14a0f26dc28816ae" exitCode=0 Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.404300 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f676ce4-e116-4b87-af86-4f6f15b4d1a7","Type":"ContainerDied","Data":"7231370f72f6275dee22622f7f4047a218db57cc6a8b664e14a0f26dc28816ae"} Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.404339 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.404387 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0f676ce4-e116-4b87-af86-4f6f15b4d1a7","Type":"ContainerDied","Data":"cddf4c530c79d4b2c7febab41e341bb79ded42fc14928d8c9c4fa39368c690d4"} Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.404417 4945 scope.go:117] "RemoveContainer" containerID="a0fcb246e05927318cc38aa7691171d0a1838495d01b55511664f768fba2b932" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.436707 4945 scope.go:117] "RemoveContainer" containerID="7f25ebf9f5fd2c152172b51c15c9714c4c92db591d646b8066759f439b65d585" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.443836 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.815122186 podStartE2EDuration="8.443807984s" podCreationTimestamp="2026-01-09 01:05:26 +0000 UTC" firstStartedPulling="2026-01-09 01:05:27.537764003 +0000 UTC m=+6597.848922949" lastFinishedPulling="2026-01-09 01:05:33.166449801 +0000 UTC m=+6603.477608747" observedRunningTime="2026-01-09 01:05:34.421665159 +0000 UTC m=+6604.732824105" watchObservedRunningTime="2026-01-09 01:05:34.443807984 +0000 UTC m=+6604.754966940" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.469659 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.482323 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.491497 4945 scope.go:117] "RemoveContainer" containerID="7231370f72f6275dee22622f7f4047a218db57cc6a8b664e14a0f26dc28816ae" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.507787 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 01:05:34 crc kubenswrapper[4945]: E0109 01:05:34.508214 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerName="sg-core" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.508235 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerName="sg-core" Jan 09 01:05:34 crc kubenswrapper[4945]: E0109 01:05:34.508250 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerName="proxy-httpd" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.508257 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerName="proxy-httpd" Jan 09 01:05:34 crc kubenswrapper[4945]: E0109 01:05:34.508281 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerName="ceilometer-notification-agent" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.508288 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerName="ceilometer-notification-agent" Jan 09 01:05:34 crc kubenswrapper[4945]: E0109 01:05:34.508295 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerName="ceilometer-central-agent" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.508301 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerName="ceilometer-central-agent" Jan 09 
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.508498 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerName="ceilometer-central-agent"
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.508523 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerName="proxy-httpd"
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.508533 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerName="sg-core"
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.508548 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" containerName="ceilometer-notification-agent"
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.510515 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.513298 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.513548 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.517334 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.546709 4945 scope.go:117] "RemoveContainer" containerID="118d7d8ac2fe6d40895a81b7d48ef5cdeacdf69c8ab3e850658c160435bdd80d"
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.580428 4945 scope.go:117] "RemoveContainer" containerID="a0fcb246e05927318cc38aa7691171d0a1838495d01b55511664f768fba2b932"
Jan 09 01:05:34 crc kubenswrapper[4945]: E0109 01:05:34.582264 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0fcb246e05927318cc38aa7691171d0a1838495d01b55511664f768fba2b932\": container with ID starting with a0fcb246e05927318cc38aa7691171d0a1838495d01b55511664f768fba2b932 not found: ID does not exist" containerID="a0fcb246e05927318cc38aa7691171d0a1838495d01b55511664f768fba2b932"
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.582313 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fcb246e05927318cc38aa7691171d0a1838495d01b55511664f768fba2b932"} err="failed to get container status \"a0fcb246e05927318cc38aa7691171d0a1838495d01b55511664f768fba2b932\": rpc error: code = NotFound desc = could not find container \"a0fcb246e05927318cc38aa7691171d0a1838495d01b55511664f768fba2b932\": container with ID starting with a0fcb246e05927318cc38aa7691171d0a1838495d01b55511664f768fba2b932 not found: ID does not exist"
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.582340 4945 scope.go:117] "RemoveContainer" containerID="7f25ebf9f5fd2c152172b51c15c9714c4c92db591d646b8066759f439b65d585"
Jan 09 01:05:34 crc kubenswrapper[4945]: E0109 01:05:34.582949 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f25ebf9f5fd2c152172b51c15c9714c4c92db591d646b8066759f439b65d585\": container with ID starting with 7f25ebf9f5fd2c152172b51c15c9714c4c92db591d646b8066759f439b65d585 not found: ID does not exist" containerID="7f25ebf9f5fd2c152172b51c15c9714c4c92db591d646b8066759f439b65d585"
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.582979 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f25ebf9f5fd2c152172b51c15c9714c4c92db591d646b8066759f439b65d585"} err="failed to get container status \"7f25ebf9f5fd2c152172b51c15c9714c4c92db591d646b8066759f439b65d585\": rpc error: code = NotFound desc = could not find container \"7f25ebf9f5fd2c152172b51c15c9714c4c92db591d646b8066759f439b65d585\": container with ID starting with 7f25ebf9f5fd2c152172b51c15c9714c4c92db591d646b8066759f439b65d585 not found: ID does not exist"
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.583020 4945 scope.go:117] "RemoveContainer" containerID="7231370f72f6275dee22622f7f4047a218db57cc6a8b664e14a0f26dc28816ae"
Jan 09 01:05:34 crc kubenswrapper[4945]: E0109 01:05:34.583467 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7231370f72f6275dee22622f7f4047a218db57cc6a8b664e14a0f26dc28816ae\": container with ID starting with 7231370f72f6275dee22622f7f4047a218db57cc6a8b664e14a0f26dc28816ae not found: ID does not exist" containerID="7231370f72f6275dee22622f7f4047a218db57cc6a8b664e14a0f26dc28816ae"
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.583488 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7231370f72f6275dee22622f7f4047a218db57cc6a8b664e14a0f26dc28816ae"} err="failed to get container status \"7231370f72f6275dee22622f7f4047a218db57cc6a8b664e14a0f26dc28816ae\": rpc error: code = NotFound desc = could not find container \"7231370f72f6275dee22622f7f4047a218db57cc6a8b664e14a0f26dc28816ae\": container with ID starting with 7231370f72f6275dee22622f7f4047a218db57cc6a8b664e14a0f26dc28816ae not found: ID does not exist"
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.583502 4945 scope.go:117] "RemoveContainer" containerID="118d7d8ac2fe6d40895a81b7d48ef5cdeacdf69c8ab3e850658c160435bdd80d"
Jan 09 01:05:34 crc kubenswrapper[4945]: E0109 01:05:34.583756 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"118d7d8ac2fe6d40895a81b7d48ef5cdeacdf69c8ab3e850658c160435bdd80d\": container with ID starting with 118d7d8ac2fe6d40895a81b7d48ef5cdeacdf69c8ab3e850658c160435bdd80d not found: ID does not exist" containerID="118d7d8ac2fe6d40895a81b7d48ef5cdeacdf69c8ab3e850658c160435bdd80d"
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.583775 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"118d7d8ac2fe6d40895a81b7d48ef5cdeacdf69c8ab3e850658c160435bdd80d"} err="failed to get container status \"118d7d8ac2fe6d40895a81b7d48ef5cdeacdf69c8ab3e850658c160435bdd80d\": rpc error: code = NotFound desc = could not find container \"118d7d8ac2fe6d40895a81b7d48ef5cdeacdf69c8ab3e850658c160435bdd80d\": container with ID starting with 118d7d8ac2fe6d40895a81b7d48ef5cdeacdf69c8ab3e850658c160435bdd80d not found: ID does not exist"
Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.627709 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.627963 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-run-httpd\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.628182 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-config-data\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.628235 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldsbz\" (UniqueName: \"kubernetes.io/projected/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-kube-api-access-ldsbz\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.628293 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-scripts\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.628505 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-log-httpd\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.730616 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-log-httpd\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.730691 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.730736 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.730795 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-run-httpd\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 
09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.730852 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-config-data\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.730878 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldsbz\" (UniqueName: \"kubernetes.io/projected/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-kube-api-access-ldsbz\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.730897 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-scripts\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.731221 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-log-httpd\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.731400 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-run-httpd\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.735808 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-config-data\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.735978 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-scripts\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.736456 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.738445 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.750756 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldsbz\" (UniqueName: \"kubernetes.io/projected/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-kube-api-access-ldsbz\") pod \"ceilometer-0\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " pod="openstack/ceilometer-0" Jan 09 01:05:34 crc kubenswrapper[4945]: I0109 01:05:34.853046 4945 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 01:05:35 crc kubenswrapper[4945]: I0109 01:05:35.395076 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 01:05:35 crc kubenswrapper[4945]: W0109 01:05:35.396257 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod363b5f94_e0d4_4426_9c36_c3f7c4b7f7df.slice/crio-eaa7396caf744a5c8291030cc6ecab3aefbc0fdde4d68dc2cc01529528b3aa0b WatchSource:0}: Error finding container eaa7396caf744a5c8291030cc6ecab3aefbc0fdde4d68dc2cc01529528b3aa0b: Status 404 returned error can't find the container with id eaa7396caf744a5c8291030cc6ecab3aefbc0fdde4d68dc2cc01529528b3aa0b Jan 09 01:05:35 crc kubenswrapper[4945]: I0109 01:05:35.457845 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df","Type":"ContainerStarted","Data":"eaa7396caf744a5c8291030cc6ecab3aefbc0fdde4d68dc2cc01529528b3aa0b"} Jan 09 01:05:36 crc kubenswrapper[4945]: I0109 01:05:36.014815 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f676ce4-e116-4b87-af86-4f6f15b4d1a7" path="/var/lib/kubelet/pods/0f676ce4-e116-4b87-af86-4f6f15b4d1a7/volumes" Jan 09 01:05:37 crc kubenswrapper[4945]: I0109 01:05:37.480961 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df","Type":"ContainerStarted","Data":"7ca13dcedb8bdbd70ccf2d3829aa9aa21ad63569f27c95a7c29e816347e4e30f"} Jan 09 01:05:38 crc kubenswrapper[4945]: I0109 01:05:38.074705 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n2c85"] Jan 09 01:05:38 crc kubenswrapper[4945]: I0109 01:05:38.084946 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n2c85"] Jan 09 01:05:38 crc kubenswrapper[4945]: I0109 01:05:38.497280 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df","Type":"ContainerStarted","Data":"6607fb0a8993d67d39ba2f17cfd925fa76b668a63bbb578f34f473955b6e4dff"} Jan 09 01:05:38 crc kubenswrapper[4945]: I0109 01:05:38.497329 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df","Type":"ContainerStarted","Data":"cbce05960b5e82d59465ade8ded004258da688f04be0cd2b41ad87e2fa97e7b3"} Jan 09 01:05:39 crc kubenswrapper[4945]: I0109 01:05:39.034835 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-cgs5h"] Jan 09 01:05:39 crc kubenswrapper[4945]: I0109 01:05:39.045746 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-cgs5h"] Jan 09 01:05:40 crc kubenswrapper[4945]: I0109 01:05:40.016280 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65c2c0a7-3d97-4a52-b029-f3a64c96a68c" path="/var/lib/kubelet/pods/65c2c0a7-3d97-4a52-b029-f3a64c96a68c/volumes" Jan 09 01:05:40 crc kubenswrapper[4945]: I0109 01:05:40.017638 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cd86543-8a26-480f-9ce7-f74a2d3da10c" path="/var/lib/kubelet/pods/8cd86543-8a26-480f-9ce7-f74a2d3da10c/volumes" Jan 09 01:05:40 crc kubenswrapper[4945]: I0109 01:05:40.519911 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df","Type":"ContainerStarted","Data":"9e4cbafef575aaad1645abaa50887f09405fede1e20bba404a45abe42e22a8eb"} Jan 09 01:05:40 crc kubenswrapper[4945]: I0109 01:05:40.520111 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 01:05:40 crc kubenswrapper[4945]: I0109 01:05:40.552435 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.320945985 podStartE2EDuration="6.552412531s" podCreationTimestamp="2026-01-09 01:05:34 +0000 UTC" firstStartedPulling="2026-01-09 01:05:35.401614823 +0000 UTC m=+6605.712773769" lastFinishedPulling="2026-01-09 01:05:39.633081369 +0000 UTC m=+6609.944240315" observedRunningTime="2026-01-09 01:05:40.546593068 +0000 UTC m=+6610.857752024" watchObservedRunningTime="2026-01-09 01:05:40.552412531 +0000 UTC m=+6610.863571477" Jan 09 01:05:40 crc kubenswrapper[4945]: I0109 01:05:40.796638 4945 scope.go:117] "RemoveContainer" containerID="e48f67d17b31f692516d3f65e326421d8c10523621bb9891f19e07382787e384" Jan 09 01:05:40 crc kubenswrapper[4945]: I0109 01:05:40.824467 4945 scope.go:117] "RemoveContainer" containerID="4e4bdf4057ca4569333923b7a5b6ca22bfd6bdf4b592a44390d815cab57e12a2" Jan 09 01:05:40 crc kubenswrapper[4945]: I0109 01:05:40.899426 4945 scope.go:117] "RemoveContainer" containerID="6f4460bd69d58fb0645f055b1676f502c089957212f2edc9895fca63e9e46b07" Jan 09 01:05:40 crc kubenswrapper[4945]: I0109 01:05:40.933595 4945 scope.go:117] "RemoveContainer" containerID="055e64eb4a4c5ce1e53035e51e26c3e829a1b7cb58d9c8fd2aca834c9bb8ef6d" Jan 09 01:05:40 crc kubenswrapper[4945]: I0109 01:05:40.984250 4945 scope.go:117] "RemoveContainer" containerID="c4c682af01ebe842e5031f26cc47aa0aa1e943988b431fb24ee27b51dab605c1" Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.000818 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:05:41 crc kubenswrapper[4945]: E0109 01:05:41.001238 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.038335 4945 scope.go:117] "RemoveContainer" containerID="5ef4a4794772e7f294b7a148a6ea2f5798bc54bb5108d36b20693886bb90542a" Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.092729 4945 scope.go:117] "RemoveContainer" containerID="bfde8956adc8fc5147deb3a2c814a8d5124d17e932e110702310677e5ccd5ecf" Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.112665 4945 scope.go:117] "RemoveContainer" containerID="99b8428d154d57c509cee1d9fa42733ac8ab887dd634b62dacbd91d2385e2f34" Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.147832 4945 scope.go:117] "RemoveContainer" containerID="7412d5bba03cfde89f567c48c934a9136919eaac82eb94284d8abd8bcb728ab5" Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.345238 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-488w9"] Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.346618 4945 util.go:30] "No sandbox for pod can be found. 
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.346618 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-488w9"
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.373262 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-488w9"]
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.468437 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-243c-account-create-update-2tqbr"]
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.470089 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-243c-account-create-update-2tqbr"
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.489524 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b18f9a-4b09-4370-822b-f7036ce59f70-operator-scripts\") pod \"manila-db-create-488w9\" (UID: \"86b18f9a-4b09-4370-822b-f7036ce59f70\") " pod="openstack/manila-db-create-488w9"
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.490786 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp8dr\" (UniqueName: \"kubernetes.io/projected/86b18f9a-4b09-4370-822b-f7036ce59f70-kube-api-access-kp8dr\") pod \"manila-db-create-488w9\" (UID: \"86b18f9a-4b09-4370-822b-f7036ce59f70\") " pod="openstack/manila-db-create-488w9"
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.495447 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret"
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.513139 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-243c-account-create-update-2tqbr"]
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.598211 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp8dr\" (UniqueName: \"kubernetes.io/projected/86b18f9a-4b09-4370-822b-f7036ce59f70-kube-api-access-kp8dr\") pod \"manila-db-create-488w9\" (UID: \"86b18f9a-4b09-4370-822b-f7036ce59f70\") " pod="openstack/manila-db-create-488w9"
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.598292 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd7jt\" (UniqueName: \"kubernetes.io/projected/5e839fcf-036c-420a-b4b4-d71d554fd7e2-kube-api-access-vd7jt\") pod \"manila-243c-account-create-update-2tqbr\" (UID: \"5e839fcf-036c-420a-b4b4-d71d554fd7e2\") " pod="openstack/manila-243c-account-create-update-2tqbr"
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.598482 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b18f9a-4b09-4370-822b-f7036ce59f70-operator-scripts\") pod \"manila-db-create-488w9\" (UID: \"86b18f9a-4b09-4370-822b-f7036ce59f70\") " pod="openstack/manila-db-create-488w9"
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.598568 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e839fcf-036c-420a-b4b4-d71d554fd7e2-operator-scripts\") pod \"manila-243c-account-create-update-2tqbr\" (UID: \"5e839fcf-036c-420a-b4b4-d71d554fd7e2\") " pod="openstack/manila-243c-account-create-update-2tqbr"
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.602191 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b18f9a-4b09-4370-822b-f7036ce59f70-operator-scripts\") pod \"manila-db-create-488w9\" (UID: \"86b18f9a-4b09-4370-822b-f7036ce59f70\") " pod="openstack/manila-db-create-488w9"
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.671776 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp8dr\" (UniqueName: \"kubernetes.io/projected/86b18f9a-4b09-4370-822b-f7036ce59f70-kube-api-access-kp8dr\") pod \"manila-db-create-488w9\" (UID: \"86b18f9a-4b09-4370-822b-f7036ce59f70\") " pod="openstack/manila-db-create-488w9"
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.683612 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-488w9"
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.701853 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e839fcf-036c-420a-b4b4-d71d554fd7e2-operator-scripts\") pod \"manila-243c-account-create-update-2tqbr\" (UID: \"5e839fcf-036c-420a-b4b4-d71d554fd7e2\") " pod="openstack/manila-243c-account-create-update-2tqbr"
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.701926 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd7jt\" (UniqueName: \"kubernetes.io/projected/5e839fcf-036c-420a-b4b4-d71d554fd7e2-kube-api-access-vd7jt\") pod \"manila-243c-account-create-update-2tqbr\" (UID: \"5e839fcf-036c-420a-b4b4-d71d554fd7e2\") " pod="openstack/manila-243c-account-create-update-2tqbr"
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.703707 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e839fcf-036c-420a-b4b4-d71d554fd7e2-operator-scripts\") pod \"manila-243c-account-create-update-2tqbr\" (UID: \"5e839fcf-036c-420a-b4b4-d71d554fd7e2\") " pod="openstack/manila-243c-account-create-update-2tqbr"
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.724764 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd7jt\" (UniqueName: \"kubernetes.io/projected/5e839fcf-036c-420a-b4b4-d71d554fd7e2-kube-api-access-vd7jt\") pod \"manila-243c-account-create-update-2tqbr\" (UID: \"5e839fcf-036c-420a-b4b4-d71d554fd7e2\") " pod="openstack/manila-243c-account-create-update-2tqbr"
Jan 09 01:05:41 crc kubenswrapper[4945]: I0109 01:05:41.798302 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-243c-account-create-update-2tqbr"
Jan 09 01:05:42 crc kubenswrapper[4945]: I0109 01:05:42.594835 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-488w9"]
Jan 09 01:05:42 crc kubenswrapper[4945]: I0109 01:05:42.700753 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-243c-account-create-update-2tqbr"]
Jan 09 01:05:43 crc kubenswrapper[4945]: I0109 01:05:43.561672 4945 generic.go:334] "Generic (PLEG): container finished" podID="86b18f9a-4b09-4370-822b-f7036ce59f70" containerID="d07a8c05d394b3d653e5b0e7c466605d842cf70cfef827248a86851d7e365bdc" exitCode=0
Jan 09 01:05:43 crc kubenswrapper[4945]: I0109 01:05:43.561730 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-488w9" event={"ID":"86b18f9a-4b09-4370-822b-f7036ce59f70","Type":"ContainerDied","Data":"d07a8c05d394b3d653e5b0e7c466605d842cf70cfef827248a86851d7e365bdc"}
Jan 09 01:05:43 crc kubenswrapper[4945]: I0109 01:05:43.562640 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-488w9" event={"ID":"86b18f9a-4b09-4370-822b-f7036ce59f70","Type":"ContainerStarted","Data":"44b425635b0ac75a042a04e6e5405a1785c76bf63c49e9af5cc51a1fa0356348"}
Jan 09 01:05:43 crc kubenswrapper[4945]: I0109 01:05:43.563961 4945 generic.go:334] "Generic (PLEG): container finished" podID="5e839fcf-036c-420a-b4b4-d71d554fd7e2" containerID="71080c2dbc035bf08e9b21e66dd23f8e9df061e8848ccfc8328e096a1567d14d" exitCode=0
Jan 09 01:05:43 crc kubenswrapper[4945]: I0109 01:05:43.564038 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-243c-account-create-update-2tqbr" event={"ID":"5e839fcf-036c-420a-b4b4-d71d554fd7e2","Type":"ContainerDied","Data":"71080c2dbc035bf08e9b21e66dd23f8e9df061e8848ccfc8328e096a1567d14d"}
Jan 09 01:05:43 crc kubenswrapper[4945]: I0109 01:05:43.564078 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-243c-account-create-update-2tqbr" event={"ID":"5e839fcf-036c-420a-b4b4-d71d554fd7e2","Type":"ContainerStarted","Data":"2c93d9dd418520848581ede883011d19a20dcb5dfa7fe540ccf69e3b18ea1107"}
Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.094426 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-488w9"
Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.106556 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-243c-account-create-update-2tqbr"
Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.202612 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd7jt\" (UniqueName: \"kubernetes.io/projected/5e839fcf-036c-420a-b4b4-d71d554fd7e2-kube-api-access-vd7jt\") pod \"5e839fcf-036c-420a-b4b4-d71d554fd7e2\" (UID: \"5e839fcf-036c-420a-b4b4-d71d554fd7e2\") "
Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.202659 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp8dr\" (UniqueName: \"kubernetes.io/projected/86b18f9a-4b09-4370-822b-f7036ce59f70-kube-api-access-kp8dr\") pod \"86b18f9a-4b09-4370-822b-f7036ce59f70\" (UID: \"86b18f9a-4b09-4370-822b-f7036ce59f70\") "
Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.202788 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e839fcf-036c-420a-b4b4-d71d554fd7e2-operator-scripts\") pod \"5e839fcf-036c-420a-b4b4-d71d554fd7e2\" (UID: \"5e839fcf-036c-420a-b4b4-d71d554fd7e2\") "
Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.203051 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b18f9a-4b09-4370-822b-f7036ce59f70-operator-scripts\") pod \"86b18f9a-4b09-4370-822b-f7036ce59f70\" (UID: \"86b18f9a-4b09-4370-822b-f7036ce59f70\") "
Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.203980 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e839fcf-036c-420a-b4b4-d71d554fd7e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e839fcf-036c-420a-b4b4-d71d554fd7e2" (UID: "5e839fcf-036c-420a-b4b4-d71d554fd7e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.204129 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86b18f9a-4b09-4370-822b-f7036ce59f70-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86b18f9a-4b09-4370-822b-f7036ce59f70" (UID: "86b18f9a-4b09-4370-822b-f7036ce59f70"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.208146 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e839fcf-036c-420a-b4b4-d71d554fd7e2-kube-api-access-vd7jt" (OuterVolumeSpecName: "kube-api-access-vd7jt") pod "5e839fcf-036c-420a-b4b4-d71d554fd7e2" (UID: "5e839fcf-036c-420a-b4b4-d71d554fd7e2"). InnerVolumeSpecName "kube-api-access-vd7jt". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.306031 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp8dr\" (UniqueName: \"kubernetes.io/projected/86b18f9a-4b09-4370-822b-f7036ce59f70-kube-api-access-kp8dr\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.306077 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e839fcf-036c-420a-b4b4-d71d554fd7e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.306090 4945 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b18f9a-4b09-4370-822b-f7036ce59f70-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.306105 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd7jt\" (UniqueName: \"kubernetes.io/projected/5e839fcf-036c-420a-b4b4-d71d554fd7e2-kube-api-access-vd7jt\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.583515 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-243c-account-create-update-2tqbr" Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.583558 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-243c-account-create-update-2tqbr" event={"ID":"5e839fcf-036c-420a-b4b4-d71d554fd7e2","Type":"ContainerDied","Data":"2c93d9dd418520848581ede883011d19a20dcb5dfa7fe540ccf69e3b18ea1107"} Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.583608 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c93d9dd418520848581ede883011d19a20dcb5dfa7fe540ccf69e3b18ea1107" Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.585428 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-488w9" event={"ID":"86b18f9a-4b09-4370-822b-f7036ce59f70","Type":"ContainerDied","Data":"44b425635b0ac75a042a04e6e5405a1785c76bf63c49e9af5cc51a1fa0356348"} Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.585470 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44b425635b0ac75a042a04e6e5405a1785c76bf63c49e9af5cc51a1fa0356348" Jan 09 01:05:45 crc kubenswrapper[4945]: I0109 01:05:45.585530 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-488w9" Jan 09 01:05:46 crc kubenswrapper[4945]: I0109 01:05:46.878698 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-grnpk"] Jan 09 01:05:46 crc kubenswrapper[4945]: E0109 01:05:46.879579 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e839fcf-036c-420a-b4b4-d71d554fd7e2" containerName="mariadb-account-create-update" Jan 09 01:05:46 crc kubenswrapper[4945]: I0109 01:05:46.879597 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e839fcf-036c-420a-b4b4-d71d554fd7e2" containerName="mariadb-account-create-update" Jan 09 01:05:46 crc kubenswrapper[4945]: E0109 01:05:46.879629 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b18f9a-4b09-4370-822b-f7036ce59f70" containerName="mariadb-database-create" Jan 09 01:05:46 crc kubenswrapper[4945]: I0109 01:05:46.879637 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b18f9a-4b09-4370-822b-f7036ce59f70" containerName="mariadb-database-create" Jan 09 01:05:46 crc kubenswrapper[4945]: I0109 01:05:46.879891 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e839fcf-036c-420a-b4b4-d71d554fd7e2" containerName="mariadb-account-create-update" Jan 09 01:05:46 crc kubenswrapper[4945]: I0109 01:05:46.879906 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="86b18f9a-4b09-4370-822b-f7036ce59f70" containerName="mariadb-database-create" Jan 09 01:05:46 crc kubenswrapper[4945]: I0109 01:05:46.881042 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-grnpk" Jan 09 01:05:46 crc kubenswrapper[4945]: I0109 01:05:46.887632 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-cxlvc" Jan 09 01:05:46 crc kubenswrapper[4945]: I0109 01:05:46.888763 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 09 01:05:46 crc kubenswrapper[4945]: I0109 01:05:46.895800 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-grnpk"] Jan 09 01:05:46 crc kubenswrapper[4945]: I0109 01:05:46.945546 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqbqq\" (UniqueName: \"kubernetes.io/projected/7608530a-71f8-40ed-b897-a55a6a23b021-kube-api-access-zqbqq\") pod \"manila-db-sync-grnpk\" (UID: \"7608530a-71f8-40ed-b897-a55a6a23b021\") " pod="openstack/manila-db-sync-grnpk" Jan 09 01:05:46 crc kubenswrapper[4945]: I0109 01:05:46.945772 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-combined-ca-bundle\") pod \"manila-db-sync-grnpk\" (UID: \"7608530a-71f8-40ed-b897-a55a6a23b021\") " pod="openstack/manila-db-sync-grnpk" Jan 09 01:05:46 crc kubenswrapper[4945]: I0109 01:05:46.945816 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-config-data\") pod \"manila-db-sync-grnpk\" (UID: \"7608530a-71f8-40ed-b897-a55a6a23b021\") " pod="openstack/manila-db-sync-grnpk" Jan 09 01:05:46 crc kubenswrapper[4945]: I0109 01:05:46.945874 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: 
\"kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-job-config-data\") pod \"manila-db-sync-grnpk\" (UID: \"7608530a-71f8-40ed-b897-a55a6a23b021\") " pod="openstack/manila-db-sync-grnpk" Jan 09 01:05:47 crc kubenswrapper[4945]: I0109 01:05:47.047249 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-combined-ca-bundle\") pod \"manila-db-sync-grnpk\" (UID: \"7608530a-71f8-40ed-b897-a55a6a23b021\") " pod="openstack/manila-db-sync-grnpk" Jan 09 01:05:47 crc kubenswrapper[4945]: I0109 01:05:47.047309 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-config-data\") pod \"manila-db-sync-grnpk\" (UID: \"7608530a-71f8-40ed-b897-a55a6a23b021\") " pod="openstack/manila-db-sync-grnpk" Jan 09 01:05:47 crc kubenswrapper[4945]: I0109 01:05:47.047417 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-job-config-data\") pod \"manila-db-sync-grnpk\" (UID: \"7608530a-71f8-40ed-b897-a55a6a23b021\") " pod="openstack/manila-db-sync-grnpk" Jan 09 01:05:47 crc kubenswrapper[4945]: I0109 01:05:47.047446 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqbqq\" (UniqueName: \"kubernetes.io/projected/7608530a-71f8-40ed-b897-a55a6a23b021-kube-api-access-zqbqq\") pod \"manila-db-sync-grnpk\" (UID: \"7608530a-71f8-40ed-b897-a55a6a23b021\") " pod="openstack/manila-db-sync-grnpk" Jan 09 01:05:47 crc kubenswrapper[4945]: I0109 01:05:47.055013 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-combined-ca-bundle\") pod \"manila-db-sync-grnpk\" (UID: \"7608530a-71f8-40ed-b897-a55a6a23b021\") " pod="openstack/manila-db-sync-grnpk" Jan 09 01:05:47 crc kubenswrapper[4945]: I0109 01:05:47.058800 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-config-data\") pod \"manila-db-sync-grnpk\" (UID: \"7608530a-71f8-40ed-b897-a55a6a23b021\") " pod="openstack/manila-db-sync-grnpk" Jan 09 01:05:47 crc kubenswrapper[4945]: I0109 01:05:47.059575 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-job-config-data\") pod \"manila-db-sync-grnpk\" (UID: \"7608530a-71f8-40ed-b897-a55a6a23b021\") " pod="openstack/manila-db-sync-grnpk" Jan 09 01:05:47 crc kubenswrapper[4945]: I0109 01:05:47.094622 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqbqq\" (UniqueName: \"kubernetes.io/projected/7608530a-71f8-40ed-b897-a55a6a23b021-kube-api-access-zqbqq\") pod \"manila-db-sync-grnpk\" (UID: \"7608530a-71f8-40ed-b897-a55a6a23b021\") " pod="openstack/manila-db-sync-grnpk" Jan 09 01:05:47 crc kubenswrapper[4945]: I0109 01:05:47.204092 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-grnpk" Jan 09 01:05:48 crc kubenswrapper[4945]: W0109 01:05:48.001850 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7608530a_71f8_40ed_b897_a55a6a23b021.slice/crio-1c01aa941576391426162eaf9e66eff2e1666159e75d26a7694a39e16e5f1770 WatchSource:0}: Error finding container 1c01aa941576391426162eaf9e66eff2e1666159e75d26a7694a39e16e5f1770: Status 404 returned error can't find the container with id 1c01aa941576391426162eaf9e66eff2e1666159e75d26a7694a39e16e5f1770 Jan 09 01:05:48 crc kubenswrapper[4945]: I0109 01:05:48.018249 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-grnpk"] Jan 09 01:05:48 crc kubenswrapper[4945]: I0109 01:05:48.625585 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-grnpk" event={"ID":"7608530a-71f8-40ed-b897-a55a6a23b021","Type":"ContainerStarted","Data":"1c01aa941576391426162eaf9e66eff2e1666159e75d26a7694a39e16e5f1770"} Jan 09 01:05:52 crc kubenswrapper[4945]: I0109 01:05:52.000988 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:05:52 crc kubenswrapper[4945]: E0109 01:05:52.001913 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:05:53 crc kubenswrapper[4945]: I0109 01:05:53.048942 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-mjfmf"] Jan 09 01:05:53 crc kubenswrapper[4945]: I0109 01:05:53.066437 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-mjfmf"] Jan 09 01:05:54 crc kubenswrapper[4945]: I0109 01:05:54.025095 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc970906-6f53-4c9c-931f-ba1bb6758411" path="/var/lib/kubelet/pods/fc970906-6f53-4c9c-931f-ba1bb6758411/volumes" Jan 09 01:05:54 crc kubenswrapper[4945]: I0109 01:05:54.722278 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-grnpk" event={"ID":"7608530a-71f8-40ed-b897-a55a6a23b021","Type":"ContainerStarted","Data":"9737a6da3c2441ef90696522bd6b18d46a553b34640ee9ef51f8dde20813d3c6"} Jan 09 01:05:54 crc kubenswrapper[4945]: I0109 01:05:54.745435 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-grnpk" podStartSLOduration=3.5581070390000002 podStartE2EDuration="8.745412898s" podCreationTimestamp="2026-01-09 01:05:46 +0000 UTC" firstStartedPulling="2026-01-09 01:05:48.004091729 +0000 UTC m=+6618.315250675" lastFinishedPulling="2026-01-09 01:05:53.191397588 +0000 UTC m=+6623.502556534" observedRunningTime="2026-01-09 01:05:54.741920232 +0000 UTC m=+6625.053079178" watchObservedRunningTime="2026-01-09 01:05:54.745412898 +0000 UTC m=+6625.056571854" Jan 09 01:05:56 crc kubenswrapper[4945]: I0109 01:05:56.741372 4945 generic.go:334] "Generic (PLEG): container finished" podID="7608530a-71f8-40ed-b897-a55a6a23b021" containerID="9737a6da3c2441ef90696522bd6b18d46a553b34640ee9ef51f8dde20813d3c6" exitCode=0 Jan 09 01:05:56 crc kubenswrapper[4945]: I0109 
01:05:56.741473 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-grnpk" event={"ID":"7608530a-71f8-40ed-b897-a55a6a23b021","Type":"ContainerDied","Data":"9737a6da3c2441ef90696522bd6b18d46a553b34640ee9ef51f8dde20813d3c6"} Jan 09 01:05:58 crc kubenswrapper[4945]: I0109 01:05:58.226417 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-grnpk" Jan 09 01:05:58 crc kubenswrapper[4945]: I0109 01:05:58.341690 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-config-data\") pod \"7608530a-71f8-40ed-b897-a55a6a23b021\" (UID: \"7608530a-71f8-40ed-b897-a55a6a23b021\") " Jan 09 01:05:58 crc kubenswrapper[4945]: I0109 01:05:58.341752 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-job-config-data\") pod \"7608530a-71f8-40ed-b897-a55a6a23b021\" (UID: \"7608530a-71f8-40ed-b897-a55a6a23b021\") " Jan 09 01:05:58 crc kubenswrapper[4945]: I0109 01:05:58.341902 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-combined-ca-bundle\") pod \"7608530a-71f8-40ed-b897-a55a6a23b021\" (UID: \"7608530a-71f8-40ed-b897-a55a6a23b021\") " Jan 09 01:05:58 crc kubenswrapper[4945]: I0109 01:05:58.341981 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqbqq\" (UniqueName: \"kubernetes.io/projected/7608530a-71f8-40ed-b897-a55a6a23b021-kube-api-access-zqbqq\") pod \"7608530a-71f8-40ed-b897-a55a6a23b021\" (UID: \"7608530a-71f8-40ed-b897-a55a6a23b021\") " Jan 09 01:05:58 crc kubenswrapper[4945]: I0109 01:05:58.347381 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7608530a-71f8-40ed-b897-a55a6a23b021-kube-api-access-zqbqq" (OuterVolumeSpecName: "kube-api-access-zqbqq") pod "7608530a-71f8-40ed-b897-a55a6a23b021" (UID: "7608530a-71f8-40ed-b897-a55a6a23b021"). InnerVolumeSpecName "kube-api-access-zqbqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:05:58 crc kubenswrapper[4945]: I0109 01:05:58.354385 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "7608530a-71f8-40ed-b897-a55a6a23b021" (UID: "7608530a-71f8-40ed-b897-a55a6a23b021"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:05:58 crc kubenswrapper[4945]: I0109 01:05:58.356579 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-config-data" (OuterVolumeSpecName: "config-data") pod "7608530a-71f8-40ed-b897-a55a6a23b021" (UID: "7608530a-71f8-40ed-b897-a55a6a23b021"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:05:58 crc kubenswrapper[4945]: I0109 01:05:58.380244 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7608530a-71f8-40ed-b897-a55a6a23b021" (UID: "7608530a-71f8-40ed-b897-a55a6a23b021"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:05:58 crc kubenswrapper[4945]: I0109 01:05:58.444788 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:58 crc kubenswrapper[4945]: I0109 01:05:58.444836 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqbqq\" (UniqueName: \"kubernetes.io/projected/7608530a-71f8-40ed-b897-a55a6a23b021-kube-api-access-zqbqq\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:58 crc kubenswrapper[4945]: I0109 01:05:58.444851 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:58 crc kubenswrapper[4945]: I0109 01:05:58.444863 4945 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/7608530a-71f8-40ed-b897-a55a6a23b021-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:05:58 crc kubenswrapper[4945]: I0109 01:05:58.763553 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-grnpk" event={"ID":"7608530a-71f8-40ed-b897-a55a6a23b021","Type":"ContainerDied","Data":"1c01aa941576391426162eaf9e66eff2e1666159e75d26a7694a39e16e5f1770"} Jan 09 01:05:58 crc kubenswrapper[4945]: I0109 01:05:58.763596 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c01aa941576391426162eaf9e66eff2e1666159e75d26a7694a39e16e5f1770" Jan 09 01:05:58 crc kubenswrapper[4945]: I0109 01:05:58.763619 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-grnpk" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.146329 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 09 01:05:59 crc kubenswrapper[4945]: E0109 01:05:59.147355 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7608530a-71f8-40ed-b897-a55a6a23b021" containerName="manila-db-sync" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.147570 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7608530a-71f8-40ed-b897-a55a6a23b021" containerName="manila-db-sync" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.152463 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="7608530a-71f8-40ed-b897-a55a6a23b021" containerName="manila-db-sync" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.161243 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.169623 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.169903 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.170025 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-cxlvc" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.173532 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.207116 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.255424 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.258101 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.264840 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.266402 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f17ceb7-ed77-4912-aeaf-025c32f52c78-scripts\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.266494 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6j9l\" (UniqueName: \"kubernetes.io/projected/3f17ceb7-ed77-4912-aeaf-025c32f52c78-kube-api-access-b6j9l\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.266673 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f17ceb7-ed77-4912-aeaf-025c32f52c78-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.266706 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f17ceb7-ed77-4912-aeaf-025c32f52c78-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.266743 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f17ceb7-ed77-4912-aeaf-025c32f52c78-config-data\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.266777 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/3f17ceb7-ed77-4912-aeaf-025c32f52c78-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.281898 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59b9476d7c-64qv9"] Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.287186 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.296433 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.340143 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59b9476d7c-64qv9"] Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.373115 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-ovsdbserver-sb\") pod \"dnsmasq-dns-59b9476d7c-64qv9\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.373167 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/fcb36454-abb9-473c-a184-f7b89cc73f6b-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.373204 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb36454-abb9-473c-a184-f7b89cc73f6b-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.373243 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f17ceb7-ed77-4912-aeaf-025c32f52c78-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.373278 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f17ceb7-ed77-4912-aeaf-025c32f52c78-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.373305 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcb36454-abb9-473c-a184-f7b89cc73f6b-scripts\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.373330 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fcb36454-abb9-473c-a184-f7b89cc73f6b-ceph\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " 
pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.373356 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f17ceb7-ed77-4912-aeaf-025c32f52c78-config-data\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.373400 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f17ceb7-ed77-4912-aeaf-025c32f52c78-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.373452 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f17ceb7-ed77-4912-aeaf-025c32f52c78-scripts\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.373473 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-ovsdbserver-nb\") pod \"dnsmasq-dns-59b9476d7c-64qv9\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.373528 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sj89\" (UniqueName: \"kubernetes.io/projected/fcb36454-abb9-473c-a184-f7b89cc73f6b-kube-api-access-6sj89\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.373552 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58p9t\" (UniqueName: \"kubernetes.io/projected/26efb867-69f2-460f-93c1-902af82b7e4a-kube-api-access-58p9t\") pod \"dnsmasq-dns-59b9476d7c-64qv9\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.373587 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6j9l\" (UniqueName: \"kubernetes.io/projected/3f17ceb7-ed77-4912-aeaf-025c32f52c78-kube-api-access-b6j9l\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.373611 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcb36454-abb9-473c-a184-f7b89cc73f6b-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.373653 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcb36454-abb9-473c-a184-f7b89cc73f6b-config-data\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc 
kubenswrapper[4945]: I0109 01:05:59.373687 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcb36454-abb9-473c-a184-f7b89cc73f6b-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.373719 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-config\") pod \"dnsmasq-dns-59b9476d7c-64qv9\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.382509 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f17ceb7-ed77-4912-aeaf-025c32f52c78-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.386210 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f17ceb7-ed77-4912-aeaf-025c32f52c78-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.386696 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-dns-svc\") pod \"dnsmasq-dns-59b9476d7c-64qv9\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.389279 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f17ceb7-ed77-4912-aeaf-025c32f52c78-config-data\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.393614 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f17ceb7-ed77-4912-aeaf-025c32f52c78-scripts\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.396664 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.398632 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.399970 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f17ceb7-ed77-4912-aeaf-025c32f52c78-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.403901 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.430074 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6j9l\" (UniqueName: \"kubernetes.io/projected/3f17ceb7-ed77-4912-aeaf-025c32f52c78-kube-api-access-b6j9l\") pod \"manila-scheduler-0\" (UID: \"3f17ceb7-ed77-4912-aeaf-025c32f52c78\") " pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.437951 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.488781 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sj89\" (UniqueName: \"kubernetes.io/projected/fcb36454-abb9-473c-a184-f7b89cc73f6b-kube-api-access-6sj89\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.488832 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58p9t\" (UniqueName: \"kubernetes.io/projected/26efb867-69f2-460f-93c1-902af82b7e4a-kube-api-access-58p9t\") pod \"dnsmasq-dns-59b9476d7c-64qv9\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.488857 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db943e15-5363-4299-85fb-cd9b0805fb86-config-data\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.488889 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcb36454-abb9-473c-a184-f7b89cc73f6b-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.488914 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db943e15-5363-4299-85fb-cd9b0805fb86-logs\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.488958 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg6dp\" (UniqueName: \"kubernetes.io/projected/db943e15-5363-4299-85fb-cd9b0805fb86-kube-api-access-kg6dp\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.488978 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/fcb36454-abb9-473c-a184-f7b89cc73f6b-config-data\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.489128 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db943e15-5363-4299-85fb-cd9b0805fb86-scripts\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.489156 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcb36454-abb9-473c-a184-f7b89cc73f6b-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.489181 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-config\") pod \"dnsmasq-dns-59b9476d7c-64qv9\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.489206 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db943e15-5363-4299-85fb-cd9b0805fb86-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.489234 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db943e15-5363-4299-85fb-cd9b0805fb86-config-data-custom\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.489256 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db943e15-5363-4299-85fb-cd9b0805fb86-etc-machine-id\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.489298 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-dns-svc\") pod \"dnsmasq-dns-59b9476d7c-64qv9\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.489333 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-ovsdbserver-sb\") pod \"dnsmasq-dns-59b9476d7c-64qv9\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.489350 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/fcb36454-abb9-473c-a184-f7b89cc73f6b-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") 
" pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.489368 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb36454-abb9-473c-a184-f7b89cc73f6b-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.489394 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcb36454-abb9-473c-a184-f7b89cc73f6b-scripts\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.489409 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fcb36454-abb9-473c-a184-f7b89cc73f6b-ceph\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.489460 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-ovsdbserver-nb\") pod \"dnsmasq-dns-59b9476d7c-64qv9\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.491566 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-ovsdbserver-nb\") pod \"dnsmasq-dns-59b9476d7c-64qv9\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.492111 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fcb36454-abb9-473c-a184-f7b89cc73f6b-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.492637 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/fcb36454-abb9-473c-a184-f7b89cc73f6b-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.493546 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-dns-svc\") pod \"dnsmasq-dns-59b9476d7c-64qv9\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.493905 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-config\") pod \"dnsmasq-dns-59b9476d7c-64qv9\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.494434 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-ovsdbserver-sb\") pod \"dnsmasq-dns-59b9476d7c-64qv9\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.496520 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcb36454-abb9-473c-a184-f7b89cc73f6b-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.500541 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcb36454-abb9-473c-a184-f7b89cc73f6b-scripts\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.502092 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcb36454-abb9-473c-a184-f7b89cc73f6b-config-data\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.502503 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb36454-abb9-473c-a184-f7b89cc73f6b-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.503785 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fcb36454-abb9-473c-a184-f7b89cc73f6b-ceph\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.517232 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58p9t\" (UniqueName: \"kubernetes.io/projected/26efb867-69f2-460f-93c1-902af82b7e4a-kube-api-access-58p9t\") pod \"dnsmasq-dns-59b9476d7c-64qv9\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.518862 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sj89\" (UniqueName: \"kubernetes.io/projected/fcb36454-abb9-473c-a184-f7b89cc73f6b-kube-api-access-6sj89\") pod \"manila-share-share1-0\" (UID: \"fcb36454-abb9-473c-a184-f7b89cc73f6b\") " pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.529789 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.588474 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.593124 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db943e15-5363-4299-85fb-cd9b0805fb86-config-data\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.593233 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db943e15-5363-4299-85fb-cd9b0805fb86-logs\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.593262 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg6dp\" (UniqueName: \"kubernetes.io/projected/db943e15-5363-4299-85fb-cd9b0805fb86-kube-api-access-kg6dp\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.593283 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db943e15-5363-4299-85fb-cd9b0805fb86-scripts\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.593341 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db943e15-5363-4299-85fb-cd9b0805fb86-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.593369 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db943e15-5363-4299-85fb-cd9b0805fb86-config-data-custom\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.593389 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db943e15-5363-4299-85fb-cd9b0805fb86-etc-machine-id\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.593553 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db943e15-5363-4299-85fb-cd9b0805fb86-etc-machine-id\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.594529 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db943e15-5363-4299-85fb-cd9b0805fb86-logs\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.599629 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db943e15-5363-4299-85fb-cd9b0805fb86-config-data-custom\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " 
pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.601958 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db943e15-5363-4299-85fb-cd9b0805fb86-config-data\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.602312 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db943e15-5363-4299-85fb-cd9b0805fb86-scripts\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.606444 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db943e15-5363-4299-85fb-cd9b0805fb86-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.614036 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.616940 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg6dp\" (UniqueName: \"kubernetes.io/projected/db943e15-5363-4299-85fb-cd9b0805fb86-kube-api-access-kg6dp\") pod \"manila-api-0\" (UID: \"db943e15-5363-4299-85fb-cd9b0805fb86\") " pod="openstack/manila-api-0" Jan 09 01:05:59 crc kubenswrapper[4945]: I0109 01:05:59.719241 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 09 01:06:00 crc kubenswrapper[4945]: I0109 01:06:00.112274 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 09 01:06:00 crc kubenswrapper[4945]: W0109 01:06:00.128843 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f17ceb7_ed77_4912_aeaf_025c32f52c78.slice/crio-0b021ce45460a870c8794c389a5724bd9ea94c29a3f53c8ccc3e3ff3e0da9c81 WatchSource:0}: Error finding container 0b021ce45460a870c8794c389a5724bd9ea94c29a3f53c8ccc3e3ff3e0da9c81: Status 404 returned error can't find the container with id 0b021ce45460a870c8794c389a5724bd9ea94c29a3f53c8ccc3e3ff3e0da9c81 Jan 09 01:06:00 crc kubenswrapper[4945]: I0109 01:06:00.309545 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59b9476d7c-64qv9"] Jan 09 01:06:00 crc kubenswrapper[4945]: W0109 01:06:00.322764 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26efb867_69f2_460f_93c1_902af82b7e4a.slice/crio-989a0c6f26226208b43471faf9629305e135dfd828a30a0fc46c98eb450c7e1f WatchSource:0}: Error finding container 989a0c6f26226208b43471faf9629305e135dfd828a30a0fc46c98eb450c7e1f: Status 404 returned error can't find the container with id 989a0c6f26226208b43471faf9629305e135dfd828a30a0fc46c98eb450c7e1f Jan 09 01:06:00 crc kubenswrapper[4945]: I0109 01:06:00.380505 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 09 01:06:00 crc kubenswrapper[4945]: I0109 01:06:00.609208 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 09 01:06:00 crc kubenswrapper[4945]: W0109 01:06:00.615512 4945 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb943e15_5363_4299_85fb_cd9b0805fb86.slice/crio-0708e03156037ad26a8079e4868af035ced8b00df0354c3f1c8daebc0a720a01 WatchSource:0}: Error finding container 0708e03156037ad26a8079e4868af035ced8b00df0354c3f1c8daebc0a720a01: Status 404 returned error can't find the container with id 0708e03156037ad26a8079e4868af035ced8b00df0354c3f1c8daebc0a720a01 Jan 09 01:06:00 crc kubenswrapper[4945]: I0109 01:06:00.821286 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"db943e15-5363-4299-85fb-cd9b0805fb86","Type":"ContainerStarted","Data":"0708e03156037ad26a8079e4868af035ced8b00df0354c3f1c8daebc0a720a01"} Jan 09 01:06:00 crc kubenswrapper[4945]: I0109 01:06:00.823648 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"fcb36454-abb9-473c-a184-f7b89cc73f6b","Type":"ContainerStarted","Data":"a39c067d23fde98472ef49f387822d57b8cea20ff743dfc5c162ace0b746d1df"} Jan 09 01:06:00 crc kubenswrapper[4945]: I0109 01:06:00.827298 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"3f17ceb7-ed77-4912-aeaf-025c32f52c78","Type":"ContainerStarted","Data":"0b021ce45460a870c8794c389a5724bd9ea94c29a3f53c8ccc3e3ff3e0da9c81"} Jan 09 01:06:00 crc kubenswrapper[4945]: I0109 01:06:00.837302 4945 generic.go:334] "Generic (PLEG): container finished" podID="26efb867-69f2-460f-93c1-902af82b7e4a" containerID="61e7c6dcc4e801b444d5e932c1a05442d1612a38321a3fe99fa0f3f3b5782f72" exitCode=0 Jan 09 01:06:00 crc kubenswrapper[4945]: I0109 01:06:00.837382 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" event={"ID":"26efb867-69f2-460f-93c1-902af82b7e4a","Type":"ContainerDied","Data":"61e7c6dcc4e801b444d5e932c1a05442d1612a38321a3fe99fa0f3f3b5782f72"} Jan 09 01:06:00 crc kubenswrapper[4945]: I0109 01:06:00.837411 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" event={"ID":"26efb867-69f2-460f-93c1-902af82b7e4a","Type":"ContainerStarted","Data":"989a0c6f26226208b43471faf9629305e135dfd828a30a0fc46c98eb450c7e1f"} Jan 09 01:06:01 crc kubenswrapper[4945]: I0109 01:06:01.851713 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" event={"ID":"26efb867-69f2-460f-93c1-902af82b7e4a","Type":"ContainerStarted","Data":"c137bf96061a6859295c3374f05cf461ce12de38f849c396489bbad30665596b"} Jan 09 01:06:01 crc kubenswrapper[4945]: I0109 01:06:01.852932 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:06:01 crc kubenswrapper[4945]: I0109 01:06:01.856219 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"db943e15-5363-4299-85fb-cd9b0805fb86","Type":"ContainerStarted","Data":"cb44363a49e1723860347fd637db684ae4fe71d5fcb118c09d7758942d1d3c14"} Jan 09 01:06:01 crc kubenswrapper[4945]: I0109 01:06:01.882178 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" podStartSLOduration=2.882154876 podStartE2EDuration="2.882154876s" podCreationTimestamp="2026-01-09 01:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 01:06:01.86850739 +0000 UTC m=+6632.179666326" 
watchObservedRunningTime="2026-01-09 01:06:01.882154876 +0000 UTC m=+6632.193313822" Jan 09 01:06:02 crc kubenswrapper[4945]: I0109 01:06:02.882857 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"db943e15-5363-4299-85fb-cd9b0805fb86","Type":"ContainerStarted","Data":"5536bed579c7e4d1d4ac491330f793855011661de40e0b837e629b68b262a674"} Jan 09 01:06:02 crc kubenswrapper[4945]: I0109 01:06:02.883553 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 09 01:06:02 crc kubenswrapper[4945]: I0109 01:06:02.891104 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"3f17ceb7-ed77-4912-aeaf-025c32f52c78","Type":"ContainerStarted","Data":"71b8607ca447ae579915dd21d44ef4a46bfb8cc996486327c89b7063913c7af0"} Jan 09 01:06:02 crc kubenswrapper[4945]: I0109 01:06:02.891155 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"3f17ceb7-ed77-4912-aeaf-025c32f52c78","Type":"ContainerStarted","Data":"4dc5f7338bce17b00a40ce6a14d8fe559aece673c68a8bd9e28a8030739dbbd7"} Jan 09 01:06:02 crc kubenswrapper[4945]: I0109 01:06:02.908822 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.9088060799999997 podStartE2EDuration="3.90880608s" podCreationTimestamp="2026-01-09 01:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 01:06:02.90680445 +0000 UTC m=+6633.217963396" watchObservedRunningTime="2026-01-09 01:06:02.90880608 +0000 UTC m=+6633.219965026" Jan 09 01:06:02 crc kubenswrapper[4945]: I0109 01:06:02.943544 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.716624393 podStartE2EDuration="3.943522114s" podCreationTimestamp="2026-01-09 01:05:59 +0000 UTC" firstStartedPulling="2026-01-09 01:06:00.13529612 +0000 UTC m=+6630.446455066" lastFinishedPulling="2026-01-09 01:06:01.362193841 +0000 UTC m=+6631.673352787" observedRunningTime="2026-01-09 01:06:02.932743829 +0000 UTC m=+6633.243902775" watchObservedRunningTime="2026-01-09 01:06:02.943522114 +0000 UTC m=+6633.254681060" Jan 09 01:06:03 crc kubenswrapper[4945]: I0109 01:06:03.001335 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:06:03 crc kubenswrapper[4945]: E0109 01:06:03.001558 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:06:04 crc kubenswrapper[4945]: I0109 01:06:04.860708 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 09 01:06:09 crc kubenswrapper[4945]: I0109 01:06:09.531302 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 09 01:06:09 crc kubenswrapper[4945]: I0109 01:06:09.616193 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:06:09 crc 
kubenswrapper[4945]: I0109 01:06:09.685815 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65dbcdd779-zl7sv"]
Jan 09 01:06:09 crc kubenswrapper[4945]: I0109 01:06:09.686134 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" podUID="ab6b5adb-52f7-4a02-bf71-07b1f8ae0902" containerName="dnsmasq-dns" containerID="cri-o://bf18348951b20e24bf96c447bbabd7f41d7ebb9467abfef5283d14b4f128edd4" gracePeriod=10
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.090835 4945 generic.go:334] "Generic (PLEG): container finished" podID="ab6b5adb-52f7-4a02-bf71-07b1f8ae0902" containerID="bf18348951b20e24bf96c447bbabd7f41d7ebb9467abfef5283d14b4f128edd4" exitCode=0
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.091145 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" event={"ID":"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902","Type":"ContainerDied","Data":"bf18348951b20e24bf96c447bbabd7f41d7ebb9467abfef5283d14b4f128edd4"}
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.109425 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"fcb36454-abb9-473c-a184-f7b89cc73f6b","Type":"ContainerStarted","Data":"9c71af63f46582ee74f892d6296246e61ed1ed8b3068a2216560ea5616610659"}
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.109470 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"fcb36454-abb9-473c-a184-f7b89cc73f6b","Type":"ContainerStarted","Data":"6613bb3928e19a679e0c1ced270e9e5e84305040c1402c8d48e23d7b7e55a7e5"}
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.136738 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.75060442 podStartE2EDuration="11.136714591s" podCreationTimestamp="2026-01-09 01:05:59 +0000 UTC" firstStartedPulling="2026-01-09 01:06:00.38316799 +0000 UTC m=+6630.694326936" lastFinishedPulling="2026-01-09 01:06:08.769278151 +0000 UTC m=+6639.080437107" observedRunningTime="2026-01-09 01:06:10.136374853 +0000 UTC m=+6640.447533799" watchObservedRunningTime="2026-01-09 01:06:10.136714591 +0000 UTC m=+6640.447873537"
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.471132 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv"
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.592721 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-dns-svc\") pod \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") "
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.592854 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-ovsdbserver-nb\") pod \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") "
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.592888 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-config\") pod \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") "
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.593100 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw8tn\" (UniqueName: \"kubernetes.io/projected/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-kube-api-access-rw8tn\") pod \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") "
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.593179 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-ovsdbserver-sb\") pod \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\" (UID: \"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902\") "
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.602767 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-kube-api-access-rw8tn" (OuterVolumeSpecName: "kube-api-access-rw8tn") pod "ab6b5adb-52f7-4a02-bf71-07b1f8ae0902" (UID: "ab6b5adb-52f7-4a02-bf71-07b1f8ae0902"). InnerVolumeSpecName "kube-api-access-rw8tn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.654317 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ab6b5adb-52f7-4a02-bf71-07b1f8ae0902" (UID: "ab6b5adb-52f7-4a02-bf71-07b1f8ae0902"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.657060 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab6b5adb-52f7-4a02-bf71-07b1f8ae0902" (UID: "ab6b5adb-52f7-4a02-bf71-07b1f8ae0902"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.665119 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ab6b5adb-52f7-4a02-bf71-07b1f8ae0902" (UID: "ab6b5adb-52f7-4a02-bf71-07b1f8ae0902"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.670529 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-config" (OuterVolumeSpecName: "config") pod "ab6b5adb-52f7-4a02-bf71-07b1f8ae0902" (UID: "ab6b5adb-52f7-4a02-bf71-07b1f8ae0902"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.695519 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.695555 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-config\") on node \"crc\" DevicePath \"\""
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.695567 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw8tn\" (UniqueName: \"kubernetes.io/projected/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-kube-api-access-rw8tn\") on node \"crc\" DevicePath \"\""
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.695577 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 09 01:06:10 crc kubenswrapper[4945]: I0109 01:06:10.696287 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 09 01:06:11 crc kubenswrapper[4945]: I0109 01:06:11.120379 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv"
Need to start a new one" pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" Jan 09 01:06:11 crc kubenswrapper[4945]: I0109 01:06:11.123144 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" event={"ID":"ab6b5adb-52f7-4a02-bf71-07b1f8ae0902","Type":"ContainerDied","Data":"2e6bad86ce051ae0953b402d87ee8a93c4f01cc45d4fd119148b440e52c06682"} Jan 09 01:06:11 crc kubenswrapper[4945]: I0109 01:06:11.123195 4945 scope.go:117] "RemoveContainer" containerID="bf18348951b20e24bf96c447bbabd7f41d7ebb9467abfef5283d14b4f128edd4" Jan 09 01:06:11 crc kubenswrapper[4945]: I0109 01:06:11.149602 4945 scope.go:117] "RemoveContainer" containerID="c8960da6e46e713a1bd01ef5c271429f3e230cfff53463715c38cd883b320261" Jan 09 01:06:11 crc kubenswrapper[4945]: I0109 01:06:11.161143 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65dbcdd779-zl7sv"] Jan 09 01:06:11 crc kubenswrapper[4945]: I0109 01:06:11.182565 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65dbcdd779-zl7sv"] Jan 09 01:06:12 crc kubenswrapper[4945]: I0109 01:06:12.020922 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab6b5adb-52f7-4a02-bf71-07b1f8ae0902" path="/var/lib/kubelet/pods/ab6b5adb-52f7-4a02-bf71-07b1f8ae0902/volumes" Jan 09 01:06:12 crc kubenswrapper[4945]: I0109 01:06:12.209052 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 01:06:12 crc kubenswrapper[4945]: I0109 01:06:12.216490 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerName="ceilometer-central-agent" containerID="cri-o://7ca13dcedb8bdbd70ccf2d3829aa9aa21ad63569f27c95a7c29e816347e4e30f" gracePeriod=30 Jan 09 01:06:12 crc kubenswrapper[4945]: I0109 01:06:12.216551 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerName="proxy-httpd" containerID="cri-o://9e4cbafef575aaad1645abaa50887f09405fede1e20bba404a45abe42e22a8eb" gracePeriod=30 Jan 09 01:06:12 crc kubenswrapper[4945]: I0109 01:06:12.216617 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerName="ceilometer-notification-agent" containerID="cri-o://cbce05960b5e82d59465ade8ded004258da688f04be0cd2b41ad87e2fa97e7b3" gracePeriod=30 Jan 09 01:06:12 crc kubenswrapper[4945]: I0109 01:06:12.216841 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerName="sg-core" containerID="cri-o://6607fb0a8993d67d39ba2f17cfd925fa76b668a63bbb578f34f473955b6e4dff" gracePeriod=30 Jan 09 01:06:12 crc kubenswrapper[4945]: E0109 01:06:12.769847 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod363b5f94_e0d4_4426_9c36_c3f7c4b7f7df.slice/crio-7ca13dcedb8bdbd70ccf2d3829aa9aa21ad63569f27c95a7c29e816347e4e30f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod363b5f94_e0d4_4426_9c36_c3f7c4b7f7df.slice/crio-conmon-7ca13dcedb8bdbd70ccf2d3829aa9aa21ad63569f27c95a7c29e816347e4e30f.scope\": RecentStats: unable to find data in memory cache]" Jan 09 01:06:13 crc 
kubenswrapper[4945]: I0109 01:06:13.148267 4945 generic.go:334] "Generic (PLEG): container finished" podID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerID="9e4cbafef575aaad1645abaa50887f09405fede1e20bba404a45abe42e22a8eb" exitCode=0 Jan 09 01:06:13 crc kubenswrapper[4945]: I0109 01:06:13.148557 4945 generic.go:334] "Generic (PLEG): container finished" podID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerID="6607fb0a8993d67d39ba2f17cfd925fa76b668a63bbb578f34f473955b6e4dff" exitCode=2 Jan 09 01:06:13 crc kubenswrapper[4945]: I0109 01:06:13.148568 4945 generic.go:334] "Generic (PLEG): container finished" podID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerID="7ca13dcedb8bdbd70ccf2d3829aa9aa21ad63569f27c95a7c29e816347e4e30f" exitCode=0 Jan 09 01:06:13 crc kubenswrapper[4945]: I0109 01:06:13.148356 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df","Type":"ContainerDied","Data":"9e4cbafef575aaad1645abaa50887f09405fede1e20bba404a45abe42e22a8eb"} Jan 09 01:06:13 crc kubenswrapper[4945]: I0109 01:06:13.148607 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df","Type":"ContainerDied","Data":"6607fb0a8993d67d39ba2f17cfd925fa76b668a63bbb578f34f473955b6e4dff"} Jan 09 01:06:13 crc kubenswrapper[4945]: I0109 01:06:13.148620 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df","Type":"ContainerDied","Data":"7ca13dcedb8bdbd70ccf2d3829aa9aa21ad63569f27c95a7c29e816347e4e30f"} Jan 09 01:06:14 crc kubenswrapper[4945]: I0109 01:06:14.899639 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 09 01:06:14 crc kubenswrapper[4945]: I0109 01:06:14.985218 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-run-httpd\") pod \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " Jan 09 01:06:14 crc kubenswrapper[4945]: I0109 01:06:14.985407 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-sg-core-conf-yaml\") pod \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " Jan 09 01:06:14 crc kubenswrapper[4945]: I0109 01:06:14.985565 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-combined-ca-bundle\") pod \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " Jan 09 01:06:14 crc kubenswrapper[4945]: I0109 01:06:14.985649 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-config-data\") pod \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " Jan 09 01:06:14 crc kubenswrapper[4945]: I0109 01:06:14.985695 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-scripts\") pod \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " 
Jan 09 01:06:14 crc kubenswrapper[4945]: I0109 01:06:14.985765 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-log-httpd\") pod \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " Jan 09 01:06:14 crc kubenswrapper[4945]: I0109 01:06:14.985853 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldsbz\" (UniqueName: \"kubernetes.io/projected/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-kube-api-access-ldsbz\") pod \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\" (UID: \"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df\") " Jan 09 01:06:14 crc kubenswrapper[4945]: I0109 01:06:14.987393 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" (UID: "363b5f94-e0d4-4426-9c36-c3f7c4b7f7df"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:06:14 crc kubenswrapper[4945]: I0109 01:06:14.988869 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" (UID: "363b5f94-e0d4-4426-9c36-c3f7c4b7f7df"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:06:14 crc kubenswrapper[4945]: I0109 01:06:14.996289 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-kube-api-access-ldsbz" (OuterVolumeSpecName: "kube-api-access-ldsbz") pod "363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" (UID: "363b5f94-e0d4-4426-9c36-c3f7c4b7f7df"). InnerVolumeSpecName "kube-api-access-ldsbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.011239 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-scripts" (OuterVolumeSpecName: "scripts") pod "363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" (UID: "363b5f94-e0d4-4426-9c36-c3f7c4b7f7df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.041401 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" (UID: "363b5f94-e0d4-4426-9c36-c3f7c4b7f7df"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.089416 4945 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-scripts\") on node \"crc\" DevicePath \"\"" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.089513 4945 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.089529 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldsbz\" (UniqueName: \"kubernetes.io/projected/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-kube-api-access-ldsbz\") on node \"crc\" DevicePath \"\"" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.089575 4945 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.089589 4945 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.119380 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" (UID: "363b5f94-e0d4-4426-9c36-c3f7c4b7f7df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.125284 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-config-data" (OuterVolumeSpecName: "config-data") pod "363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" (UID: "363b5f94-e0d4-4426-9c36-c3f7c4b7f7df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.166909 4945 generic.go:334] "Generic (PLEG): container finished" podID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerID="cbce05960b5e82d59465ade8ded004258da688f04be0cd2b41ad87e2fa97e7b3" exitCode=0 Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.166957 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df","Type":"ContainerDied","Data":"cbce05960b5e82d59465ade8ded004258da688f04be0cd2b41ad87e2fa97e7b3"} Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.166967 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.166985 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"363b5f94-e0d4-4426-9c36-c3f7c4b7f7df","Type":"ContainerDied","Data":"eaa7396caf744a5c8291030cc6ecab3aefbc0fdde4d68dc2cc01529528b3aa0b"} Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.167016 4945 scope.go:117] "RemoveContainer" containerID="9e4cbafef575aaad1645abaa50887f09405fede1e20bba404a45abe42e22a8eb" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.187938 4945 scope.go:117] "RemoveContainer" containerID="6607fb0a8993d67d39ba2f17cfd925fa76b668a63bbb578f34f473955b6e4dff" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.194728 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.194836 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.209437 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.224265 4945 scope.go:117] "RemoveContainer" containerID="cbce05960b5e82d59465ade8ded004258da688f04be0cd2b41ad87e2fa97e7b3" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.226100 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.239454 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 09 01:06:15 crc kubenswrapper[4945]: E0109 01:06:15.240113 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6b5adb-52f7-4a02-bf71-07b1f8ae0902" containerName="dnsmasq-dns" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.240140 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6b5adb-52f7-4a02-bf71-07b1f8ae0902" containerName="dnsmasq-dns" Jan 09 01:06:15 crc kubenswrapper[4945]: E0109 01:06:15.240155 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerName="ceilometer-notification-agent" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.240164 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerName="ceilometer-notification-agent" Jan 09 01:06:15 crc kubenswrapper[4945]: E0109 01:06:15.240186 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6b5adb-52f7-4a02-bf71-07b1f8ae0902" containerName="init" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.240194 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6b5adb-52f7-4a02-bf71-07b1f8ae0902" containerName="init" Jan 09 01:06:15 crc kubenswrapper[4945]: E0109 01:06:15.240211 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerName="proxy-httpd" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.240221 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerName="proxy-httpd" Jan 09 01:06:15 crc kubenswrapper[4945]: E0109 01:06:15.240245 4945 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerName="ceilometer-central-agent" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.240253 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerName="ceilometer-central-agent" Jan 09 01:06:15 crc kubenswrapper[4945]: E0109 01:06:15.240271 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerName="sg-core" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.240278 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerName="sg-core" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.240548 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerName="ceilometer-notification-agent" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.240578 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerName="sg-core" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.240596 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerName="ceilometer-central-agent" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.240605 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" containerName="proxy-httpd" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.240636 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab6b5adb-52f7-4a02-bf71-07b1f8ae0902" containerName="dnsmasq-dns" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.242829 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.246176 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.246193 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.259964 4945 scope.go:117] "RemoveContainer" containerID="7ca13dcedb8bdbd70ccf2d3829aa9aa21ad63569f27c95a7c29e816347e4e30f" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.294406 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.297552 4945 scope.go:117] "RemoveContainer" containerID="9e4cbafef575aaad1645abaa50887f09405fede1e20bba404a45abe42e22a8eb" Jan 09 01:06:15 crc kubenswrapper[4945]: E0109 01:06:15.298204 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e4cbafef575aaad1645abaa50887f09405fede1e20bba404a45abe42e22a8eb\": container with ID starting with 9e4cbafef575aaad1645abaa50887f09405fede1e20bba404a45abe42e22a8eb not found: ID does not exist" containerID="9e4cbafef575aaad1645abaa50887f09405fede1e20bba404a45abe42e22a8eb" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.298233 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e4cbafef575aaad1645abaa50887f09405fede1e20bba404a45abe42e22a8eb"} err="failed to get container status \"9e4cbafef575aaad1645abaa50887f09405fede1e20bba404a45abe42e22a8eb\": rpc error: code = NotFound desc = could not find container \"9e4cbafef575aaad1645abaa50887f09405fede1e20bba404a45abe42e22a8eb\": container with ID starting with 9e4cbafef575aaad1645abaa50887f09405fede1e20bba404a45abe42e22a8eb not found: ID does not exist" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.298254 4945 scope.go:117] "RemoveContainer" containerID="6607fb0a8993d67d39ba2f17cfd925fa76b668a63bbb578f34f473955b6e4dff" Jan 09 01:06:15 crc kubenswrapper[4945]: E0109 01:06:15.299823 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6607fb0a8993d67d39ba2f17cfd925fa76b668a63bbb578f34f473955b6e4dff\": container with ID starting with 6607fb0a8993d67d39ba2f17cfd925fa76b668a63bbb578f34f473955b6e4dff not found: ID does not exist" containerID="6607fb0a8993d67d39ba2f17cfd925fa76b668a63bbb578f34f473955b6e4dff" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.299911 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6607fb0a8993d67d39ba2f17cfd925fa76b668a63bbb578f34f473955b6e4dff"} err="failed to get container status \"6607fb0a8993d67d39ba2f17cfd925fa76b668a63bbb578f34f473955b6e4dff\": rpc error: code = NotFound desc = could not find container \"6607fb0a8993d67d39ba2f17cfd925fa76b668a63bbb578f34f473955b6e4dff\": container with ID starting with 6607fb0a8993d67d39ba2f17cfd925fa76b668a63bbb578f34f473955b6e4dff not found: ID does not exist" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.299942 4945 scope.go:117] "RemoveContainer" containerID="cbce05960b5e82d59465ade8ded004258da688f04be0cd2b41ad87e2fa97e7b3" Jan 09 01:06:15 crc kubenswrapper[4945]: E0109 01:06:15.300537 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"cbce05960b5e82d59465ade8ded004258da688f04be0cd2b41ad87e2fa97e7b3\": container with ID starting with cbce05960b5e82d59465ade8ded004258da688f04be0cd2b41ad87e2fa97e7b3 not found: ID does not exist" containerID="cbce05960b5e82d59465ade8ded004258da688f04be0cd2b41ad87e2fa97e7b3" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.300559 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbce05960b5e82d59465ade8ded004258da688f04be0cd2b41ad87e2fa97e7b3"} err="failed to get container status \"cbce05960b5e82d59465ade8ded004258da688f04be0cd2b41ad87e2fa97e7b3\": rpc error: code = NotFound desc = could not find container \"cbce05960b5e82d59465ade8ded004258da688f04be0cd2b41ad87e2fa97e7b3\": container with ID starting with cbce05960b5e82d59465ade8ded004258da688f04be0cd2b41ad87e2fa97e7b3 not found: ID does not exist" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.300572 4945 scope.go:117] "RemoveContainer" containerID="7ca13dcedb8bdbd70ccf2d3829aa9aa21ad63569f27c95a7c29e816347e4e30f" Jan 09 01:06:15 crc kubenswrapper[4945]: E0109 01:06:15.301341 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ca13dcedb8bdbd70ccf2d3829aa9aa21ad63569f27c95a7c29e816347e4e30f\": container with ID starting with 7ca13dcedb8bdbd70ccf2d3829aa9aa21ad63569f27c95a7c29e816347e4e30f not found: ID does not exist" containerID="7ca13dcedb8bdbd70ccf2d3829aa9aa21ad63569f27c95a7c29e816347e4e30f" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.301370 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ca13dcedb8bdbd70ccf2d3829aa9aa21ad63569f27c95a7c29e816347e4e30f"} err="failed to get container status \"7ca13dcedb8bdbd70ccf2d3829aa9aa21ad63569f27c95a7c29e816347e4e30f\": rpc error: code = NotFound desc = could not find container \"7ca13dcedb8bdbd70ccf2d3829aa9aa21ad63569f27c95a7c29e816347e4e30f\": container with ID starting with 7ca13dcedb8bdbd70ccf2d3829aa9aa21ad63569f27c95a7c29e816347e4e30f not found: ID does not exist" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.375084 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-65dbcdd779-zl7sv" podUID="ab6b5adb-52f7-4a02-bf71-07b1f8ae0902" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.84:5353: i/o timeout" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.401595 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/770e54ce-50a7-4cd5-8be6-f905ed744d17-run-httpd\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.401645 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/770e54ce-50a7-4cd5-8be6-f905ed744d17-scripts\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.401662 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/770e54ce-50a7-4cd5-8be6-f905ed744d17-log-httpd\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 
01:06:15.401719 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770e54ce-50a7-4cd5-8be6-f905ed744d17-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.401834 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7qlt\" (UniqueName: \"kubernetes.io/projected/770e54ce-50a7-4cd5-8be6-f905ed744d17-kube-api-access-b7qlt\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.402008 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/770e54ce-50a7-4cd5-8be6-f905ed744d17-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.402066 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/770e54ce-50a7-4cd5-8be6-f905ed744d17-config-data\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.509609 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/770e54ce-50a7-4cd5-8be6-f905ed744d17-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.509731 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/770e54ce-50a7-4cd5-8be6-f905ed744d17-config-data\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.509792 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/770e54ce-50a7-4cd5-8be6-f905ed744d17-run-httpd\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.509824 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/770e54ce-50a7-4cd5-8be6-f905ed744d17-scripts\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.509844 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/770e54ce-50a7-4cd5-8be6-f905ed744d17-log-httpd\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.509918 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770e54ce-50a7-4cd5-8be6-f905ed744d17-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " 
pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.509956 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7qlt\" (UniqueName: \"kubernetes.io/projected/770e54ce-50a7-4cd5-8be6-f905ed744d17-kube-api-access-b7qlt\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.512905 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/770e54ce-50a7-4cd5-8be6-f905ed744d17-log-httpd\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.512935 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/770e54ce-50a7-4cd5-8be6-f905ed744d17-run-httpd\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.514754 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/770e54ce-50a7-4cd5-8be6-f905ed744d17-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.514959 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/770e54ce-50a7-4cd5-8be6-f905ed744d17-scripts\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.516277 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770e54ce-50a7-4cd5-8be6-f905ed744d17-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.516612 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/770e54ce-50a7-4cd5-8be6-f905ed744d17-config-data\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.527695 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7qlt\" (UniqueName: \"kubernetes.io/projected/770e54ce-50a7-4cd5-8be6-f905ed744d17-kube-api-access-b7qlt\") pod \"ceilometer-0\" (UID: \"770e54ce-50a7-4cd5-8be6-f905ed744d17\") " pod="openstack/ceilometer-0" Jan 09 01:06:15 crc kubenswrapper[4945]: I0109 01:06:15.571731 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 09 01:06:16 crc kubenswrapper[4945]: I0109 01:06:16.000327 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:06:16 crc kubenswrapper[4945]: E0109 01:06:16.000774 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:06:16 crc kubenswrapper[4945]: I0109 01:06:16.014242 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="363b5f94-e0d4-4426-9c36-c3f7c4b7f7df" path="/var/lib/kubelet/pods/363b5f94-e0d4-4426-9c36-c3f7c4b7f7df/volumes" Jan 09 01:06:16 crc kubenswrapper[4945]: W0109 01:06:16.083711 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod770e54ce_50a7_4cd5_8be6_f905ed744d17.slice/crio-aed279085ab3e1e962319b7705f818fb012e14f730306abcadd4cb4408214260 WatchSource:0}: Error finding container aed279085ab3e1e962319b7705f818fb012e14f730306abcadd4cb4408214260: Status 404 returned error can't find the container with id aed279085ab3e1e962319b7705f818fb012e14f730306abcadd4cb4408214260 Jan 09 01:06:16 crc kubenswrapper[4945]: I0109 01:06:16.087153 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 09 01:06:16 crc kubenswrapper[4945]: I0109 01:06:16.180350 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"770e54ce-50a7-4cd5-8be6-f905ed744d17","Type":"ContainerStarted","Data":"aed279085ab3e1e962319b7705f818fb012e14f730306abcadd4cb4408214260"} Jan 09 01:06:18 crc kubenswrapper[4945]: I0109 01:06:18.272318 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"770e54ce-50a7-4cd5-8be6-f905ed744d17","Type":"ContainerStarted","Data":"55cedefb57e17db88d7362247635f348bcac01b635acb445347e70a58d268359"} Jan 09 01:06:19 crc kubenswrapper[4945]: I0109 01:06:19.285721 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"770e54ce-50a7-4cd5-8be6-f905ed744d17","Type":"ContainerStarted","Data":"c0410160f127cc2861d987cf891f48263f342dd2576e512ea3214fcfdbad6654"} Jan 09 01:06:19 crc kubenswrapper[4945]: I0109 01:06:19.286305 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"770e54ce-50a7-4cd5-8be6-f905ed744d17","Type":"ContainerStarted","Data":"ef49eab1659f1e9993122e6aa7b6f948dbe051db4d58b1bed724682c5354af34"} Jan 09 01:06:19 crc kubenswrapper[4945]: I0109 01:06:19.589596 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 09 01:06:21 crc kubenswrapper[4945]: I0109 01:06:21.206571 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 09 01:06:21 crc kubenswrapper[4945]: I0109 01:06:21.242831 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 09 01:06:21 crc kubenswrapper[4945]: I0109 01:06:21.312837 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 
09 01:06:21 crc kubenswrapper[4945]: I0109 01:06:21.322740 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"770e54ce-50a7-4cd5-8be6-f905ed744d17","Type":"ContainerStarted","Data":"b9f52ec91e55f579c37623aadcbb6d9c33389e96bb91ea0a78f86d363f311d32"} Jan 09 01:06:21 crc kubenswrapper[4945]: I0109 01:06:21.323555 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 09 01:06:29 crc kubenswrapper[4945]: I0109 01:06:28.999914 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:06:29 crc kubenswrapper[4945]: E0109 01:06:29.000638 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:06:36 crc kubenswrapper[4945]: I0109 01:06:36.042605 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=16.494184777 podStartE2EDuration="21.042583522s" podCreationTimestamp="2026-01-09 01:06:15 +0000 UTC" firstStartedPulling="2026-01-09 01:06:16.08849678 +0000 UTC m=+6646.399655716" lastFinishedPulling="2026-01-09 01:06:20.636895495 +0000 UTC m=+6650.948054461" observedRunningTime="2026-01-09 01:06:21.372614729 +0000 UTC m=+6651.683773675" watchObservedRunningTime="2026-01-09 01:06:36.042583522 +0000 UTC m=+6666.353742458" Jan 09 01:06:36 crc kubenswrapper[4945]: I0109 01:06:36.068968 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-swcpn"] Jan 09 01:06:36 crc kubenswrapper[4945]: I0109 01:06:36.087943 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-73d7-account-create-update-6r96b"] Jan 09 01:06:36 crc kubenswrapper[4945]: I0109 01:06:36.098951 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-swcpn"] Jan 09 01:06:36 crc kubenswrapper[4945]: I0109 01:06:36.116033 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-73d7-account-create-update-6r96b"] Jan 09 01:06:38 crc kubenswrapper[4945]: I0109 01:06:38.013436 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16705031-b599-4f15-91f2-5258e613426e" path="/var/lib/kubelet/pods/16705031-b599-4f15-91f2-5258e613426e/volumes" Jan 09 01:06:38 crc kubenswrapper[4945]: I0109 01:06:38.014915 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bffef94-e4a5-4fec-9fed-5199d1eb52e3" path="/var/lib/kubelet/pods/1bffef94-e4a5-4fec-9fed-5199d1eb52e3/volumes" Jan 09 01:06:41 crc kubenswrapper[4945]: I0109 01:06:41.001326 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:06:41 crc kubenswrapper[4945]: E0109 01:06:41.002187 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" 
podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:06:41 crc kubenswrapper[4945]: I0109 01:06:41.429798 4945 scope.go:117] "RemoveContainer" containerID="92087555576f9f158e9c44ce0b79c033c186af919d669c86b70cb3027f1369c5" Jan 09 01:06:41 crc kubenswrapper[4945]: I0109 01:06:41.458263 4945 scope.go:117] "RemoveContainer" containerID="c168a479d6a02854dfde8d1ce4fdb7fe1e298b6b14e6c602536f2626a9483f39" Jan 09 01:06:41 crc kubenswrapper[4945]: I0109 01:06:41.503971 4945 scope.go:117] "RemoveContainer" containerID="91eafc14becc9e8efd3e606ce5a4a22cefdddf74ae934e452d5be8c5ba811203" Jan 09 01:06:41 crc kubenswrapper[4945]: I0109 01:06:41.550266 4945 scope.go:117] "RemoveContainer" containerID="184dd71414cf8eb1b5b7e931902b7a58b81159888ff1fcdd658fe98f555dd2f0" Jan 09 01:06:44 crc kubenswrapper[4945]: I0109 01:06:44.039307 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-lbcpp"] Jan 09 01:06:44 crc kubenswrapper[4945]: I0109 01:06:44.053331 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-lbcpp"] Jan 09 01:06:45 crc kubenswrapper[4945]: I0109 01:06:45.577567 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 09 01:06:46 crc kubenswrapper[4945]: I0109 01:06:46.015955 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c6d1165-979d-43be-8b0b-76917ab91e5e" path="/var/lib/kubelet/pods/6c6d1165-979d-43be-8b0b-76917ab91e5e/volumes" Jan 09 01:06:55 crc kubenswrapper[4945]: I0109 01:06:55.000764 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:06:55 crc kubenswrapper[4945]: E0109 01:06:55.001639 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:07:07 crc kubenswrapper[4945]: I0109 01:07:07.000446 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:07:07 crc kubenswrapper[4945]: E0109 01:07:07.002400 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.314923 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d899dbfc5-7jsvg"] Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.322512 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.325875 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.342798 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d899dbfc5-7jsvg"] Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.424146 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-ovsdbserver-sb\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.424218 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8bwp\" (UniqueName: \"kubernetes.io/projected/550a101f-23d7-47ec-a00e-c4f42dfcb152-kube-api-access-b8bwp\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.424380 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-ovsdbserver-nb\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.424466 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-openstack-cell1\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.424562 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-config\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.424632 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-dns-svc\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.526919 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-openstack-cell1\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.527063 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-config\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " 
pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.527128 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-dns-svc\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.527207 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-ovsdbserver-sb\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.527243 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8bwp\" (UniqueName: \"kubernetes.io/projected/550a101f-23d7-47ec-a00e-c4f42dfcb152-kube-api-access-b8bwp\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.527368 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-ovsdbserver-nb\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.528039 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-openstack-cell1\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.528461 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-ovsdbserver-sb\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.528495 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-ovsdbserver-nb\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.528681 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-config\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.528964 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-dns-svc\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.556170 4945 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8bwp\" (UniqueName: \"kubernetes.io/projected/550a101f-23d7-47ec-a00e-c4f42dfcb152-kube-api-access-b8bwp\") pod \"dnsmasq-dns-6d899dbfc5-7jsvg\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") " pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.575194 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d899dbfc5-7jsvg"] Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.577880 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.619072 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b7bc899f-mv8gz"] Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.621698 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.674799 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b7bc899f-mv8gz"] Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.731417 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3643c97-f962-483e-b870-b95122174cbd-ovsdbserver-sb\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.731583 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3643c97-f962-483e-b870-b95122174cbd-dns-svc\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.731676 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3643c97-f962-483e-b870-b95122174cbd-config\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.731826 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wfgp\" (UniqueName: \"kubernetes.io/projected/a3643c97-f962-483e-b870-b95122174cbd-kube-api-access-9wfgp\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.731893 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/a3643c97-f962-483e-b870-b95122174cbd-openstack-cell1\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.731972 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3643c97-f962-483e-b870-b95122174cbd-ovsdbserver-nb\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " 
pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.833369 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wfgp\" (UniqueName: \"kubernetes.io/projected/a3643c97-f962-483e-b870-b95122174cbd-kube-api-access-9wfgp\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.833726 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/a3643c97-f962-483e-b870-b95122174cbd-openstack-cell1\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.833775 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3643c97-f962-483e-b870-b95122174cbd-ovsdbserver-nb\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.833821 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3643c97-f962-483e-b870-b95122174cbd-ovsdbserver-sb\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.833864 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3643c97-f962-483e-b870-b95122174cbd-dns-svc\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.834438 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3643c97-f962-483e-b870-b95122174cbd-config\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.835023 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3643c97-f962-483e-b870-b95122174cbd-ovsdbserver-nb\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.835110 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/a3643c97-f962-483e-b870-b95122174cbd-openstack-cell1\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.835180 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3643c97-f962-483e-b870-b95122174cbd-ovsdbserver-sb\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.835385 4945 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3643c97-f962-483e-b870-b95122174cbd-config\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.835567 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3643c97-f962-483e-b870-b95122174cbd-dns-svc\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.852185 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wfgp\" (UniqueName: \"kubernetes.io/projected/a3643c97-f962-483e-b870-b95122174cbd-kube-api-access-9wfgp\") pod \"dnsmasq-dns-b7bc899f-mv8gz\" (UID: \"a3643c97-f962-483e-b870-b95122174cbd\") " pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:12 crc kubenswrapper[4945]: I0109 01:07:12.050550 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:12 crc kubenswrapper[4945]: I0109 01:07:12.183268 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d899dbfc5-7jsvg"] Jan 09 01:07:12 crc kubenswrapper[4945]: W0109 01:07:12.192926 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod550a101f_23d7_47ec_a00e_c4f42dfcb152.slice/crio-2b87260b047e0c9e5b0efdf7386e202f98409f3bcc00f3c0769ee2eb248d3b4b WatchSource:0}: Error finding container 2b87260b047e0c9e5b0efdf7386e202f98409f3bcc00f3c0769ee2eb248d3b4b: Status 404 returned error can't find the container with id 2b87260b047e0c9e5b0efdf7386e202f98409f3bcc00f3c0769ee2eb248d3b4b Jan 09 01:07:12 crc kubenswrapper[4945]: I0109 01:07:12.610985 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b7bc899f-mv8gz"] Jan 09 01:07:12 crc kubenswrapper[4945]: W0109 01:07:12.615216 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3643c97_f962_483e_b870_b95122174cbd.slice/crio-17fd71e88c46ece31e7c2198f4327e73d90ab294d2f1ae9b98eab65351487620 WatchSource:0}: Error finding container 17fd71e88c46ece31e7c2198f4327e73d90ab294d2f1ae9b98eab65351487620: Status 404 returned error can't find the container with id 17fd71e88c46ece31e7c2198f4327e73d90ab294d2f1ae9b98eab65351487620 Jan 09 01:07:12 crc kubenswrapper[4945]: I0109 01:07:12.942451 4945 generic.go:334] "Generic (PLEG): container finished" podID="550a101f-23d7-47ec-a00e-c4f42dfcb152" containerID="0abee2ddd5abe7a6474d9f586528933819e9faa2ac025666cd8cec0f4fc45568" exitCode=0 Jan 09 01:07:12 crc kubenswrapper[4945]: I0109 01:07:12.942683 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" event={"ID":"550a101f-23d7-47ec-a00e-c4f42dfcb152","Type":"ContainerDied","Data":"0abee2ddd5abe7a6474d9f586528933819e9faa2ac025666cd8cec0f4fc45568"} Jan 09 01:07:12 crc kubenswrapper[4945]: I0109 01:07:12.942919 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" event={"ID":"550a101f-23d7-47ec-a00e-c4f42dfcb152","Type":"ContainerStarted","Data":"2b87260b047e0c9e5b0efdf7386e202f98409f3bcc00f3c0769ee2eb248d3b4b"} Jan 09 01:07:12 crc 
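The entries above all share one fixed shape: a journald prefix (timestamp, host, kubenswrapper[pid]) wrapping a klog header (severity letter, klog timestamp, thread id, source file:line) and then a structured message. A minimal Go sketch of how one might split such a line back into fields; the regexp is an assumption derived from the entries above, not kubelet code:

package main

import (
	"fmt"
	"regexp"
)

// Assumed shape, taken from the log itself:
// "Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.575194 4945 kubelet.go:2437] <message>"
var klogLine = regexp.MustCompile(
	`^(\w{3} \d{2} \d{2}:\d{2}:\d{2}) (\S+) kubenswrapper\[(\d+)\]: ` +
		`([IWE])(\d{4} \d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := `Jan 09 01:07:11 crc kubenswrapper[4945]: I0109 01:07:11.575194 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d899dbfc5-7jsvg"]`
	if m := klogLine.FindStringSubmatch(line); m != nil {
		// m[4] = severity (I/W/E), m[7] = source file:line, m[8] = structured message
		fmt.Printf("severity=%s source=%s msg=%s\n", m[4], m[7], m[8])
	}
}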
Jan 09 01:07:12 crc kubenswrapper[4945]: I0109 01:07:12.945192 4945 generic.go:334] "Generic (PLEG): container finished" podID="a3643c97-f962-483e-b870-b95122174cbd" containerID="dd984044d0a4f921eb9319fd0f6be2c7783d698f77e398946b147aaf5f8c2a02" exitCode=0
Jan 09 01:07:12 crc kubenswrapper[4945]: I0109 01:07:12.945240 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" event={"ID":"a3643c97-f962-483e-b870-b95122174cbd","Type":"ContainerDied","Data":"dd984044d0a4f921eb9319fd0f6be2c7783d698f77e398946b147aaf5f8c2a02"}
Jan 09 01:07:12 crc kubenswrapper[4945]: I0109 01:07:12.945274 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" event={"ID":"a3643c97-f962-483e-b870-b95122174cbd","Type":"ContainerStarted","Data":"17fd71e88c46ece31e7c2198f4327e73d90ab294d2f1ae9b98eab65351487620"}
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.264157 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg"
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.277909 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-dns-svc\") pod \"550a101f-23d7-47ec-a00e-c4f42dfcb152\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") "
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.278085 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-openstack-cell1\") pod \"550a101f-23d7-47ec-a00e-c4f42dfcb152\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") "
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.278234 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-ovsdbserver-sb\") pod \"550a101f-23d7-47ec-a00e-c4f42dfcb152\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") "
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.278325 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-config\") pod \"550a101f-23d7-47ec-a00e-c4f42dfcb152\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") "
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.278370 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-ovsdbserver-nb\") pod \"550a101f-23d7-47ec-a00e-c4f42dfcb152\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") "
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.278456 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8bwp\" (UniqueName: \"kubernetes.io/projected/550a101f-23d7-47ec-a00e-c4f42dfcb152-kube-api-access-b8bwp\") pod \"550a101f-23d7-47ec-a00e-c4f42dfcb152\" (UID: \"550a101f-23d7-47ec-a00e-c4f42dfcb152\") "
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.292149 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/550a101f-23d7-47ec-a00e-c4f42dfcb152-kube-api-access-b8bwp" (OuterVolumeSpecName: "kube-api-access-b8bwp") pod "550a101f-23d7-47ec-a00e-c4f42dfcb152" (UID: "550a101f-23d7-47ec-a00e-c4f42dfcb152"). InnerVolumeSpecName "kube-api-access-b8bwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.307440 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "550a101f-23d7-47ec-a00e-c4f42dfcb152" (UID: "550a101f-23d7-47ec-a00e-c4f42dfcb152"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.309478 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "550a101f-23d7-47ec-a00e-c4f42dfcb152" (UID: "550a101f-23d7-47ec-a00e-c4f42dfcb152"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.309743 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "550a101f-23d7-47ec-a00e-c4f42dfcb152" (UID: "550a101f-23d7-47ec-a00e-c4f42dfcb152"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.310219 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "550a101f-23d7-47ec-a00e-c4f42dfcb152" (UID: "550a101f-23d7-47ec-a00e-c4f42dfcb152"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.310840 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-config" (OuterVolumeSpecName: "config") pod "550a101f-23d7-47ec-a00e-c4f42dfcb152" (UID: "550a101f-23d7-47ec-a00e-c4f42dfcb152"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.386094 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.386687 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-config\") on node \"crc\" DevicePath \"\""
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.386766 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.386926 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8bwp\" (UniqueName: \"kubernetes.io/projected/550a101f-23d7-47ec-a00e-c4f42dfcb152-kube-api-access-b8bwp\") on node \"crc\" DevicePath \"\""
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.387015 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.387083 4945 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/550a101f-23d7-47ec-a00e-c4f42dfcb152-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.955775 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" event={"ID":"a3643c97-f962-483e-b870-b95122174cbd","Type":"ContainerStarted","Data":"d47f1950425b6213a86ec853e43448db8e0c70759891c55e3c667dfcb38b9e46"}
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.955902 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b7bc899f-mv8gz"
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.958355 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" event={"ID":"550a101f-23d7-47ec-a00e-c4f42dfcb152","Type":"ContainerDied","Data":"2b87260b047e0c9e5b0efdf7386e202f98409f3bcc00f3c0769ee2eb248d3b4b"}
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.958412 4945 scope.go:117] "RemoveContainer" containerID="0abee2ddd5abe7a6474d9f586528933819e9faa2ac025666cd8cec0f4fc45568"
Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.958425 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg"
Need to start a new one" pod="openstack/dnsmasq-dns-6d899dbfc5-7jsvg" Jan 09 01:07:13 crc kubenswrapper[4945]: I0109 01:07:13.988722 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" podStartSLOduration=2.988701974 podStartE2EDuration="2.988701974s" podCreationTimestamp="2026-01-09 01:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 01:07:13.972201378 +0000 UTC m=+6704.283360324" watchObservedRunningTime="2026-01-09 01:07:13.988701974 +0000 UTC m=+6704.299860920" Jan 09 01:07:14 crc kubenswrapper[4945]: I0109 01:07:14.071682 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d899dbfc5-7jsvg"] Jan 09 01:07:14 crc kubenswrapper[4945]: I0109 01:07:14.079270 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d899dbfc5-7jsvg"] Jan 09 01:07:16 crc kubenswrapper[4945]: I0109 01:07:16.014234 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="550a101f-23d7-47ec-a00e-c4f42dfcb152" path="/var/lib/kubelet/pods/550a101f-23d7-47ec-a00e-c4f42dfcb152/volumes" Jan 09 01:07:19 crc kubenswrapper[4945]: I0109 01:07:19.000839 4945 scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:07:20 crc kubenswrapper[4945]: I0109 01:07:20.057406 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"8c87d76440281855df3f1cf34d41214b66f937a34385a2dede3e17d011f602ec"} Jan 09 01:07:22 crc kubenswrapper[4945]: I0109 01:07:22.052824 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b7bc899f-mv8gz" Jan 09 01:07:22 crc kubenswrapper[4945]: I0109 01:07:22.145228 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59b9476d7c-64qv9"] Jan 09 01:07:22 crc kubenswrapper[4945]: I0109 01:07:22.145553 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" podUID="26efb867-69f2-460f-93c1-902af82b7e4a" containerName="dnsmasq-dns" containerID="cri-o://c137bf96061a6859295c3374f05cf461ce12de38f849c396489bbad30665596b" gracePeriod=10 Jan 09 01:07:22 crc kubenswrapper[4945]: I0109 01:07:22.737543 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:07:22 crc kubenswrapper[4945]: I0109 01:07:22.933479 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58p9t\" (UniqueName: \"kubernetes.io/projected/26efb867-69f2-460f-93c1-902af82b7e4a-kube-api-access-58p9t\") pod \"26efb867-69f2-460f-93c1-902af82b7e4a\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " Jan 09 01:07:22 crc kubenswrapper[4945]: I0109 01:07:22.933579 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-ovsdbserver-nb\") pod \"26efb867-69f2-460f-93c1-902af82b7e4a\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " Jan 09 01:07:22 crc kubenswrapper[4945]: I0109 01:07:22.933637 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-dns-svc\") pod \"26efb867-69f2-460f-93c1-902af82b7e4a\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " Jan 09 01:07:22 crc kubenswrapper[4945]: I0109 01:07:22.933739 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-config\") pod \"26efb867-69f2-460f-93c1-902af82b7e4a\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " Jan 09 01:07:22 crc kubenswrapper[4945]: I0109 01:07:22.933843 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-ovsdbserver-sb\") pod \"26efb867-69f2-460f-93c1-902af82b7e4a\" (UID: \"26efb867-69f2-460f-93c1-902af82b7e4a\") " Jan 09 01:07:22 crc kubenswrapper[4945]: I0109 01:07:22.942854 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26efb867-69f2-460f-93c1-902af82b7e4a-kube-api-access-58p9t" (OuterVolumeSpecName: "kube-api-access-58p9t") pod "26efb867-69f2-460f-93c1-902af82b7e4a" (UID: "26efb867-69f2-460f-93c1-902af82b7e4a"). InnerVolumeSpecName "kube-api-access-58p9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:07:22 crc kubenswrapper[4945]: I0109 01:07:22.992123 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-config" (OuterVolumeSpecName: "config") pod "26efb867-69f2-460f-93c1-902af82b7e4a" (UID: "26efb867-69f2-460f-93c1-902af82b7e4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:07:22 crc kubenswrapper[4945]: I0109 01:07:22.995047 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "26efb867-69f2-460f-93c1-902af82b7e4a" (UID: "26efb867-69f2-460f-93c1-902af82b7e4a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:07:22 crc kubenswrapper[4945]: I0109 01:07:22.997965 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "26efb867-69f2-460f-93c1-902af82b7e4a" (UID: "26efb867-69f2-460f-93c1-902af82b7e4a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.003544 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "26efb867-69f2-460f-93c1-902af82b7e4a" (UID: "26efb867-69f2-460f-93c1-902af82b7e4a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.037043 4945 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-config\") on node \"crc\" DevicePath \"\"" Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.037074 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.037085 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58p9t\" (UniqueName: \"kubernetes.io/projected/26efb867-69f2-460f-93c1-902af82b7e4a-kube-api-access-58p9t\") on node \"crc\" DevicePath \"\"" Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.037095 4945 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.037104 4945 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26efb867-69f2-460f-93c1-902af82b7e4a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.087579 4945 generic.go:334] "Generic (PLEG): container finished" podID="26efb867-69f2-460f-93c1-902af82b7e4a" containerID="c137bf96061a6859295c3374f05cf461ce12de38f849c396489bbad30665596b" exitCode=0 Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.087630 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" event={"ID":"26efb867-69f2-460f-93c1-902af82b7e4a","Type":"ContainerDied","Data":"c137bf96061a6859295c3374f05cf461ce12de38f849c396489bbad30665596b"} Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.087670 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" event={"ID":"26efb867-69f2-460f-93c1-902af82b7e4a","Type":"ContainerDied","Data":"989a0c6f26226208b43471faf9629305e135dfd828a30a0fc46c98eb450c7e1f"} Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.087692 4945 scope.go:117] "RemoveContainer" containerID="c137bf96061a6859295c3374f05cf461ce12de38f849c396489bbad30665596b" Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.087911 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59b9476d7c-64qv9" Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.121105 4945 scope.go:117] "RemoveContainer" containerID="61e7c6dcc4e801b444d5e932c1a05442d1612a38321a3fe99fa0f3f3b5782f72" Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.132247 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59b9476d7c-64qv9"] Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.142178 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59b9476d7c-64qv9"] Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.159219 4945 scope.go:117] "RemoveContainer" containerID="c137bf96061a6859295c3374f05cf461ce12de38f849c396489bbad30665596b" Jan 09 01:07:23 crc kubenswrapper[4945]: E0109 01:07:23.161775 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c137bf96061a6859295c3374f05cf461ce12de38f849c396489bbad30665596b\": container with ID starting with c137bf96061a6859295c3374f05cf461ce12de38f849c396489bbad30665596b not found: ID does not exist" containerID="c137bf96061a6859295c3374f05cf461ce12de38f849c396489bbad30665596b" Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.161815 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c137bf96061a6859295c3374f05cf461ce12de38f849c396489bbad30665596b"} err="failed to get container status \"c137bf96061a6859295c3374f05cf461ce12de38f849c396489bbad30665596b\": rpc error: code = NotFound desc = could not find container \"c137bf96061a6859295c3374f05cf461ce12de38f849c396489bbad30665596b\": container with ID starting with c137bf96061a6859295c3374f05cf461ce12de38f849c396489bbad30665596b not found: ID does not exist" Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.161839 4945 scope.go:117] "RemoveContainer" containerID="61e7c6dcc4e801b444d5e932c1a05442d1612a38321a3fe99fa0f3f3b5782f72" Jan 09 01:07:23 crc kubenswrapper[4945]: E0109 01:07:23.162112 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61e7c6dcc4e801b444d5e932c1a05442d1612a38321a3fe99fa0f3f3b5782f72\": container with ID starting with 61e7c6dcc4e801b444d5e932c1a05442d1612a38321a3fe99fa0f3f3b5782f72 not found: ID does not exist" containerID="61e7c6dcc4e801b444d5e932c1a05442d1612a38321a3fe99fa0f3f3b5782f72" Jan 09 01:07:23 crc kubenswrapper[4945]: I0109 01:07:23.162141 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61e7c6dcc4e801b444d5e932c1a05442d1612a38321a3fe99fa0f3f3b5782f72"} err="failed to get container status \"61e7c6dcc4e801b444d5e932c1a05442d1612a38321a3fe99fa0f3f3b5782f72\": rpc error: code = NotFound desc = could not find container \"61e7c6dcc4e801b444d5e932c1a05442d1612a38321a3fe99fa0f3f3b5782f72\": container with ID starting with 61e7c6dcc4e801b444d5e932c1a05442d1612a38321a3fe99fa0f3f3b5782f72 not found: ID does not exist" Jan 09 01:07:24 crc kubenswrapper[4945]: I0109 01:07:24.022754 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26efb867-69f2-460f-93c1-902af82b7e4a" path="/var/lib/kubelet/pods/26efb867-69f2-460f-93c1-902af82b7e4a/volumes" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.539200 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44"] Jan 09 01:07:28 crc kubenswrapper[4945]: E0109 
01:07:28.540309 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26efb867-69f2-460f-93c1-902af82b7e4a" containerName="init" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.540330 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="26efb867-69f2-460f-93c1-902af82b7e4a" containerName="init" Jan 09 01:07:28 crc kubenswrapper[4945]: E0109 01:07:28.540353 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550a101f-23d7-47ec-a00e-c4f42dfcb152" containerName="init" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.540362 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="550a101f-23d7-47ec-a00e-c4f42dfcb152" containerName="init" Jan 09 01:07:28 crc kubenswrapper[4945]: E0109 01:07:28.540385 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26efb867-69f2-460f-93c1-902af82b7e4a" containerName="dnsmasq-dns" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.540393 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="26efb867-69f2-460f-93c1-902af82b7e4a" containerName="dnsmasq-dns" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.540671 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="26efb867-69f2-460f-93c1-902af82b7e4a" containerName="dnsmasq-dns" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.540689 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="550a101f-23d7-47ec-a00e-c4f42dfcb152" containerName="init" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.541668 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.544983 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.545504 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.547079 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.548644 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.559563 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44"] Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.661464 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.661540 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " 
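The RemoveContainer / "ContainerStatus from runtime service failed" pairs above are a benign race: the container was already gone when the deletor asked the runtime for its status. When post-processing such logs, one reasonable filter (a sketch; it matches on the error text seen above rather than a real gRPC status object) is to treat NotFound as an already-done cleanup:

package main

import (
	"fmt"
	"strings"
)

// alreadyGone reports whether a runtime error string from these logs is the
// benign "container no longer exists" case rather than a real failure.
func alreadyGone(errText string) bool {
	return strings.Contains(errText, "code = NotFound") ||
		strings.Contains(errText, "ID does not exist")
}

func main() {
	// hypothetical truncated ID, for illustration only
	err := `rpc error: code = NotFound desc = could not find container "c137bf96...": container with ID starting with c137bf96... not found: ID does not exist`
	fmt.Println(alreadyGone(err)) // true: safe to ignore during cleanup
}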
pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.661570 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.662046 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.662240 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s888\" (UniqueName: \"kubernetes.io/projected/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-kube-api-access-7s888\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.765523 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.765619 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s888\" (UniqueName: \"kubernetes.io/projected/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-kube-api-access-7s888\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.765720 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.765794 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.765833 4945 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.773253 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.775817 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.775850 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.776303 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.786393 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s888\" (UniqueName: \"kubernetes.io/projected/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-kube-api-access-7s888\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:28 crc kubenswrapper[4945]: I0109 01:07:28.864753 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:29 crc kubenswrapper[4945]: I0109 01:07:29.435259 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44"] Jan 09 01:07:29 crc kubenswrapper[4945]: W0109 01:07:29.437358 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cc86b17_cd47_4048_8171_5c4d6bbc3ea8.slice/crio-ce232ccc6e435ad783d430577e3a24f0d3b2c871545a9e86638259b65751fb5e WatchSource:0}: Error finding container ce232ccc6e435ad783d430577e3a24f0d3b2c871545a9e86638259b65751fb5e: Status 404 returned error can't find the container with id ce232ccc6e435ad783d430577e3a24f0d3b2c871545a9e86638259b65751fb5e Jan 09 01:07:29 crc kubenswrapper[4945]: I0109 01:07:29.440981 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 01:07:30 crc kubenswrapper[4945]: I0109 01:07:30.165555 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" event={"ID":"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8","Type":"ContainerStarted","Data":"ce232ccc6e435ad783d430577e3a24f0d3b2c871545a9e86638259b65751fb5e"} Jan 09 01:07:41 crc kubenswrapper[4945]: I0109 01:07:41.770728 4945 scope.go:117] "RemoveContainer" containerID="1e6e25672e2df6fb6afd22e3fd4a7309944afdd24f8282f65ad2646df5b506e9" Jan 09 01:07:43 crc kubenswrapper[4945]: I0109 01:07:43.494021 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 01:07:44 crc kubenswrapper[4945]: I0109 01:07:44.386596 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" event={"ID":"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8","Type":"ContainerStarted","Data":"2a72e0ad88e5c183326fe6c9196cd2799fc3466803732134daf0c67d020f2c2e"} Jan 09 01:07:44 crc kubenswrapper[4945]: I0109 01:07:44.406719 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" podStartSLOduration=2.356008405 podStartE2EDuration="16.406632878s" podCreationTimestamp="2026-01-09 01:07:28 +0000 UTC" firstStartedPulling="2026-01-09 01:07:29.440662141 +0000 UTC m=+6719.751821087" lastFinishedPulling="2026-01-09 01:07:43.491286614 +0000 UTC m=+6733.802445560" observedRunningTime="2026-01-09 01:07:44.40222151 +0000 UTC m=+6734.713380476" watchObservedRunningTime="2026-01-09 01:07:44.406632878 +0000 UTC m=+6734.717791834" Jan 09 01:07:57 crc kubenswrapper[4945]: I0109 01:07:57.533838 4945 generic.go:334] "Generic (PLEG): container finished" podID="2cc86b17-cd47-4048-8171-5c4d6bbc3ea8" containerID="2a72e0ad88e5c183326fe6c9196cd2799fc3466803732134daf0c67d020f2c2e" exitCode=0 Jan 09 01:07:57 crc kubenswrapper[4945]: I0109 01:07:57.533933 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" event={"ID":"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8","Type":"ContainerDied","Data":"2a72e0ad88e5c183326fe6c9196cd2799fc3466803732134daf0c67d020f2c2e"} Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.115540 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.291428 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-ssh-key-openstack-cell1\") pod \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.291571 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-inventory\") pod \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.291679 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-pre-adoption-validation-combined-ca-bundle\") pod \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.291915 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s888\" (UniqueName: \"kubernetes.io/projected/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-kube-api-access-7s888\") pod \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.291974 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-ceph\") pod \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\" (UID: \"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8\") " Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.297155 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "2cc86b17-cd47-4048-8171-5c4d6bbc3ea8" (UID: "2cc86b17-cd47-4048-8171-5c4d6bbc3ea8"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.297651 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-ceph" (OuterVolumeSpecName: "ceph") pod "2cc86b17-cd47-4048-8171-5c4d6bbc3ea8" (UID: "2cc86b17-cd47-4048-8171-5c4d6bbc3ea8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.316015 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-kube-api-access-7s888" (OuterVolumeSpecName: "kube-api-access-7s888") pod "2cc86b17-cd47-4048-8171-5c4d6bbc3ea8" (UID: "2cc86b17-cd47-4048-8171-5c4d6bbc3ea8"). InnerVolumeSpecName "kube-api-access-7s888". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.321817 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "2cc86b17-cd47-4048-8171-5c4d6bbc3ea8" (UID: "2cc86b17-cd47-4048-8171-5c4d6bbc3ea8"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.322170 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-inventory" (OuterVolumeSpecName: "inventory") pod "2cc86b17-cd47-4048-8171-5c4d6bbc3ea8" (UID: "2cc86b17-cd47-4048-8171-5c4d6bbc3ea8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.394448 4945 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.394481 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s888\" (UniqueName: \"kubernetes.io/projected/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-kube-api-access-7s888\") on node \"crc\" DevicePath \"\"" Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.394494 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.394502 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.394511 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cc86b17-cd47-4048-8171-5c4d6bbc3ea8-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.602649 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" event={"ID":"2cc86b17-cd47-4048-8171-5c4d6bbc3ea8","Type":"ContainerDied","Data":"ce232ccc6e435ad783d430577e3a24f0d3b2c871545a9e86638259b65751fb5e"} Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.602704 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce232ccc6e435ad783d430577e3a24f0d3b2c871545a9e86638259b65751fb5e" Jan 09 01:07:59 crc kubenswrapper[4945]: I0109 01:07:59.602773 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44" Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.375733 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"] Jan 09 01:08:07 crc kubenswrapper[4945]: E0109 01:08:07.376757 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc86b17-cd47-4048-8171-5c4d6bbc3ea8" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.376774 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc86b17-cd47-4048-8171-5c4d6bbc3ea8" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.377269 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc86b17-cd47-4048-8171-5c4d6bbc3ea8" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.378271 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx" Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.381244 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.381680 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.381712 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.381760 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9" Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.409607 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"] Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.469855 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blwjw\" (UniqueName: \"kubernetes.io/projected/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-kube-api-access-blwjw\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx" Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.470165 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx" Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.470260 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx" Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.470345 4945 
Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.469855 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blwjw\" (UniqueName: \"kubernetes.io/projected/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-kube-api-access-blwjw\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"
Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.470165 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"
Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.470260 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"
Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.470345 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"
Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.470521 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"
Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.571987 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"
Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.572306 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"
Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.572385 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blwjw\" (UniqueName: \"kubernetes.io/projected/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-kube-api-access-blwjw\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"
Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.572426 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"
Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.572519 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"
Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.587329 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"
Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.587342 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"
Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.587539 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"
Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.587594 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"
Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.597118 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blwjw\" (UniqueName: \"kubernetes.io/projected/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-kube-api-access-blwjw\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"
Jan 09 01:08:07 crc kubenswrapper[4945]: I0109 01:08:07.702871 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"
Jan 09 01:08:08 crc kubenswrapper[4945]: I0109 01:08:08.312511 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx"]
Jan 09 01:08:08 crc kubenswrapper[4945]: W0109 01:08:08.314792 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe7e1ae9_abb7_4f68_834a_5e4245dd2374.slice/crio-077493723da4b6fb209eaaed4106f53ae8eced2fb015820fa19e68d0efb225bc WatchSource:0}: Error finding container 077493723da4b6fb209eaaed4106f53ae8eced2fb015820fa19e68d0efb225bc: Status 404 returned error can't find the container with id 077493723da4b6fb209eaaed4106f53ae8eced2fb015820fa19e68d0efb225bc
Jan 09 01:08:08 crc kubenswrapper[4945]: I0109 01:08:08.705038 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx" event={"ID":"fe7e1ae9-abb7-4f68-834a-5e4245dd2374","Type":"ContainerStarted","Data":"077493723da4b6fb209eaaed4106f53ae8eced2fb015820fa19e68d0efb225bc"}
Jan 09 01:08:09 crc kubenswrapper[4945]: I0109 01:08:09.715867 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx" event={"ID":"fe7e1ae9-abb7-4f68-834a-5e4245dd2374","Type":"ContainerStarted","Data":"46c0c28b859bf4951afacd4f5a5e7d9b08107dc8c13da1868ae792a37b5a2d1e"}
Jan 09 01:08:09 crc kubenswrapper[4945]: I0109 01:08:09.741506 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx" podStartSLOduration=2.542280237 podStartE2EDuration="2.741447278s" podCreationTimestamp="2026-01-09 01:08:07 +0000 UTC" firstStartedPulling="2026-01-09 01:08:08.318684357 +0000 UTC m=+6758.629843303" lastFinishedPulling="2026-01-09 01:08:08.517851398 +0000 UTC m=+6758.829010344" observedRunningTime="2026-01-09 01:08:09.731562925 +0000 UTC m=+6760.042721861" watchObservedRunningTime="2026-01-09 01:08:09.741447278 +0000 UTC m=+6760.052606244"
Jan 09 01:08:43 crc kubenswrapper[4945]: I0109 01:08:43.485659 4945 scope.go:117] "RemoveContainer" containerID="7520ac7a93f942478a577dd83cde7a793049e19329433227b40b90bbbaf8772e"
Jan 09 01:08:43 crc kubenswrapper[4945]: I0109 01:08:43.756734 4945 scope.go:117] "RemoveContainer" containerID="514bf6416e24d77ddf11939d5984ef053244e062747f4e81aed0c1bc41d86ca4"
Jan 09 01:09:27 crc kubenswrapper[4945]: I0109 01:09:27.061119 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-bxwnn"]
Jan 09 01:09:27 crc kubenswrapper[4945]: I0109 01:09:27.075908 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-bxwnn"]
Jan 09 01:09:28 crc kubenswrapper[4945]: I0109 01:09:28.014848 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4efab81a-6b40-47bd-b236-8039779c4933" path="/var/lib/kubelet/pods/4efab81a-6b40-47bd-b236-8039779c4933/volumes"
Jan 09 01:09:28 crc kubenswrapper[4945]: I0109 01:09:28.034473 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-f2a5-account-create-update-25zgt"]
Jan 09 01:09:28 crc kubenswrapper[4945]: I0109 01:09:28.046034 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-f2a5-account-create-update-25zgt"]
Jan 09 01:09:30 crc kubenswrapper[4945]: I0109 01:09:30.032289 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1f69a96-868a-468d-9e07-53445536bc34" path="/var/lib/kubelet/pods/e1f69a96-868a-468d-9e07-53445536bc34/volumes"
volumes dir" podUID="e1f69a96-868a-468d-9e07-53445536bc34" path="/var/lib/kubelet/pods/e1f69a96-868a-468d-9e07-53445536bc34/volumes" Jan 09 01:09:34 crc kubenswrapper[4945]: I0109 01:09:34.045658 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-l6hdm"] Jan 09 01:09:34 crc kubenswrapper[4945]: I0109 01:09:34.057610 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-l6hdm"] Jan 09 01:09:35 crc kubenswrapper[4945]: I0109 01:09:35.039045 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db3e-account-create-update-hkwdz"] Jan 09 01:09:35 crc kubenswrapper[4945]: I0109 01:09:35.049784 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db3e-account-create-update-hkwdz"] Jan 09 01:09:36 crc kubenswrapper[4945]: I0109 01:09:36.050664 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4442f9cc-03ad-40e5-b649-d397e7938e98" path="/var/lib/kubelet/pods/4442f9cc-03ad-40e5-b649-d397e7938e98/volumes" Jan 09 01:09:36 crc kubenswrapper[4945]: I0109 01:09:36.051585 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9c3b70e-849b-45cb-8ced-891e7755cda5" path="/var/lib/kubelet/pods/a9c3b70e-849b-45cb-8ced-891e7755cda5/volumes" Jan 09 01:09:43 crc kubenswrapper[4945]: I0109 01:09:43.577909 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:09:43 crc kubenswrapper[4945]: I0109 01:09:43.578481 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:09:43 crc kubenswrapper[4945]: I0109 01:09:43.824868 4945 scope.go:117] "RemoveContainer" containerID="f4101c5d14e8a616e90b73822f5e90a04b1a8fcea1230909ab61ce112c26279f" Jan 09 01:09:43 crc kubenswrapper[4945]: I0109 01:09:43.864669 4945 scope.go:117] "RemoveContainer" containerID="2bea1279cb454f01584a896a6aae361c992fe4f7d329b8f6c949b50c490f1e6a" Jan 09 01:09:43 crc kubenswrapper[4945]: I0109 01:09:43.910265 4945 scope.go:117] "RemoveContainer" containerID="7f1c62919922dd47c242aa63d0d8290af7c95f99b731affc6df736504ad4b96e" Jan 09 01:09:43 crc kubenswrapper[4945]: I0109 01:09:43.977407 4945 scope.go:117] "RemoveContainer" containerID="a0fb92e6c3f6c28d07bd9953eeff44ffea3391ae8ebe5c317c1024cdfc8ef3b4" Jan 09 01:10:10 crc kubenswrapper[4945]: I0109 01:10:10.052741 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-4qtv7"] Jan 09 01:10:10 crc kubenswrapper[4945]: I0109 01:10:10.061359 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-4qtv7"] Jan 09 01:10:12 crc kubenswrapper[4945]: I0109 01:10:12.018697 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d93bb14f-1fda-48f9-9251-e23763538847" path="/var/lib/kubelet/pods/d93bb14f-1fda-48f9-9251-e23763538847/volumes" Jan 09 01:10:13 crc kubenswrapper[4945]: I0109 01:10:13.578047 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:10:13 crc kubenswrapper[4945]: I0109 01:10:13.578120 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:10:34 crc kubenswrapper[4945]: I0109 01:10:34.675417 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kj2gb"] Jan 09 01:10:34 crc kubenswrapper[4945]: I0109 01:10:34.679416 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kj2gb" Jan 09 01:10:34 crc kubenswrapper[4945]: I0109 01:10:34.687279 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kj2gb"] Jan 09 01:10:34 crc kubenswrapper[4945]: I0109 01:10:34.868770 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcqhw\" (UniqueName: \"kubernetes.io/projected/82133c3c-02ca-4f64-acc2-9e5a0d34d905-kube-api-access-kcqhw\") pod \"certified-operators-kj2gb\" (UID: \"82133c3c-02ca-4f64-acc2-9e5a0d34d905\") " pod="openshift-marketplace/certified-operators-kj2gb" Jan 09 01:10:34 crc kubenswrapper[4945]: I0109 01:10:34.869123 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82133c3c-02ca-4f64-acc2-9e5a0d34d905-utilities\") pod \"certified-operators-kj2gb\" (UID: \"82133c3c-02ca-4f64-acc2-9e5a0d34d905\") " pod="openshift-marketplace/certified-operators-kj2gb" Jan 09 01:10:34 crc kubenswrapper[4945]: I0109 01:10:34.869185 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82133c3c-02ca-4f64-acc2-9e5a0d34d905-catalog-content\") pod \"certified-operators-kj2gb\" (UID: \"82133c3c-02ca-4f64-acc2-9e5a0d34d905\") " pod="openshift-marketplace/certified-operators-kj2gb" Jan 09 01:10:34 crc kubenswrapper[4945]: I0109 01:10:34.970778 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82133c3c-02ca-4f64-acc2-9e5a0d34d905-utilities\") pod \"certified-operators-kj2gb\" (UID: \"82133c3c-02ca-4f64-acc2-9e5a0d34d905\") " pod="openshift-marketplace/certified-operators-kj2gb" Jan 09 01:10:34 crc kubenswrapper[4945]: I0109 01:10:34.970855 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82133c3c-02ca-4f64-acc2-9e5a0d34d905-catalog-content\") pod \"certified-operators-kj2gb\" (UID: \"82133c3c-02ca-4f64-acc2-9e5a0d34d905\") " pod="openshift-marketplace/certified-operators-kj2gb" Jan 09 01:10:34 crc kubenswrapper[4945]: I0109 01:10:34.970922 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcqhw\" (UniqueName: \"kubernetes.io/projected/82133c3c-02ca-4f64-acc2-9e5a0d34d905-kube-api-access-kcqhw\") pod \"certified-operators-kj2gb\" (UID: \"82133c3c-02ca-4f64-acc2-9e5a0d34d905\") " 
pod="openshift-marketplace/certified-operators-kj2gb" Jan 09 01:10:34 crc kubenswrapper[4945]: I0109 01:10:34.971859 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82133c3c-02ca-4f64-acc2-9e5a0d34d905-utilities\") pod \"certified-operators-kj2gb\" (UID: \"82133c3c-02ca-4f64-acc2-9e5a0d34d905\") " pod="openshift-marketplace/certified-operators-kj2gb" Jan 09 01:10:34 crc kubenswrapper[4945]: I0109 01:10:34.972166 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82133c3c-02ca-4f64-acc2-9e5a0d34d905-catalog-content\") pod \"certified-operators-kj2gb\" (UID: \"82133c3c-02ca-4f64-acc2-9e5a0d34d905\") " pod="openshift-marketplace/certified-operators-kj2gb" Jan 09 01:10:34 crc kubenswrapper[4945]: I0109 01:10:34.996892 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcqhw\" (UniqueName: \"kubernetes.io/projected/82133c3c-02ca-4f64-acc2-9e5a0d34d905-kube-api-access-kcqhw\") pod \"certified-operators-kj2gb\" (UID: \"82133c3c-02ca-4f64-acc2-9e5a0d34d905\") " pod="openshift-marketplace/certified-operators-kj2gb" Jan 09 01:10:35 crc kubenswrapper[4945]: I0109 01:10:35.004902 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kj2gb" Jan 09 01:10:35 crc kubenswrapper[4945]: I0109 01:10:35.677188 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kj2gb"] Jan 09 01:10:35 crc kubenswrapper[4945]: W0109 01:10:35.705823 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82133c3c_02ca_4f64_acc2_9e5a0d34d905.slice/crio-93d235abd36a4148e673e56a50f4df1544f663a9c6324d382c85fc73d161d629 WatchSource:0}: Error finding container 93d235abd36a4148e673e56a50f4df1544f663a9c6324d382c85fc73d161d629: Status 404 returned error can't find the container with id 93d235abd36a4148e673e56a50f4df1544f663a9c6324d382c85fc73d161d629 Jan 09 01:10:36 crc kubenswrapper[4945]: I0109 01:10:36.297078 4945 generic.go:334] "Generic (PLEG): container finished" podID="82133c3c-02ca-4f64-acc2-9e5a0d34d905" containerID="ef277440cab30a712e2c2f8fdf2e1af4046e0af90e23e04c2196587cb744f216" exitCode=0 Jan 09 01:10:36 crc kubenswrapper[4945]: I0109 01:10:36.297157 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj2gb" event={"ID":"82133c3c-02ca-4f64-acc2-9e5a0d34d905","Type":"ContainerDied","Data":"ef277440cab30a712e2c2f8fdf2e1af4046e0af90e23e04c2196587cb744f216"} Jan 09 01:10:36 crc kubenswrapper[4945]: I0109 01:10:36.297503 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj2gb" event={"ID":"82133c3c-02ca-4f64-acc2-9e5a0d34d905","Type":"ContainerStarted","Data":"93d235abd36a4148e673e56a50f4df1544f663a9c6324d382c85fc73d161d629"} Jan 09 01:10:38 crc kubenswrapper[4945]: I0109 01:10:38.073528 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k4l5n"] Jan 09 01:10:38 crc kubenswrapper[4945]: I0109 01:10:38.076304 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k4l5n" Jan 09 01:10:38 crc kubenswrapper[4945]: I0109 01:10:38.088367 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k4l5n"] Jan 09 01:10:38 crc kubenswrapper[4945]: I0109 01:10:38.251355 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdpff\" (UniqueName: \"kubernetes.io/projected/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-kube-api-access-mdpff\") pod \"redhat-operators-k4l5n\" (UID: \"d2e4e63c-59d9-4f98-b5fd-5845ccc79580\") " pod="openshift-marketplace/redhat-operators-k4l5n" Jan 09 01:10:38 crc kubenswrapper[4945]: I0109 01:10:38.251751 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-catalog-content\") pod \"redhat-operators-k4l5n\" (UID: \"d2e4e63c-59d9-4f98-b5fd-5845ccc79580\") " pod="openshift-marketplace/redhat-operators-k4l5n" Jan 09 01:10:38 crc kubenswrapper[4945]: I0109 01:10:38.251914 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-utilities\") pod \"redhat-operators-k4l5n\" (UID: \"d2e4e63c-59d9-4f98-b5fd-5845ccc79580\") " pod="openshift-marketplace/redhat-operators-k4l5n" Jan 09 01:10:38 crc kubenswrapper[4945]: I0109 01:10:38.316919 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj2gb" event={"ID":"82133c3c-02ca-4f64-acc2-9e5a0d34d905","Type":"ContainerStarted","Data":"a41e3e33697ad89ffb83aa93429cc081bd066f47dc11bd39f7df2804f91d8f74"} Jan 09 01:10:38 crc kubenswrapper[4945]: I0109 01:10:38.353631 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-utilities\") pod \"redhat-operators-k4l5n\" (UID: \"d2e4e63c-59d9-4f98-b5fd-5845ccc79580\") " pod="openshift-marketplace/redhat-operators-k4l5n" Jan 09 01:10:38 crc kubenswrapper[4945]: I0109 01:10:38.353769 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdpff\" (UniqueName: \"kubernetes.io/projected/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-kube-api-access-mdpff\") pod \"redhat-operators-k4l5n\" (UID: \"d2e4e63c-59d9-4f98-b5fd-5845ccc79580\") " pod="openshift-marketplace/redhat-operators-k4l5n" Jan 09 01:10:38 crc kubenswrapper[4945]: I0109 01:10:38.353827 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-catalog-content\") pod \"redhat-operators-k4l5n\" (UID: \"d2e4e63c-59d9-4f98-b5fd-5845ccc79580\") " pod="openshift-marketplace/redhat-operators-k4l5n" Jan 09 01:10:38 crc kubenswrapper[4945]: I0109 01:10:38.354108 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-utilities\") pod \"redhat-operators-k4l5n\" (UID: \"d2e4e63c-59d9-4f98-b5fd-5845ccc79580\") " pod="openshift-marketplace/redhat-operators-k4l5n" Jan 09 01:10:38 crc kubenswrapper[4945]: I0109 01:10:38.354402 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-catalog-content\") pod \"redhat-operators-k4l5n\" (UID: \"d2e4e63c-59d9-4f98-b5fd-5845ccc79580\") " pod="openshift-marketplace/redhat-operators-k4l5n" Jan 09 01:10:38 crc kubenswrapper[4945]: I0109 01:10:38.388986 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdpff\" (UniqueName: \"kubernetes.io/projected/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-kube-api-access-mdpff\") pod \"redhat-operators-k4l5n\" (UID: \"d2e4e63c-59d9-4f98-b5fd-5845ccc79580\") " pod="openshift-marketplace/redhat-operators-k4l5n" Jan 09 01:10:38 crc kubenswrapper[4945]: I0109 01:10:38.395612 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k4l5n" Jan 09 01:10:39 crc kubenswrapper[4945]: I0109 01:10:39.004237 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k4l5n"] Jan 09 01:10:39 crc kubenswrapper[4945]: I0109 01:10:39.331613 4945 generic.go:334] "Generic (PLEG): container finished" podID="82133c3c-02ca-4f64-acc2-9e5a0d34d905" containerID="a41e3e33697ad89ffb83aa93429cc081bd066f47dc11bd39f7df2804f91d8f74" exitCode=0 Jan 09 01:10:39 crc kubenswrapper[4945]: I0109 01:10:39.331922 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj2gb" event={"ID":"82133c3c-02ca-4f64-acc2-9e5a0d34d905","Type":"ContainerDied","Data":"a41e3e33697ad89ffb83aa93429cc081bd066f47dc11bd39f7df2804f91d8f74"} Jan 09 01:10:39 crc kubenswrapper[4945]: I0109 01:10:39.338376 4945 generic.go:334] "Generic (PLEG): container finished" podID="d2e4e63c-59d9-4f98-b5fd-5845ccc79580" containerID="4585e35ac029d6fbad9cf8695b8c89da61f22dce295a7aa39d52c3b2c64afe77" exitCode=0 Jan 09 01:10:39 crc kubenswrapper[4945]: I0109 01:10:39.338431 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l5n" event={"ID":"d2e4e63c-59d9-4f98-b5fd-5845ccc79580","Type":"ContainerDied","Data":"4585e35ac029d6fbad9cf8695b8c89da61f22dce295a7aa39d52c3b2c64afe77"} Jan 09 01:10:39 crc kubenswrapper[4945]: I0109 01:10:39.338464 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l5n" event={"ID":"d2e4e63c-59d9-4f98-b5fd-5845ccc79580","Type":"ContainerStarted","Data":"7d313ed25abb86766bc0f2f9ecb1856179c79ccb9b373cb320be0bc53ef6e8de"} Jan 09 01:10:40 crc kubenswrapper[4945]: I0109 01:10:40.356411 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj2gb" event={"ID":"82133c3c-02ca-4f64-acc2-9e5a0d34d905","Type":"ContainerStarted","Data":"50cb03f382227f07003a48fcae63224d561503d186647fbe056eabc577ccc081"} Jan 09 01:10:40 crc kubenswrapper[4945]: I0109 01:10:40.378254 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l5n" event={"ID":"d2e4e63c-59d9-4f98-b5fd-5845ccc79580","Type":"ContainerStarted","Data":"addf3cb46eea6093102f8766c263f32ac930cedd57284feadfa031b9d83ea1b8"} Jan 09 01:10:40 crc kubenswrapper[4945]: I0109 01:10:40.386301 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kj2gb" podStartSLOduration=2.940970717 podStartE2EDuration="6.386283097s" podCreationTimestamp="2026-01-09 01:10:34 +0000 UTC" firstStartedPulling="2026-01-09 01:10:36.300277712 +0000 UTC m=+6906.611436658" lastFinishedPulling="2026-01-09 01:10:39.745590092 +0000 UTC 
m=+6910.056749038" observedRunningTime="2026-01-09 01:10:40.382780811 +0000 UTC m=+6910.693939787" watchObservedRunningTime="2026-01-09 01:10:40.386283097 +0000 UTC m=+6910.697442043" Jan 09 01:10:43 crc kubenswrapper[4945]: I0109 01:10:43.577943 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:10:43 crc kubenswrapper[4945]: I0109 01:10:43.578810 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:10:43 crc kubenswrapper[4945]: I0109 01:10:43.579019 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 09 01:10:43 crc kubenswrapper[4945]: I0109 01:10:43.581164 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c87d76440281855df3f1cf34d41214b66f937a34385a2dede3e17d011f602ec"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 01:10:43 crc kubenswrapper[4945]: I0109 01:10:43.581267 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://8c87d76440281855df3f1cf34d41214b66f937a34385a2dede3e17d011f602ec" gracePeriod=600 Jan 09 01:10:44 crc kubenswrapper[4945]: I0109 01:10:44.111260 4945 scope.go:117] "RemoveContainer" containerID="79bf42946599a67dccd52f20998cd7c9c10aebbc44232b1cfdfafa899f7dc311" Jan 09 01:10:44 crc kubenswrapper[4945]: I0109 01:10:44.262372 4945 scope.go:117] "RemoveContainer" containerID="5a15fa223f5684d35ba51017a32814d722494dbbae6fd01eb5e8b9d2087ddf0b" Jan 09 01:10:45 crc kubenswrapper[4945]: I0109 01:10:45.006327 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kj2gb" Jan 09 01:10:45 crc kubenswrapper[4945]: I0109 01:10:45.006385 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kj2gb" Jan 09 01:10:45 crc kubenswrapper[4945]: I0109 01:10:45.131333 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kj2gb" Jan 09 01:10:45 crc kubenswrapper[4945]: I0109 01:10:45.432273 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="8c87d76440281855df3f1cf34d41214b66f937a34385a2dede3e17d011f602ec" exitCode=0 Jan 09 01:10:45 crc kubenswrapper[4945]: I0109 01:10:45.432333 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"8c87d76440281855df3f1cf34d41214b66f937a34385a2dede3e17d011f602ec"} Jan 09 01:10:45 crc kubenswrapper[4945]: I0109 01:10:45.432394 4945 
scope.go:117] "RemoveContainer" containerID="c68e788e434a7bb156c8d155ccbcd9a7788894948eac575e5643675bbdd68c31" Jan 09 01:10:45 crc kubenswrapper[4945]: I0109 01:10:45.435295 4945 generic.go:334] "Generic (PLEG): container finished" podID="d2e4e63c-59d9-4f98-b5fd-5845ccc79580" containerID="addf3cb46eea6093102f8766c263f32ac930cedd57284feadfa031b9d83ea1b8" exitCode=0 Jan 09 01:10:45 crc kubenswrapper[4945]: I0109 01:10:45.435320 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l5n" event={"ID":"d2e4e63c-59d9-4f98-b5fd-5845ccc79580","Type":"ContainerDied","Data":"addf3cb46eea6093102f8766c263f32ac930cedd57284feadfa031b9d83ea1b8"} Jan 09 01:10:45 crc kubenswrapper[4945]: I0109 01:10:45.496344 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kj2gb" Jan 09 01:10:46 crc kubenswrapper[4945]: I0109 01:10:46.447470 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670"} Jan 09 01:10:46 crc kubenswrapper[4945]: I0109 01:10:46.456859 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l5n" event={"ID":"d2e4e63c-59d9-4f98-b5fd-5845ccc79580","Type":"ContainerStarted","Data":"83b2c50aaa8335a124b528b58aba2854263a8b2e1a9ab2d9084f0295d500a4d6"} Jan 09 01:10:46 crc kubenswrapper[4945]: I0109 01:10:46.513140 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k4l5n" podStartSLOduration=1.677133817 podStartE2EDuration="8.513115684s" podCreationTimestamp="2026-01-09 01:10:38 +0000 UTC" firstStartedPulling="2026-01-09 01:10:39.340397511 +0000 UTC m=+6909.651556457" lastFinishedPulling="2026-01-09 01:10:46.176379378 +0000 UTC m=+6916.487538324" observedRunningTime="2026-01-09 01:10:46.504078192 +0000 UTC m=+6916.815237158" watchObservedRunningTime="2026-01-09 01:10:46.513115684 +0000 UTC m=+6916.824274630" Jan 09 01:10:48 crc kubenswrapper[4945]: I0109 01:10:48.396511 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k4l5n" Jan 09 01:10:48 crc kubenswrapper[4945]: I0109 01:10:48.398193 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k4l5n" Jan 09 01:10:49 crc kubenswrapper[4945]: I0109 01:10:49.454107 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l5n" podUID="d2e4e63c-59d9-4f98-b5fd-5845ccc79580" containerName="registry-server" probeResult="failure" output=< Jan 09 01:10:49 crc kubenswrapper[4945]: timeout: failed to connect service ":50051" within 1s Jan 09 01:10:49 crc kubenswrapper[4945]: > Jan 09 01:10:50 crc kubenswrapper[4945]: I0109 01:10:50.263499 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kj2gb"] Jan 09 01:10:50 crc kubenswrapper[4945]: I0109 01:10:50.264102 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kj2gb" podUID="82133c3c-02ca-4f64-acc2-9e5a0d34d905" containerName="registry-server" containerID="cri-o://50cb03f382227f07003a48fcae63224d561503d186647fbe056eabc577ccc081" gracePeriod=2 Jan 09 01:10:50 crc kubenswrapper[4945]: I0109 
01:10:50.494717 4945 generic.go:334] "Generic (PLEG): container finished" podID="82133c3c-02ca-4f64-acc2-9e5a0d34d905" containerID="50cb03f382227f07003a48fcae63224d561503d186647fbe056eabc577ccc081" exitCode=0 Jan 09 01:10:50 crc kubenswrapper[4945]: I0109 01:10:50.495819 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj2gb" event={"ID":"82133c3c-02ca-4f64-acc2-9e5a0d34d905","Type":"ContainerDied","Data":"50cb03f382227f07003a48fcae63224d561503d186647fbe056eabc577ccc081"} Jan 09 01:10:50 crc kubenswrapper[4945]: I0109 01:10:50.936787 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kj2gb" Jan 09 01:10:51 crc kubenswrapper[4945]: I0109 01:10:51.077837 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82133c3c-02ca-4f64-acc2-9e5a0d34d905-utilities\") pod \"82133c3c-02ca-4f64-acc2-9e5a0d34d905\" (UID: \"82133c3c-02ca-4f64-acc2-9e5a0d34d905\") " Jan 09 01:10:51 crc kubenswrapper[4945]: I0109 01:10:51.078050 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcqhw\" (UniqueName: \"kubernetes.io/projected/82133c3c-02ca-4f64-acc2-9e5a0d34d905-kube-api-access-kcqhw\") pod \"82133c3c-02ca-4f64-acc2-9e5a0d34d905\" (UID: \"82133c3c-02ca-4f64-acc2-9e5a0d34d905\") " Jan 09 01:10:51 crc kubenswrapper[4945]: I0109 01:10:51.078085 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82133c3c-02ca-4f64-acc2-9e5a0d34d905-catalog-content\") pod \"82133c3c-02ca-4f64-acc2-9e5a0d34d905\" (UID: \"82133c3c-02ca-4f64-acc2-9e5a0d34d905\") " Jan 09 01:10:51 crc kubenswrapper[4945]: I0109 01:10:51.086148 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82133c3c-02ca-4f64-acc2-9e5a0d34d905-utilities" (OuterVolumeSpecName: "utilities") pod "82133c3c-02ca-4f64-acc2-9e5a0d34d905" (UID: "82133c3c-02ca-4f64-acc2-9e5a0d34d905"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:10:51 crc kubenswrapper[4945]: I0109 01:10:51.100823 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82133c3c-02ca-4f64-acc2-9e5a0d34d905-kube-api-access-kcqhw" (OuterVolumeSpecName: "kube-api-access-kcqhw") pod "82133c3c-02ca-4f64-acc2-9e5a0d34d905" (UID: "82133c3c-02ca-4f64-acc2-9e5a0d34d905"). InnerVolumeSpecName "kube-api-access-kcqhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:10:51 crc kubenswrapper[4945]: I0109 01:10:51.153485 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82133c3c-02ca-4f64-acc2-9e5a0d34d905-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82133c3c-02ca-4f64-acc2-9e5a0d34d905" (UID: "82133c3c-02ca-4f64-acc2-9e5a0d34d905"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:10:51 crc kubenswrapper[4945]: I0109 01:10:51.183541 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcqhw\" (UniqueName: \"kubernetes.io/projected/82133c3c-02ca-4f64-acc2-9e5a0d34d905-kube-api-access-kcqhw\") on node \"crc\" DevicePath \"\"" Jan 09 01:10:51 crc kubenswrapper[4945]: I0109 01:10:51.183575 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82133c3c-02ca-4f64-acc2-9e5a0d34d905-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 01:10:51 crc kubenswrapper[4945]: I0109 01:10:51.183585 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82133c3c-02ca-4f64-acc2-9e5a0d34d905-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 01:10:51 crc kubenswrapper[4945]: I0109 01:10:51.507429 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj2gb" event={"ID":"82133c3c-02ca-4f64-acc2-9e5a0d34d905","Type":"ContainerDied","Data":"93d235abd36a4148e673e56a50f4df1544f663a9c6324d382c85fc73d161d629"} Jan 09 01:10:51 crc kubenswrapper[4945]: I0109 01:10:51.507501 4945 scope.go:117] "RemoveContainer" containerID="50cb03f382227f07003a48fcae63224d561503d186647fbe056eabc577ccc081" Jan 09 01:10:51 crc kubenswrapper[4945]: I0109 01:10:51.507590 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kj2gb" Jan 09 01:10:51 crc kubenswrapper[4945]: I0109 01:10:51.543257 4945 scope.go:117] "RemoveContainer" containerID="a41e3e33697ad89ffb83aa93429cc081bd066f47dc11bd39f7df2804f91d8f74" Jan 09 01:10:51 crc kubenswrapper[4945]: I0109 01:10:51.567950 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kj2gb"] Jan 09 01:10:51 crc kubenswrapper[4945]: I0109 01:10:51.576310 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kj2gb"] Jan 09 01:10:51 crc kubenswrapper[4945]: I0109 01:10:51.580323 4945 scope.go:117] "RemoveContainer" containerID="ef277440cab30a712e2c2f8fdf2e1af4046e0af90e23e04c2196587cb744f216" Jan 09 01:10:52 crc kubenswrapper[4945]: I0109 01:10:52.016128 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82133c3c-02ca-4f64-acc2-9e5a0d34d905" path="/var/lib/kubelet/pods/82133c3c-02ca-4f64-acc2-9e5a0d34d905/volumes" Jan 09 01:10:58 crc kubenswrapper[4945]: I0109 01:10:58.453698 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k4l5n" Jan 09 01:10:58 crc kubenswrapper[4945]: I0109 01:10:58.511326 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k4l5n" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.063531 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k4l5n"] Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.064649 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k4l5n" podUID="d2e4e63c-59d9-4f98-b5fd-5845ccc79580" containerName="registry-server" containerID="cri-o://83b2c50aaa8335a124b528b58aba2854263a8b2e1a9ab2d9084f0295d500a4d6" gracePeriod=2 Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.596888 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k4l5n" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.626769 4945 generic.go:334] "Generic (PLEG): container finished" podID="d2e4e63c-59d9-4f98-b5fd-5845ccc79580" containerID="83b2c50aaa8335a124b528b58aba2854263a8b2e1a9ab2d9084f0295d500a4d6" exitCode=0 Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.626831 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l5n" event={"ID":"d2e4e63c-59d9-4f98-b5fd-5845ccc79580","Type":"ContainerDied","Data":"83b2c50aaa8335a124b528b58aba2854263a8b2e1a9ab2d9084f0295d500a4d6"} Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.627179 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l5n" event={"ID":"d2e4e63c-59d9-4f98-b5fd-5845ccc79580","Type":"ContainerDied","Data":"7d313ed25abb86766bc0f2f9ecb1856179c79ccb9b373cb320be0bc53ef6e8de"} Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.627225 4945 scope.go:117] "RemoveContainer" containerID="83b2c50aaa8335a124b528b58aba2854263a8b2e1a9ab2d9084f0295d500a4d6" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.626888 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k4l5n" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.659317 4945 scope.go:117] "RemoveContainer" containerID="addf3cb46eea6093102f8766c263f32ac930cedd57284feadfa031b9d83ea1b8" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.696164 4945 scope.go:117] "RemoveContainer" containerID="4585e35ac029d6fbad9cf8695b8c89da61f22dce295a7aa39d52c3b2c64afe77" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.730013 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdpff\" (UniqueName: \"kubernetes.io/projected/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-kube-api-access-mdpff\") pod \"d2e4e63c-59d9-4f98-b5fd-5845ccc79580\" (UID: \"d2e4e63c-59d9-4f98-b5fd-5845ccc79580\") " Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.730096 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-utilities\") pod \"d2e4e63c-59d9-4f98-b5fd-5845ccc79580\" (UID: \"d2e4e63c-59d9-4f98-b5fd-5845ccc79580\") " Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.730308 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-catalog-content\") pod \"d2e4e63c-59d9-4f98-b5fd-5845ccc79580\" (UID: \"d2e4e63c-59d9-4f98-b5fd-5845ccc79580\") " Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.731841 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-utilities" (OuterVolumeSpecName: "utilities") pod "d2e4e63c-59d9-4f98-b5fd-5845ccc79580" (UID: "d2e4e63c-59d9-4f98-b5fd-5845ccc79580"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.736404 4945 scope.go:117] "RemoveContainer" containerID="83b2c50aaa8335a124b528b58aba2854263a8b2e1a9ab2d9084f0295d500a4d6" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.736821 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-kube-api-access-mdpff" (OuterVolumeSpecName: "kube-api-access-mdpff") pod "d2e4e63c-59d9-4f98-b5fd-5845ccc79580" (UID: "d2e4e63c-59d9-4f98-b5fd-5845ccc79580"). InnerVolumeSpecName "kube-api-access-mdpff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:11:02 crc kubenswrapper[4945]: E0109 01:11:02.741601 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83b2c50aaa8335a124b528b58aba2854263a8b2e1a9ab2d9084f0295d500a4d6\": container with ID starting with 83b2c50aaa8335a124b528b58aba2854263a8b2e1a9ab2d9084f0295d500a4d6 not found: ID does not exist" containerID="83b2c50aaa8335a124b528b58aba2854263a8b2e1a9ab2d9084f0295d500a4d6" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.741666 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83b2c50aaa8335a124b528b58aba2854263a8b2e1a9ab2d9084f0295d500a4d6"} err="failed to get container status \"83b2c50aaa8335a124b528b58aba2854263a8b2e1a9ab2d9084f0295d500a4d6\": rpc error: code = NotFound desc = could not find container \"83b2c50aaa8335a124b528b58aba2854263a8b2e1a9ab2d9084f0295d500a4d6\": container with ID starting with 83b2c50aaa8335a124b528b58aba2854263a8b2e1a9ab2d9084f0295d500a4d6 not found: ID does not exist" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.741714 4945 scope.go:117] "RemoveContainer" containerID="addf3cb46eea6093102f8766c263f32ac930cedd57284feadfa031b9d83ea1b8" Jan 09 01:11:02 crc kubenswrapper[4945]: E0109 01:11:02.742144 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"addf3cb46eea6093102f8766c263f32ac930cedd57284feadfa031b9d83ea1b8\": container with ID starting with addf3cb46eea6093102f8766c263f32ac930cedd57284feadfa031b9d83ea1b8 not found: ID does not exist" containerID="addf3cb46eea6093102f8766c263f32ac930cedd57284feadfa031b9d83ea1b8" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.742190 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"addf3cb46eea6093102f8766c263f32ac930cedd57284feadfa031b9d83ea1b8"} err="failed to get container status \"addf3cb46eea6093102f8766c263f32ac930cedd57284feadfa031b9d83ea1b8\": rpc error: code = NotFound desc = could not find container \"addf3cb46eea6093102f8766c263f32ac930cedd57284feadfa031b9d83ea1b8\": container with ID starting with addf3cb46eea6093102f8766c263f32ac930cedd57284feadfa031b9d83ea1b8 not found: ID does not exist" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.742224 4945 scope.go:117] "RemoveContainer" containerID="4585e35ac029d6fbad9cf8695b8c89da61f22dce295a7aa39d52c3b2c64afe77" Jan 09 01:11:02 crc kubenswrapper[4945]: E0109 01:11:02.742543 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4585e35ac029d6fbad9cf8695b8c89da61f22dce295a7aa39d52c3b2c64afe77\": container with ID starting with 4585e35ac029d6fbad9cf8695b8c89da61f22dce295a7aa39d52c3b2c64afe77 not found: ID does not 
exist" containerID="4585e35ac029d6fbad9cf8695b8c89da61f22dce295a7aa39d52c3b2c64afe77" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.742576 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4585e35ac029d6fbad9cf8695b8c89da61f22dce295a7aa39d52c3b2c64afe77"} err="failed to get container status \"4585e35ac029d6fbad9cf8695b8c89da61f22dce295a7aa39d52c3b2c64afe77\": rpc error: code = NotFound desc = could not find container \"4585e35ac029d6fbad9cf8695b8c89da61f22dce295a7aa39d52c3b2c64afe77\": container with ID starting with 4585e35ac029d6fbad9cf8695b8c89da61f22dce295a7aa39d52c3b2c64afe77 not found: ID does not exist" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.833149 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdpff\" (UniqueName: \"kubernetes.io/projected/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-kube-api-access-mdpff\") on node \"crc\" DevicePath \"\"" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.833204 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.871391 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2e4e63c-59d9-4f98-b5fd-5845ccc79580" (UID: "d2e4e63c-59d9-4f98-b5fd-5845ccc79580"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.935818 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2e4e63c-59d9-4f98-b5fd-5845ccc79580-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 01:11:02 crc kubenswrapper[4945]: I0109 01:11:02.995379 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k4l5n"] Jan 09 01:11:03 crc kubenswrapper[4945]: I0109 01:11:03.016808 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k4l5n"] Jan 09 01:11:04 crc kubenswrapper[4945]: I0109 01:11:04.011309 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2e4e63c-59d9-4f98-b5fd-5845ccc79580" path="/var/lib/kubelet/pods/d2e4e63c-59d9-4f98-b5fd-5845ccc79580/volumes" Jan 09 01:12:51 crc kubenswrapper[4945]: I0109 01:12:51.044503 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-2bfl4"] Jan 09 01:12:51 crc kubenswrapper[4945]: I0109 01:12:51.061321 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-c46f-account-create-update-ngbqv"] Jan 09 01:12:51 crc kubenswrapper[4945]: I0109 01:12:51.070634 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-2bfl4"] Jan 09 01:12:51 crc kubenswrapper[4945]: I0109 01:12:51.080649 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-c46f-account-create-update-ngbqv"] Jan 09 01:12:52 crc kubenswrapper[4945]: I0109 01:12:52.024732 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e15f948-cd94-4e42-9b32-e7d3e77920a0" path="/var/lib/kubelet/pods/3e15f948-cd94-4e42-9b32-e7d3e77920a0/volumes" Jan 09 01:12:52 crc kubenswrapper[4945]: I0109 01:12:52.026578 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d" path="/var/lib/kubelet/pods/bf7d2aa4-f30f-4cf4-a7a5-f7f7c708de8d/volumes" Jan 09 01:13:05 crc kubenswrapper[4945]: I0109 01:13:05.087158 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-hvf5r"] Jan 09 01:13:05 crc kubenswrapper[4945]: I0109 01:13:05.111016 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-hvf5r"] Jan 09 01:13:06 crc kubenswrapper[4945]: I0109 01:13:06.018494 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="702288b9-1bd9-476d-bf96-6bf8f3e73b7c" path="/var/lib/kubelet/pods/702288b9-1bd9-476d-bf96-6bf8f3e73b7c/volumes" Jan 09 01:13:13 crc kubenswrapper[4945]: I0109 01:13:13.578312 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:13:13 crc kubenswrapper[4945]: I0109 01:13:13.578783 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:13:43 crc kubenswrapper[4945]: I0109 01:13:43.578348 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:13:43 crc kubenswrapper[4945]: I0109 01:13:43.579024 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:13:44 crc kubenswrapper[4945]: I0109 01:13:44.478797 4945 scope.go:117] "RemoveContainer" containerID="2b427fbe98d03533682547f241f5eafd820bb2b12a59e1892efaebe2d8173550" Jan 09 01:13:44 crc kubenswrapper[4945]: I0109 01:13:44.509735 4945 scope.go:117] "RemoveContainer" containerID="85c5685ce3d0f743a22885a94a9631fdb92a45d6cde0a3eee906955b1d3c09b7" Jan 09 01:13:44 crc kubenswrapper[4945]: I0109 01:13:44.554121 4945 scope.go:117] "RemoveContainer" containerID="49c4b59fb0868de203733b1ff55eab6ef0d934f9be5edd89e5a2147425773346" Jan 09 01:14:13 crc kubenswrapper[4945]: I0109 01:14:13.578818 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:14:13 crc kubenswrapper[4945]: I0109 01:14:13.579483 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:14:13 crc kubenswrapper[4945]: I0109 01:14:13.579563 4945 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 09 01:14:13 crc kubenswrapper[4945]: I0109 01:14:13.580543 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 01:14:13 crc kubenswrapper[4945]: I0109 01:14:13.580698 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" gracePeriod=600 Jan 09 01:14:13 crc kubenswrapper[4945]: E0109 01:14:13.720177 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:14:14 crc kubenswrapper[4945]: I0109 01:14:14.533486 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" exitCode=0 Jan 09 01:14:14 crc kubenswrapper[4945]: I0109 01:14:14.533743 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670"} Jan 09 01:14:14 crc kubenswrapper[4945]: I0109 01:14:14.534234 4945 scope.go:117] "RemoveContainer" containerID="8c87d76440281855df3f1cf34d41214b66f937a34385a2dede3e17d011f602ec" Jan 09 01:14:14 crc kubenswrapper[4945]: I0109 01:14:14.535332 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:14:14 crc kubenswrapper[4945]: E0109 01:14:14.535922 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:14:27 crc kubenswrapper[4945]: I0109 01:14:27.001109 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:14:27 crc kubenswrapper[4945]: E0109 01:14:27.002801 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" 
podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:14:41 crc kubenswrapper[4945]: I0109 01:14:41.000067 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:14:41 crc kubenswrapper[4945]: E0109 01:14:41.001652 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:14:55 crc kubenswrapper[4945]: I0109 01:14:55.001825 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:14:55 crc kubenswrapper[4945]: E0109 01:14:55.002962 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:14:57 crc kubenswrapper[4945]: E0109 01:14:57.847017 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.154258 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5"] Jan 09 01:15:00 crc kubenswrapper[4945]: E0109 01:15:00.155364 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2e4e63c-59d9-4f98-b5fd-5845ccc79580" containerName="extract-content" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.155381 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e4e63c-59d9-4f98-b5fd-5845ccc79580" containerName="extract-content" Jan 09 01:15:00 crc kubenswrapper[4945]: E0109 01:15:00.155415 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2e4e63c-59d9-4f98-b5fd-5845ccc79580" containerName="extract-utilities" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.155423 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e4e63c-59d9-4f98-b5fd-5845ccc79580" containerName="extract-utilities" Jan 09 01:15:00 crc kubenswrapper[4945]: E0109 01:15:00.155454 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82133c3c-02ca-4f64-acc2-9e5a0d34d905" containerName="registry-server" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.155462 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="82133c3c-02ca-4f64-acc2-9e5a0d34d905" containerName="registry-server" Jan 09 01:15:00 crc kubenswrapper[4945]: E0109 01:15:00.155474 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82133c3c-02ca-4f64-acc2-9e5a0d34d905" containerName="extract-content" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.155481 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="82133c3c-02ca-4f64-acc2-9e5a0d34d905" containerName="extract-content" Jan 09 01:15:00 crc kubenswrapper[4945]: E0109 01:15:00.155491 4945 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="82133c3c-02ca-4f64-acc2-9e5a0d34d905" containerName="extract-utilities" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.155498 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="82133c3c-02ca-4f64-acc2-9e5a0d34d905" containerName="extract-utilities" Jan 09 01:15:00 crc kubenswrapper[4945]: E0109 01:15:00.155514 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2e4e63c-59d9-4f98-b5fd-5845ccc79580" containerName="registry-server" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.155521 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e4e63c-59d9-4f98-b5fd-5845ccc79580" containerName="registry-server" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.155779 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="82133c3c-02ca-4f64-acc2-9e5a0d34d905" containerName="registry-server" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.155798 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2e4e63c-59d9-4f98-b5fd-5845ccc79580" containerName="registry-server" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.156855 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.163263 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.163328 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.176234 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5"] Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.239158 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59174838-ea08-4d97-91d2-9193dabd276f-secret-volume\") pod \"collect-profiles-29465355-bhrg5\" (UID: \"59174838-ea08-4d97-91d2-9193dabd276f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.239339 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24spp\" (UniqueName: \"kubernetes.io/projected/59174838-ea08-4d97-91d2-9193dabd276f-kube-api-access-24spp\") pod \"collect-profiles-29465355-bhrg5\" (UID: \"59174838-ea08-4d97-91d2-9193dabd276f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.239375 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59174838-ea08-4d97-91d2-9193dabd276f-config-volume\") pod \"collect-profiles-29465355-bhrg5\" (UID: \"59174838-ea08-4d97-91d2-9193dabd276f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.341248 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24spp\" (UniqueName: \"kubernetes.io/projected/59174838-ea08-4d97-91d2-9193dabd276f-kube-api-access-24spp\") pod \"collect-profiles-29465355-bhrg5\" (UID: 
\"59174838-ea08-4d97-91d2-9193dabd276f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.341331 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59174838-ea08-4d97-91d2-9193dabd276f-config-volume\") pod \"collect-profiles-29465355-bhrg5\" (UID: \"59174838-ea08-4d97-91d2-9193dabd276f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.341404 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59174838-ea08-4d97-91d2-9193dabd276f-secret-volume\") pod \"collect-profiles-29465355-bhrg5\" (UID: \"59174838-ea08-4d97-91d2-9193dabd276f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.342444 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59174838-ea08-4d97-91d2-9193dabd276f-config-volume\") pod \"collect-profiles-29465355-bhrg5\" (UID: \"59174838-ea08-4d97-91d2-9193dabd276f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.347539 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59174838-ea08-4d97-91d2-9193dabd276f-secret-volume\") pod \"collect-profiles-29465355-bhrg5\" (UID: \"59174838-ea08-4d97-91d2-9193dabd276f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.359539 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24spp\" (UniqueName: \"kubernetes.io/projected/59174838-ea08-4d97-91d2-9193dabd276f-kube-api-access-24spp\") pod \"collect-profiles-29465355-bhrg5\" (UID: \"59174838-ea08-4d97-91d2-9193dabd276f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.479803 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5" Jan 09 01:15:00 crc kubenswrapper[4945]: I0109 01:15:00.974784 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5"] Jan 09 01:15:01 crc kubenswrapper[4945]: I0109 01:15:01.081665 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5" event={"ID":"59174838-ea08-4d97-91d2-9193dabd276f","Type":"ContainerStarted","Data":"e7f10938a7e6a7b2471d43e55774d410e0d3117563b384b90b1c2148c16c3f74"} Jan 09 01:15:02 crc kubenswrapper[4945]: I0109 01:15:02.097105 4945 generic.go:334] "Generic (PLEG): container finished" podID="59174838-ea08-4d97-91d2-9193dabd276f" containerID="092555a7a8cf0423b416dec48b7eea54b7f4d80695cee1dd4efa1413828babed" exitCode=0 Jan 09 01:15:02 crc kubenswrapper[4945]: I0109 01:15:02.097200 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5" event={"ID":"59174838-ea08-4d97-91d2-9193dabd276f","Type":"ContainerDied","Data":"092555a7a8cf0423b416dec48b7eea54b7f4d80695cee1dd4efa1413828babed"} Jan 09 01:15:02 crc kubenswrapper[4945]: I0109 01:15:02.248110 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mwx94"] Jan 09 01:15:02 crc kubenswrapper[4945]: I0109 01:15:02.251120 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwx94" Jan 09 01:15:02 crc kubenswrapper[4945]: I0109 01:15:02.256804 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwx94"] Jan 09 01:15:02 crc kubenswrapper[4945]: I0109 01:15:02.385714 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-utilities\") pod \"redhat-marketplace-mwx94\" (UID: \"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f\") " pod="openshift-marketplace/redhat-marketplace-mwx94" Jan 09 01:15:02 crc kubenswrapper[4945]: I0109 01:15:02.385877 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-catalog-content\") pod \"redhat-marketplace-mwx94\" (UID: \"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f\") " pod="openshift-marketplace/redhat-marketplace-mwx94" Jan 09 01:15:02 crc kubenswrapper[4945]: I0109 01:15:02.385909 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvtm6\" (UniqueName: \"kubernetes.io/projected/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-kube-api-access-tvtm6\") pod \"redhat-marketplace-mwx94\" (UID: \"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f\") " pod="openshift-marketplace/redhat-marketplace-mwx94" Jan 09 01:15:02 crc kubenswrapper[4945]: I0109 01:15:02.488807 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-utilities\") pod \"redhat-marketplace-mwx94\" (UID: \"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f\") " pod="openshift-marketplace/redhat-marketplace-mwx94" Jan 09 01:15:02 crc kubenswrapper[4945]: I0109 01:15:02.488879 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-catalog-content\") pod \"redhat-marketplace-mwx94\" (UID: \"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f\") " pod="openshift-marketplace/redhat-marketplace-mwx94" Jan 09 01:15:02 crc kubenswrapper[4945]: I0109 01:15:02.488906 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvtm6\" (UniqueName: \"kubernetes.io/projected/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-kube-api-access-tvtm6\") pod \"redhat-marketplace-mwx94\" (UID: \"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f\") " pod="openshift-marketplace/redhat-marketplace-mwx94" Jan 09 01:15:02 crc kubenswrapper[4945]: I0109 01:15:02.489400 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-catalog-content\") pod \"redhat-marketplace-mwx94\" (UID: \"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f\") " pod="openshift-marketplace/redhat-marketplace-mwx94" Jan 09 01:15:02 crc kubenswrapper[4945]: I0109 01:15:02.489512 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-utilities\") pod \"redhat-marketplace-mwx94\" (UID: \"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f\") " pod="openshift-marketplace/redhat-marketplace-mwx94" Jan 09 01:15:02 crc kubenswrapper[4945]: I0109 01:15:02.508906 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvtm6\" (UniqueName: \"kubernetes.io/projected/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-kube-api-access-tvtm6\") pod \"redhat-marketplace-mwx94\" (UID: \"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f\") " pod="openshift-marketplace/redhat-marketplace-mwx94" Jan 09 01:15:02 crc kubenswrapper[4945]: I0109 01:15:02.579465 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwx94" Jan 09 01:15:03 crc kubenswrapper[4945]: I0109 01:15:03.092588 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwx94"] Jan 09 01:15:03 crc kubenswrapper[4945]: I0109 01:15:03.108118 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwx94" event={"ID":"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f","Type":"ContainerStarted","Data":"3bb9c242370833544b1d54e3180755bceb5570b980ece4f29286f6829f8fe8c8"} Jan 09 01:15:03 crc kubenswrapper[4945]: I0109 01:15:03.447345 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5" Jan 09 01:15:03 crc kubenswrapper[4945]: I0109 01:15:03.512450 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59174838-ea08-4d97-91d2-9193dabd276f-config-volume\") pod \"59174838-ea08-4d97-91d2-9193dabd276f\" (UID: \"59174838-ea08-4d97-91d2-9193dabd276f\") " Jan 09 01:15:03 crc kubenswrapper[4945]: I0109 01:15:03.512607 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59174838-ea08-4d97-91d2-9193dabd276f-secret-volume\") pod \"59174838-ea08-4d97-91d2-9193dabd276f\" (UID: \"59174838-ea08-4d97-91d2-9193dabd276f\") " Jan 09 01:15:03 crc kubenswrapper[4945]: I0109 01:15:03.512928 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24spp\" (UniqueName: \"kubernetes.io/projected/59174838-ea08-4d97-91d2-9193dabd276f-kube-api-access-24spp\") pod \"59174838-ea08-4d97-91d2-9193dabd276f\" (UID: \"59174838-ea08-4d97-91d2-9193dabd276f\") " Jan 09 01:15:03 crc kubenswrapper[4945]: I0109 01:15:03.513422 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59174838-ea08-4d97-91d2-9193dabd276f-config-volume" (OuterVolumeSpecName: "config-volume") pod "59174838-ea08-4d97-91d2-9193dabd276f" (UID: "59174838-ea08-4d97-91d2-9193dabd276f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:15:03 crc kubenswrapper[4945]: I0109 01:15:03.514151 4945 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59174838-ea08-4d97-91d2-9193dabd276f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 01:15:03 crc kubenswrapper[4945]: I0109 01:15:03.518743 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59174838-ea08-4d97-91d2-9193dabd276f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "59174838-ea08-4d97-91d2-9193dabd276f" (UID: "59174838-ea08-4d97-91d2-9193dabd276f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:15:03 crc kubenswrapper[4945]: I0109 01:15:03.519383 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59174838-ea08-4d97-91d2-9193dabd276f-kube-api-access-24spp" (OuterVolumeSpecName: "kube-api-access-24spp") pod "59174838-ea08-4d97-91d2-9193dabd276f" (UID: "59174838-ea08-4d97-91d2-9193dabd276f"). InnerVolumeSpecName "kube-api-access-24spp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:15:03 crc kubenswrapper[4945]: I0109 01:15:03.616358 4945 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59174838-ea08-4d97-91d2-9193dabd276f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 01:15:03 crc kubenswrapper[4945]: I0109 01:15:03.617224 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24spp\" (UniqueName: \"kubernetes.io/projected/59174838-ea08-4d97-91d2-9193dabd276f-kube-api-access-24spp\") on node \"crc\" DevicePath \"\"" Jan 09 01:15:04 crc kubenswrapper[4945]: I0109 01:15:04.121170 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5" event={"ID":"59174838-ea08-4d97-91d2-9193dabd276f","Type":"ContainerDied","Data":"e7f10938a7e6a7b2471d43e55774d410e0d3117563b384b90b1c2148c16c3f74"} Jan 09 01:15:04 crc kubenswrapper[4945]: I0109 01:15:04.121268 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7f10938a7e6a7b2471d43e55774d410e0d3117563b384b90b1c2148c16c3f74" Jan 09 01:15:04 crc kubenswrapper[4945]: I0109 01:15:04.121200 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5" Jan 09 01:15:04 crc kubenswrapper[4945]: I0109 01:15:04.123325 4945 generic.go:334] "Generic (PLEG): container finished" podID="a0cd4ba8-a18d-49f1-a0fa-41d3da46128f" containerID="30ae0970f19fa425eaa40c551aa81b7b1a8ebe4d918ef6e3f75c85b7b0a2a8cc" exitCode=0 Jan 09 01:15:04 crc kubenswrapper[4945]: I0109 01:15:04.123401 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwx94" event={"ID":"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f","Type":"ContainerDied","Data":"30ae0970f19fa425eaa40c551aa81b7b1a8ebe4d918ef6e3f75c85b7b0a2a8cc"} Jan 09 01:15:04 crc kubenswrapper[4945]: I0109 01:15:04.128958 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 01:15:04 crc kubenswrapper[4945]: I0109 01:15:04.520634 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w"] Jan 09 01:15:04 crc kubenswrapper[4945]: I0109 01:15:04.530982 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465310-68s9w"] Jan 09 01:15:05 crc kubenswrapper[4945]: I0109 01:15:05.135119 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwx94" event={"ID":"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f","Type":"ContainerStarted","Data":"953ca4d8ecf26f76d27ee4e8c93e2e790ee41a785ead291dd354413cb21cb777"} Jan 09 01:15:06 crc kubenswrapper[4945]: I0109 01:15:06.001313 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:15:06 crc kubenswrapper[4945]: E0109 01:15:06.002136 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:15:06 crc kubenswrapper[4945]: I0109 
01:15:06.017140 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d9dc44b-8369-43f6-8c5e-56e2a496ef1a" path="/var/lib/kubelet/pods/2d9dc44b-8369-43f6-8c5e-56e2a496ef1a/volumes" Jan 09 01:15:06 crc kubenswrapper[4945]: I0109 01:15:06.149986 4945 generic.go:334] "Generic (PLEG): container finished" podID="a0cd4ba8-a18d-49f1-a0fa-41d3da46128f" containerID="953ca4d8ecf26f76d27ee4e8c93e2e790ee41a785ead291dd354413cb21cb777" exitCode=0 Jan 09 01:15:06 crc kubenswrapper[4945]: I0109 01:15:06.150026 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwx94" event={"ID":"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f","Type":"ContainerDied","Data":"953ca4d8ecf26f76d27ee4e8c93e2e790ee41a785ead291dd354413cb21cb777"} Jan 09 01:15:06 crc kubenswrapper[4945]: I0109 01:15:06.968150 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tz2v2"] Jan 09 01:15:06 crc kubenswrapper[4945]: E0109 01:15:06.969084 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59174838-ea08-4d97-91d2-9193dabd276f" containerName="collect-profiles" Jan 09 01:15:06 crc kubenswrapper[4945]: I0109 01:15:06.969104 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="59174838-ea08-4d97-91d2-9193dabd276f" containerName="collect-profiles" Jan 09 01:15:06 crc kubenswrapper[4945]: I0109 01:15:06.969435 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="59174838-ea08-4d97-91d2-9193dabd276f" containerName="collect-profiles" Jan 09 01:15:06 crc kubenswrapper[4945]: I0109 01:15:06.971422 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tz2v2" Jan 09 01:15:06 crc kubenswrapper[4945]: I0109 01:15:06.980897 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tz2v2"] Jan 09 01:15:07 crc kubenswrapper[4945]: I0109 01:15:07.105265 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vtbm\" (UniqueName: \"kubernetes.io/projected/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-kube-api-access-5vtbm\") pod \"community-operators-tz2v2\" (UID: \"95dd7765-b9c2-4a42-b7c3-b407f3f267d1\") " pod="openshift-marketplace/community-operators-tz2v2" Jan 09 01:15:07 crc kubenswrapper[4945]: I0109 01:15:07.105333 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-utilities\") pod \"community-operators-tz2v2\" (UID: \"95dd7765-b9c2-4a42-b7c3-b407f3f267d1\") " pod="openshift-marketplace/community-operators-tz2v2" Jan 09 01:15:07 crc kubenswrapper[4945]: I0109 01:15:07.105551 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-catalog-content\") pod \"community-operators-tz2v2\" (UID: \"95dd7765-b9c2-4a42-b7c3-b407f3f267d1\") " pod="openshift-marketplace/community-operators-tz2v2" Jan 09 01:15:07 crc kubenswrapper[4945]: I0109 01:15:07.165399 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwx94" event={"ID":"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f","Type":"ContainerStarted","Data":"f96e36be866160418c5564dfe285dedf226b1f296992d316abad2004bb6925c1"} Jan 09 01:15:07 crc kubenswrapper[4945]: I0109 01:15:07.189033 
4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mwx94" podStartSLOduration=2.591344471 podStartE2EDuration="5.188976414s" podCreationTimestamp="2026-01-09 01:15:02 +0000 UTC" firstStartedPulling="2026-01-09 01:15:04.126878847 +0000 UTC m=+7174.438037793" lastFinishedPulling="2026-01-09 01:15:06.72451078 +0000 UTC m=+7177.035669736" observedRunningTime="2026-01-09 01:15:07.185753984 +0000 UTC m=+7177.496912940" watchObservedRunningTime="2026-01-09 01:15:07.188976414 +0000 UTC m=+7177.500135360" Jan 09 01:15:07 crc kubenswrapper[4945]: I0109 01:15:07.208185 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vtbm\" (UniqueName: \"kubernetes.io/projected/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-kube-api-access-5vtbm\") pod \"community-operators-tz2v2\" (UID: \"95dd7765-b9c2-4a42-b7c3-b407f3f267d1\") " pod="openshift-marketplace/community-operators-tz2v2" Jan 09 01:15:07 crc kubenswrapper[4945]: I0109 01:15:07.208241 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-utilities\") pod \"community-operators-tz2v2\" (UID: \"95dd7765-b9c2-4a42-b7c3-b407f3f267d1\") " pod="openshift-marketplace/community-operators-tz2v2" Jan 09 01:15:07 crc kubenswrapper[4945]: I0109 01:15:07.208278 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-catalog-content\") pod \"community-operators-tz2v2\" (UID: \"95dd7765-b9c2-4a42-b7c3-b407f3f267d1\") " pod="openshift-marketplace/community-operators-tz2v2" Jan 09 01:15:07 crc kubenswrapper[4945]: I0109 01:15:07.208724 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-catalog-content\") pod \"community-operators-tz2v2\" (UID: \"95dd7765-b9c2-4a42-b7c3-b407f3f267d1\") " pod="openshift-marketplace/community-operators-tz2v2" Jan 09 01:15:07 crc kubenswrapper[4945]: I0109 01:15:07.208910 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-utilities\") pod \"community-operators-tz2v2\" (UID: \"95dd7765-b9c2-4a42-b7c3-b407f3f267d1\") " pod="openshift-marketplace/community-operators-tz2v2" Jan 09 01:15:07 crc kubenswrapper[4945]: I0109 01:15:07.233871 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vtbm\" (UniqueName: \"kubernetes.io/projected/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-kube-api-access-5vtbm\") pod \"community-operators-tz2v2\" (UID: \"95dd7765-b9c2-4a42-b7c3-b407f3f267d1\") " pod="openshift-marketplace/community-operators-tz2v2" Jan 09 01:15:07 crc kubenswrapper[4945]: I0109 01:15:07.303518 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tz2v2" Jan 09 01:15:07 crc kubenswrapper[4945]: I0109 01:15:07.907217 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tz2v2"] Jan 09 01:15:07 crc kubenswrapper[4945]: W0109 01:15:07.916380 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95dd7765_b9c2_4a42_b7c3_b407f3f267d1.slice/crio-f781921aa132695188148cace2518eb2819b1766a8f4815d547320811f3f54a9 WatchSource:0}: Error finding container f781921aa132695188148cace2518eb2819b1766a8f4815d547320811f3f54a9: Status 404 returned error can't find the container with id f781921aa132695188148cace2518eb2819b1766a8f4815d547320811f3f54a9 Jan 09 01:15:08 crc kubenswrapper[4945]: I0109 01:15:08.175948 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz2v2" event={"ID":"95dd7765-b9c2-4a42-b7c3-b407f3f267d1","Type":"ContainerStarted","Data":"f4cc7d35f47c7dc520b281b5a69711c0f189054b6db57ed475317ce460dcf404"} Jan 09 01:15:08 crc kubenswrapper[4945]: I0109 01:15:08.176433 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz2v2" event={"ID":"95dd7765-b9c2-4a42-b7c3-b407f3f267d1","Type":"ContainerStarted","Data":"f781921aa132695188148cace2518eb2819b1766a8f4815d547320811f3f54a9"} Jan 09 01:15:09 crc kubenswrapper[4945]: I0109 01:15:09.189127 4945 generic.go:334] "Generic (PLEG): container finished" podID="95dd7765-b9c2-4a42-b7c3-b407f3f267d1" containerID="f4cc7d35f47c7dc520b281b5a69711c0f189054b6db57ed475317ce460dcf404" exitCode=0 Jan 09 01:15:09 crc kubenswrapper[4945]: I0109 01:15:09.189278 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz2v2" event={"ID":"95dd7765-b9c2-4a42-b7c3-b407f3f267d1","Type":"ContainerDied","Data":"f4cc7d35f47c7dc520b281b5a69711c0f189054b6db57ed475317ce460dcf404"} Jan 09 01:15:10 crc kubenswrapper[4945]: I0109 01:15:10.050716 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-xpp5z"] Jan 09 01:15:10 crc kubenswrapper[4945]: I0109 01:15:10.059762 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-f55d-account-create-update-8bjjl"] Jan 09 01:15:10 crc kubenswrapper[4945]: I0109 01:15:10.068111 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-xpp5z"] Jan 09 01:15:10 crc kubenswrapper[4945]: I0109 01:15:10.077005 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-f55d-account-create-update-8bjjl"] Jan 09 01:15:11 crc kubenswrapper[4945]: I0109 01:15:11.219522 4945 generic.go:334] "Generic (PLEG): container finished" podID="95dd7765-b9c2-4a42-b7c3-b407f3f267d1" containerID="bdecbdfcaa5892f4a0e0f90d74030e2e3a3ae81bb75f508a2bff2929283beabd" exitCode=0 Jan 09 01:15:11 crc kubenswrapper[4945]: I0109 01:15:11.219638 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz2v2" event={"ID":"95dd7765-b9c2-4a42-b7c3-b407f3f267d1","Type":"ContainerDied","Data":"bdecbdfcaa5892f4a0e0f90d74030e2e3a3ae81bb75f508a2bff2929283beabd"} Jan 09 01:15:12 crc kubenswrapper[4945]: I0109 01:15:12.021799 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2379e260-5c37-4fb1-9216-a8b2037dcdc4" path="/var/lib/kubelet/pods/2379e260-5c37-4fb1-9216-a8b2037dcdc4/volumes" Jan 09 01:15:12 crc kubenswrapper[4945]: I0109 
01:15:12.023968 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93b4bbab-8905-41f8-8226-da26a5d644aa" path="/var/lib/kubelet/pods/93b4bbab-8905-41f8-8226-da26a5d644aa/volumes" Jan 09 01:15:12 crc kubenswrapper[4945]: I0109 01:15:12.243767 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz2v2" event={"ID":"95dd7765-b9c2-4a42-b7c3-b407f3f267d1","Type":"ContainerStarted","Data":"14554c98bf08c40960886bdf0c90505b073a53bf67d05da803780ba5a401a27e"} Jan 09 01:15:12 crc kubenswrapper[4945]: I0109 01:15:12.271030 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tz2v2" podStartSLOduration=3.681997837 podStartE2EDuration="6.270981756s" podCreationTimestamp="2026-01-09 01:15:06 +0000 UTC" firstStartedPulling="2026-01-09 01:15:09.193698834 +0000 UTC m=+7179.504857780" lastFinishedPulling="2026-01-09 01:15:11.782682753 +0000 UTC m=+7182.093841699" observedRunningTime="2026-01-09 01:15:12.261145553 +0000 UTC m=+7182.572304529" watchObservedRunningTime="2026-01-09 01:15:12.270981756 +0000 UTC m=+7182.582140722" Jan 09 01:15:12 crc kubenswrapper[4945]: I0109 01:15:12.579917 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mwx94" Jan 09 01:15:12 crc kubenswrapper[4945]: I0109 01:15:12.579958 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mwx94" Jan 09 01:15:12 crc kubenswrapper[4945]: I0109 01:15:12.652786 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mwx94" Jan 09 01:15:13 crc kubenswrapper[4945]: I0109 01:15:13.310170 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mwx94" Jan 09 01:15:14 crc kubenswrapper[4945]: I0109 01:15:14.756231 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwx94"] Jan 09 01:15:15 crc kubenswrapper[4945]: I0109 01:15:15.274712 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mwx94" podUID="a0cd4ba8-a18d-49f1-a0fa-41d3da46128f" containerName="registry-server" containerID="cri-o://f96e36be866160418c5564dfe285dedf226b1f296992d316abad2004bb6925c1" gracePeriod=2 Jan 09 01:15:15 crc kubenswrapper[4945]: I0109 01:15:15.895090 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwx94" Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.007490 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-catalog-content\") pod \"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f\" (UID: \"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f\") " Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.007750 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvtm6\" (UniqueName: \"kubernetes.io/projected/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-kube-api-access-tvtm6\") pod \"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f\" (UID: \"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f\") " Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.007825 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-utilities\") pod \"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f\" (UID: \"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f\") " Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.009975 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-utilities" (OuterVolumeSpecName: "utilities") pod "a0cd4ba8-a18d-49f1-a0fa-41d3da46128f" (UID: "a0cd4ba8-a18d-49f1-a0fa-41d3da46128f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.013581 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-kube-api-access-tvtm6" (OuterVolumeSpecName: "kube-api-access-tvtm6") pod "a0cd4ba8-a18d-49f1-a0fa-41d3da46128f" (UID: "a0cd4ba8-a18d-49f1-a0fa-41d3da46128f"). InnerVolumeSpecName "kube-api-access-tvtm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.030830 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0cd4ba8-a18d-49f1-a0fa-41d3da46128f" (UID: "a0cd4ba8-a18d-49f1-a0fa-41d3da46128f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.110807 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.110836 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvtm6\" (UniqueName: \"kubernetes.io/projected/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-kube-api-access-tvtm6\") on node \"crc\" DevicePath \"\"" Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.110846 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.287107 4945 generic.go:334] "Generic (PLEG): container finished" podID="a0cd4ba8-a18d-49f1-a0fa-41d3da46128f" containerID="f96e36be866160418c5564dfe285dedf226b1f296992d316abad2004bb6925c1" exitCode=0 Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.287171 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwx94" event={"ID":"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f","Type":"ContainerDied","Data":"f96e36be866160418c5564dfe285dedf226b1f296992d316abad2004bb6925c1"} Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.287496 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwx94" event={"ID":"a0cd4ba8-a18d-49f1-a0fa-41d3da46128f","Type":"ContainerDied","Data":"3bb9c242370833544b1d54e3180755bceb5570b980ece4f29286f6829f8fe8c8"} Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.287605 4945 scope.go:117] "RemoveContainer" containerID="f96e36be866160418c5564dfe285dedf226b1f296992d316abad2004bb6925c1" Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.287241 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwx94" Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.308004 4945 scope.go:117] "RemoveContainer" containerID="953ca4d8ecf26f76d27ee4e8c93e2e790ee41a785ead291dd354413cb21cb777" Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.323572 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwx94"] Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.333474 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwx94"] Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.336476 4945 scope.go:117] "RemoveContainer" containerID="30ae0970f19fa425eaa40c551aa81b7b1a8ebe4d918ef6e3f75c85b7b0a2a8cc" Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.403971 4945 scope.go:117] "RemoveContainer" containerID="f96e36be866160418c5564dfe285dedf226b1f296992d316abad2004bb6925c1" Jan 09 01:15:16 crc kubenswrapper[4945]: E0109 01:15:16.404451 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f96e36be866160418c5564dfe285dedf226b1f296992d316abad2004bb6925c1\": container with ID starting with f96e36be866160418c5564dfe285dedf226b1f296992d316abad2004bb6925c1 not found: ID does not exist" containerID="f96e36be866160418c5564dfe285dedf226b1f296992d316abad2004bb6925c1" Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.404506 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f96e36be866160418c5564dfe285dedf226b1f296992d316abad2004bb6925c1"} err="failed to get container status \"f96e36be866160418c5564dfe285dedf226b1f296992d316abad2004bb6925c1\": rpc error: code = NotFound desc = could not find container \"f96e36be866160418c5564dfe285dedf226b1f296992d316abad2004bb6925c1\": container with ID starting with f96e36be866160418c5564dfe285dedf226b1f296992d316abad2004bb6925c1 not found: ID does not exist" Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.404542 4945 scope.go:117] "RemoveContainer" containerID="953ca4d8ecf26f76d27ee4e8c93e2e790ee41a785ead291dd354413cb21cb777" Jan 09 01:15:16 crc kubenswrapper[4945]: E0109 01:15:16.404976 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"953ca4d8ecf26f76d27ee4e8c93e2e790ee41a785ead291dd354413cb21cb777\": container with ID starting with 953ca4d8ecf26f76d27ee4e8c93e2e790ee41a785ead291dd354413cb21cb777 not found: ID does not exist" containerID="953ca4d8ecf26f76d27ee4e8c93e2e790ee41a785ead291dd354413cb21cb777" Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.405049 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"953ca4d8ecf26f76d27ee4e8c93e2e790ee41a785ead291dd354413cb21cb777"} err="failed to get container status \"953ca4d8ecf26f76d27ee4e8c93e2e790ee41a785ead291dd354413cb21cb777\": rpc error: code = NotFound desc = could not find container \"953ca4d8ecf26f76d27ee4e8c93e2e790ee41a785ead291dd354413cb21cb777\": container with ID starting with 953ca4d8ecf26f76d27ee4e8c93e2e790ee41a785ead291dd354413cb21cb777 not found: ID does not exist" Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.405086 4945 scope.go:117] "RemoveContainer" containerID="30ae0970f19fa425eaa40c551aa81b7b1a8ebe4d918ef6e3f75c85b7b0a2a8cc" Jan 09 01:15:16 crc kubenswrapper[4945]: E0109 01:15:16.405525 4945 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"30ae0970f19fa425eaa40c551aa81b7b1a8ebe4d918ef6e3f75c85b7b0a2a8cc\": container with ID starting with 30ae0970f19fa425eaa40c551aa81b7b1a8ebe4d918ef6e3f75c85b7b0a2a8cc not found: ID does not exist" containerID="30ae0970f19fa425eaa40c551aa81b7b1a8ebe4d918ef6e3f75c85b7b0a2a8cc" Jan 09 01:15:16 crc kubenswrapper[4945]: I0109 01:15:16.405562 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30ae0970f19fa425eaa40c551aa81b7b1a8ebe4d918ef6e3f75c85b7b0a2a8cc"} err="failed to get container status \"30ae0970f19fa425eaa40c551aa81b7b1a8ebe4d918ef6e3f75c85b7b0a2a8cc\": rpc error: code = NotFound desc = could not find container \"30ae0970f19fa425eaa40c551aa81b7b1a8ebe4d918ef6e3f75c85b7b0a2a8cc\": container with ID starting with 30ae0970f19fa425eaa40c551aa81b7b1a8ebe4d918ef6e3f75c85b7b0a2a8cc not found: ID does not exist" Jan 09 01:15:17 crc kubenswrapper[4945]: I0109 01:15:17.303882 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tz2v2" Jan 09 01:15:17 crc kubenswrapper[4945]: I0109 01:15:17.304120 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tz2v2" Jan 09 01:15:17 crc kubenswrapper[4945]: I0109 01:15:17.349922 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tz2v2" Jan 09 01:15:18 crc kubenswrapper[4945]: I0109 01:15:18.021641 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0cd4ba8-a18d-49f1-a0fa-41d3da46128f" path="/var/lib/kubelet/pods/a0cd4ba8-a18d-49f1-a0fa-41d3da46128f/volumes" Jan 09 01:15:18 crc kubenswrapper[4945]: I0109 01:15:18.381771 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tz2v2" Jan 09 01:15:19 crc kubenswrapper[4945]: I0109 01:15:19.001196 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:15:19 crc kubenswrapper[4945]: E0109 01:15:19.002119 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:15:19 crc kubenswrapper[4945]: I0109 01:15:19.159712 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tz2v2"] Jan 09 01:15:20 crc kubenswrapper[4945]: I0109 01:15:20.332487 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tz2v2" podUID="95dd7765-b9c2-4a42-b7c3-b407f3f267d1" containerName="registry-server" containerID="cri-o://14554c98bf08c40960886bdf0c90505b073a53bf67d05da803780ba5a401a27e" gracePeriod=2 Jan 09 01:15:20 crc kubenswrapper[4945]: I0109 01:15:20.822258 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tz2v2" Jan 09 01:15:20 crc kubenswrapper[4945]: I0109 01:15:20.927262 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vtbm\" (UniqueName: \"kubernetes.io/projected/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-kube-api-access-5vtbm\") pod \"95dd7765-b9c2-4a42-b7c3-b407f3f267d1\" (UID: \"95dd7765-b9c2-4a42-b7c3-b407f3f267d1\") " Jan 09 01:15:20 crc kubenswrapper[4945]: I0109 01:15:20.927379 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-catalog-content\") pod \"95dd7765-b9c2-4a42-b7c3-b407f3f267d1\" (UID: \"95dd7765-b9c2-4a42-b7c3-b407f3f267d1\") " Jan 09 01:15:20 crc kubenswrapper[4945]: I0109 01:15:20.927640 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-utilities\") pod \"95dd7765-b9c2-4a42-b7c3-b407f3f267d1\" (UID: \"95dd7765-b9c2-4a42-b7c3-b407f3f267d1\") " Jan 09 01:15:20 crc kubenswrapper[4945]: I0109 01:15:20.928511 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-utilities" (OuterVolumeSpecName: "utilities") pod "95dd7765-b9c2-4a42-b7c3-b407f3f267d1" (UID: "95dd7765-b9c2-4a42-b7c3-b407f3f267d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:15:20 crc kubenswrapper[4945]: I0109 01:15:20.934494 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-kube-api-access-5vtbm" (OuterVolumeSpecName: "kube-api-access-5vtbm") pod "95dd7765-b9c2-4a42-b7c3-b407f3f267d1" (UID: "95dd7765-b9c2-4a42-b7c3-b407f3f267d1"). InnerVolumeSpecName "kube-api-access-5vtbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.040568 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vtbm\" (UniqueName: \"kubernetes.io/projected/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-kube-api-access-5vtbm\") on node \"crc\" DevicePath \"\"" Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.040739 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.064970 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95dd7765-b9c2-4a42-b7c3-b407f3f267d1" (UID: "95dd7765-b9c2-4a42-b7c3-b407f3f267d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.142485 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95dd7765-b9c2-4a42-b7c3-b407f3f267d1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.347943 4945 generic.go:334] "Generic (PLEG): container finished" podID="95dd7765-b9c2-4a42-b7c3-b407f3f267d1" containerID="14554c98bf08c40960886bdf0c90505b073a53bf67d05da803780ba5a401a27e" exitCode=0 Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.348095 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tz2v2" Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.348155 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz2v2" event={"ID":"95dd7765-b9c2-4a42-b7c3-b407f3f267d1","Type":"ContainerDied","Data":"14554c98bf08c40960886bdf0c90505b073a53bf67d05da803780ba5a401a27e"} Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.349084 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz2v2" event={"ID":"95dd7765-b9c2-4a42-b7c3-b407f3f267d1","Type":"ContainerDied","Data":"f781921aa132695188148cace2518eb2819b1766a8f4815d547320811f3f54a9"} Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.349128 4945 scope.go:117] "RemoveContainer" containerID="14554c98bf08c40960886bdf0c90505b073a53bf67d05da803780ba5a401a27e" Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.398124 4945 scope.go:117] "RemoveContainer" containerID="bdecbdfcaa5892f4a0e0f90d74030e2e3a3ae81bb75f508a2bff2929283beabd" Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.403320 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tz2v2"] Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.415704 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tz2v2"] Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.428544 4945 scope.go:117] "RemoveContainer" containerID="f4cc7d35f47c7dc520b281b5a69711c0f189054b6db57ed475317ce460dcf404" Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.488142 4945 scope.go:117] "RemoveContainer" containerID="14554c98bf08c40960886bdf0c90505b073a53bf67d05da803780ba5a401a27e" Jan 09 01:15:21 crc kubenswrapper[4945]: E0109 01:15:21.489627 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14554c98bf08c40960886bdf0c90505b073a53bf67d05da803780ba5a401a27e\": container with ID starting with 14554c98bf08c40960886bdf0c90505b073a53bf67d05da803780ba5a401a27e not found: ID does not exist" containerID="14554c98bf08c40960886bdf0c90505b073a53bf67d05da803780ba5a401a27e" Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.489682 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14554c98bf08c40960886bdf0c90505b073a53bf67d05da803780ba5a401a27e"} err="failed to get container status \"14554c98bf08c40960886bdf0c90505b073a53bf67d05da803780ba5a401a27e\": rpc error: code = NotFound desc = could not find container \"14554c98bf08c40960886bdf0c90505b073a53bf67d05da803780ba5a401a27e\": container with ID starting with 14554c98bf08c40960886bdf0c90505b073a53bf67d05da803780ba5a401a27e not found: ID does not exist" Jan 09 
01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.489711 4945 scope.go:117] "RemoveContainer" containerID="bdecbdfcaa5892f4a0e0f90d74030e2e3a3ae81bb75f508a2bff2929283beabd" Jan 09 01:15:21 crc kubenswrapper[4945]: E0109 01:15:21.490058 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdecbdfcaa5892f4a0e0f90d74030e2e3a3ae81bb75f508a2bff2929283beabd\": container with ID starting with bdecbdfcaa5892f4a0e0f90d74030e2e3a3ae81bb75f508a2bff2929283beabd not found: ID does not exist" containerID="bdecbdfcaa5892f4a0e0f90d74030e2e3a3ae81bb75f508a2bff2929283beabd" Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.490091 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdecbdfcaa5892f4a0e0f90d74030e2e3a3ae81bb75f508a2bff2929283beabd"} err="failed to get container status \"bdecbdfcaa5892f4a0e0f90d74030e2e3a3ae81bb75f508a2bff2929283beabd\": rpc error: code = NotFound desc = could not find container \"bdecbdfcaa5892f4a0e0f90d74030e2e3a3ae81bb75f508a2bff2929283beabd\": container with ID starting with bdecbdfcaa5892f4a0e0f90d74030e2e3a3ae81bb75f508a2bff2929283beabd not found: ID does not exist" Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.490113 4945 scope.go:117] "RemoveContainer" containerID="f4cc7d35f47c7dc520b281b5a69711c0f189054b6db57ed475317ce460dcf404" Jan 09 01:15:21 crc kubenswrapper[4945]: E0109 01:15:21.491271 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4cc7d35f47c7dc520b281b5a69711c0f189054b6db57ed475317ce460dcf404\": container with ID starting with f4cc7d35f47c7dc520b281b5a69711c0f189054b6db57ed475317ce460dcf404 not found: ID does not exist" containerID="f4cc7d35f47c7dc520b281b5a69711c0f189054b6db57ed475317ce460dcf404" Jan 09 01:15:21 crc kubenswrapper[4945]: I0109 01:15:21.491340 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4cc7d35f47c7dc520b281b5a69711c0f189054b6db57ed475317ce460dcf404"} err="failed to get container status \"f4cc7d35f47c7dc520b281b5a69711c0f189054b6db57ed475317ce460dcf404\": rpc error: code = NotFound desc = could not find container \"f4cc7d35f47c7dc520b281b5a69711c0f189054b6db57ed475317ce460dcf404\": container with ID starting with f4cc7d35f47c7dc520b281b5a69711c0f189054b6db57ed475317ce460dcf404 not found: ID does not exist" Jan 09 01:15:22 crc kubenswrapper[4945]: I0109 01:15:22.037174 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95dd7765-b9c2-4a42-b7c3-b407f3f267d1" path="/var/lib/kubelet/pods/95dd7765-b9c2-4a42-b7c3-b407f3f267d1/volumes" Jan 09 01:15:22 crc kubenswrapper[4945]: I0109 01:15:22.053985 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-cp4nf"] Jan 09 01:15:22 crc kubenswrapper[4945]: I0109 01:15:22.073527 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-cp4nf"] Jan 09 01:15:24 crc kubenswrapper[4945]: I0109 01:15:24.018042 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe16e63-4938-4e6e-86d6-96b8ccfc95cb" path="/var/lib/kubelet/pods/afe16e63-4938-4e6e-86d6-96b8ccfc95cb/volumes" Jan 09 01:15:34 crc kubenswrapper[4945]: I0109 01:15:34.000566 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:15:34 crc kubenswrapper[4945]: E0109 01:15:34.001793 4945 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:15:43 crc kubenswrapper[4945]: I0109 01:15:43.541193 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-tcq8p" podUID="27da940b-5334-4559-8cf3-754a90037ef5" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 01:15:44 crc kubenswrapper[4945]: I0109 01:15:44.684958 4945 scope.go:117] "RemoveContainer" containerID="9d758a92d2249cf72cd4fa253393cd510675c16432039506ef4f6552ab30507c" Jan 09 01:15:44 crc kubenswrapper[4945]: I0109 01:15:44.731224 4945 scope.go:117] "RemoveContainer" containerID="36e7722651311a55865afe553a6d99e8b531d87ed3d2178adb9552161d49fc25" Jan 09 01:15:44 crc kubenswrapper[4945]: I0109 01:15:44.768189 4945 scope.go:117] "RemoveContainer" containerID="d8117ca988674b0caec3962135db77f3a796ead121139b92264c2af3a0e9010f" Jan 09 01:15:44 crc kubenswrapper[4945]: I0109 01:15:44.811325 4945 scope.go:117] "RemoveContainer" containerID="99f28ad374f8b7277676403e8b10e27f1bd49668e13e2c6c7d1c80f32399f1e3" Jan 09 01:15:46 crc kubenswrapper[4945]: I0109 01:15:46.062092 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-488w9"] Jan 09 01:15:46 crc kubenswrapper[4945]: I0109 01:15:46.070861 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-243c-account-create-update-2tqbr"] Jan 09 01:15:46 crc kubenswrapper[4945]: I0109 01:15:46.081889 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-243c-account-create-update-2tqbr"] Jan 09 01:15:46 crc kubenswrapper[4945]: I0109 01:15:46.089921 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-488w9"] Jan 09 01:15:48 crc kubenswrapper[4945]: I0109 01:15:48.004268 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:15:48 crc kubenswrapper[4945]: E0109 01:15:48.004815 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:15:48 crc kubenswrapper[4945]: I0109 01:15:48.024687 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e839fcf-036c-420a-b4b4-d71d554fd7e2" path="/var/lib/kubelet/pods/5e839fcf-036c-420a-b4b4-d71d554fd7e2/volumes" Jan 09 01:15:48 crc kubenswrapper[4945]: I0109 01:15:48.027150 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86b18f9a-4b09-4370-822b-f7036ce59f70" path="/var/lib/kubelet/pods/86b18f9a-4b09-4370-822b-f7036ce59f70/volumes" Jan 09 01:15:58 crc kubenswrapper[4945]: I0109 01:15:58.039679 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-grnpk"] Jan 09 01:15:58 crc kubenswrapper[4945]: I0109 01:15:58.050181 4945 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-grnpk"] Jan 09 01:16:00 crc kubenswrapper[4945]: I0109 01:16:00.021687 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7608530a-71f8-40ed-b897-a55a6a23b021" path="/var/lib/kubelet/pods/7608530a-71f8-40ed-b897-a55a6a23b021/volumes" Jan 09 01:16:01 crc kubenswrapper[4945]: I0109 01:16:01.000473 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:16:01 crc kubenswrapper[4945]: E0109 01:16:01.000909 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:16:16 crc kubenswrapper[4945]: I0109 01:16:16.000957 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:16:16 crc kubenswrapper[4945]: E0109 01:16:16.002160 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:16:31 crc kubenswrapper[4945]: I0109 01:16:31.000549 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:16:31 crc kubenswrapper[4945]: E0109 01:16:31.001773 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:16:44 crc kubenswrapper[4945]: I0109 01:16:44.002278 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:16:44 crc kubenswrapper[4945]: E0109 01:16:44.004478 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:16:45 crc kubenswrapper[4945]: I0109 01:16:45.013285 4945 scope.go:117] "RemoveContainer" containerID="71080c2dbc035bf08e9b21e66dd23f8e9df061e8848ccfc8328e096a1567d14d" Jan 09 01:16:45 crc kubenswrapper[4945]: I0109 01:16:45.045392 4945 scope.go:117] "RemoveContainer" containerID="d07a8c05d394b3d653e5b0e7c466605d842cf70cfef827248a86851d7e365bdc" Jan 09 01:16:45 crc kubenswrapper[4945]: I0109 01:16:45.092636 4945 scope.go:117] "RemoveContainer" containerID="9737a6da3c2441ef90696522bd6b18d46a553b34640ee9ef51f8dde20813d3c6" Jan 09 
01:16:57 crc kubenswrapper[4945]: I0109 01:16:57.000732 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:16:57 crc kubenswrapper[4945]: E0109 01:16:57.001570 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:17:09 crc kubenswrapper[4945]: I0109 01:17:09.001091 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:17:09 crc kubenswrapper[4945]: E0109 01:17:09.001926 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:17:24 crc kubenswrapper[4945]: I0109 01:17:24.007374 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:17:24 crc kubenswrapper[4945]: E0109 01:17:24.008734 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:17:38 crc kubenswrapper[4945]: I0109 01:17:38.018231 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:17:38 crc kubenswrapper[4945]: E0109 01:17:38.019559 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:17:53 crc kubenswrapper[4945]: I0109 01:17:53.001201 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:17:53 crc kubenswrapper[4945]: E0109 01:17:53.002528 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:18:08 crc kubenswrapper[4945]: I0109 01:18:08.003771 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:18:08 crc 
kubenswrapper[4945]: E0109 01:18:08.005179 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:18:19 crc kubenswrapper[4945]: I0109 01:18:19.001246 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:18:19 crc kubenswrapper[4945]: E0109 01:18:19.002423 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:18:32 crc kubenswrapper[4945]: I0109 01:18:32.000249 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:18:32 crc kubenswrapper[4945]: E0109 01:18:32.001146 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:18:43 crc kubenswrapper[4945]: I0109 01:18:43.001988 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:18:43 crc kubenswrapper[4945]: E0109 01:18:43.003334 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:18:57 crc kubenswrapper[4945]: I0109 01:18:57.001445 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:18:57 crc kubenswrapper[4945]: E0109 01:18:57.002269 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:18:59 crc kubenswrapper[4945]: I0109 01:18:59.798984 4945 generic.go:334] "Generic (PLEG): container finished" podID="fe7e1ae9-abb7-4f68-834a-5e4245dd2374" containerID="46c0c28b859bf4951afacd4f5a5e7d9b08107dc8c13da1868ae792a37b5a2d1e" exitCode=0 Jan 09 01:18:59 crc kubenswrapper[4945]: I0109 01:18:59.799071 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx" event={"ID":"fe7e1ae9-abb7-4f68-834a-5e4245dd2374","Type":"ContainerDied","Data":"46c0c28b859bf4951afacd4f5a5e7d9b08107dc8c13da1868ae792a37b5a2d1e"} Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.319116 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx" Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.394530 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-inventory\") pod \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.394762 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blwjw\" (UniqueName: \"kubernetes.io/projected/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-kube-api-access-blwjw\") pod \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.394869 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-ceph\") pod \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.394926 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-tripleo-cleanup-combined-ca-bundle\") pod \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.394983 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-ssh-key-openstack-cell1\") pod \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\" (UID: \"fe7e1ae9-abb7-4f68-834a-5e4245dd2374\") " Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.401588 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "fe7e1ae9-abb7-4f68-834a-5e4245dd2374" (UID: "fe7e1ae9-abb7-4f68-834a-5e4245dd2374"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.401623 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-ceph" (OuterVolumeSpecName: "ceph") pod "fe7e1ae9-abb7-4f68-834a-5e4245dd2374" (UID: "fe7e1ae9-abb7-4f68-834a-5e4245dd2374"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.406168 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-kube-api-access-blwjw" (OuterVolumeSpecName: "kube-api-access-blwjw") pod "fe7e1ae9-abb7-4f68-834a-5e4245dd2374" (UID: "fe7e1ae9-abb7-4f68-834a-5e4245dd2374"). InnerVolumeSpecName "kube-api-access-blwjw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.435731 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-inventory" (OuterVolumeSpecName: "inventory") pod "fe7e1ae9-abb7-4f68-834a-5e4245dd2374" (UID: "fe7e1ae9-abb7-4f68-834a-5e4245dd2374"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.436395 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "fe7e1ae9-abb7-4f68-834a-5e4245dd2374" (UID: "fe7e1ae9-abb7-4f68-834a-5e4245dd2374"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.497887 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.497932 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blwjw\" (UniqueName: \"kubernetes.io/projected/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-kube-api-access-blwjw\") on node \"crc\" DevicePath \"\"" Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.497943 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.497953 4945 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.497965 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/fe7e1ae9-abb7-4f68-834a-5e4245dd2374-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.823766 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx" event={"ID":"fe7e1ae9-abb7-4f68-834a-5e4245dd2374","Type":"ContainerDied","Data":"077493723da4b6fb209eaaed4106f53ae8eced2fb015820fa19e68d0efb225bc"} Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.823809 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="077493723da4b6fb209eaaed4106f53ae8eced2fb015820fa19e68d0efb225bc" Jan 09 01:19:01 crc kubenswrapper[4945]: I0109 01:19:01.823815 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx" Jan 09 01:19:10 crc kubenswrapper[4945]: I0109 01:19:10.013627 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:19:10 crc kubenswrapper[4945]: E0109 01:19:10.014643 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.188377 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-qbvtp"] Jan 09 01:19:12 crc kubenswrapper[4945]: E0109 01:19:12.189524 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95dd7765-b9c2-4a42-b7c3-b407f3f267d1" containerName="registry-server" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.189542 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="95dd7765-b9c2-4a42-b7c3-b407f3f267d1" containerName="registry-server" Jan 09 01:19:12 crc kubenswrapper[4945]: E0109 01:19:12.189573 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0cd4ba8-a18d-49f1-a0fa-41d3da46128f" containerName="extract-utilities" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.189584 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0cd4ba8-a18d-49f1-a0fa-41d3da46128f" containerName="extract-utilities" Jan 09 01:19:12 crc kubenswrapper[4945]: E0109 01:19:12.189607 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0cd4ba8-a18d-49f1-a0fa-41d3da46128f" containerName="extract-content" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.189616 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0cd4ba8-a18d-49f1-a0fa-41d3da46128f" containerName="extract-content" Jan 09 01:19:12 crc kubenswrapper[4945]: E0109 01:19:12.189645 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0cd4ba8-a18d-49f1-a0fa-41d3da46128f" containerName="registry-server" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.189655 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0cd4ba8-a18d-49f1-a0fa-41d3da46128f" containerName="registry-server" Jan 09 01:19:12 crc kubenswrapper[4945]: E0109 01:19:12.189668 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95dd7765-b9c2-4a42-b7c3-b407f3f267d1" containerName="extract-content" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.189678 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="95dd7765-b9c2-4a42-b7c3-b407f3f267d1" containerName="extract-content" Jan 09 01:19:12 crc kubenswrapper[4945]: E0109 01:19:12.189704 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe7e1ae9-abb7-4f68-834a-5e4245dd2374" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.189716 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe7e1ae9-abb7-4f68-834a-5e4245dd2374" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 09 01:19:12 crc kubenswrapper[4945]: E0109 01:19:12.189744 4945 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="95dd7765-b9c2-4a42-b7c3-b407f3f267d1" containerName="extract-utilities" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.189754 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="95dd7765-b9c2-4a42-b7c3-b407f3f267d1" containerName="extract-utilities" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.190091 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="95dd7765-b9c2-4a42-b7c3-b407f3f267d1" containerName="registry-server" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.190126 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0cd4ba8-a18d-49f1-a0fa-41d3da46128f" containerName="registry-server" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.190162 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe7e1ae9-abb7-4f68-834a-5e4245dd2374" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.191267 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.205432 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-qbvtp"] Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.223952 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.224396 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.224581 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.224741 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.308976 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwjmh\" (UniqueName: \"kubernetes.io/projected/1cf89d43-4ef8-44ce-9196-bed933905f35-kube-api-access-qwjmh\") pod \"bootstrap-openstack-openstack-cell1-qbvtp\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.309392 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-qbvtp\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.309487 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-inventory\") pod \"bootstrap-openstack-openstack-cell1-qbvtp\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.309572 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-ceph\") pod \"bootstrap-openstack-openstack-cell1-qbvtp\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.309817 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-qbvtp\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.411728 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwjmh\" (UniqueName: \"kubernetes.io/projected/1cf89d43-4ef8-44ce-9196-bed933905f35-kube-api-access-qwjmh\") pod \"bootstrap-openstack-openstack-cell1-qbvtp\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.411915 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-qbvtp\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.411966 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-inventory\") pod \"bootstrap-openstack-openstack-cell1-qbvtp\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.412038 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-ceph\") pod \"bootstrap-openstack-openstack-cell1-qbvtp\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.412171 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-qbvtp\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.419932 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-inventory\") pod \"bootstrap-openstack-openstack-cell1-qbvtp\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.419950 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-ssh-key-openstack-cell1\") pod \"bootstrap-openstack-openstack-cell1-qbvtp\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " 
pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.420537 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-ceph\") pod \"bootstrap-openstack-openstack-cell1-qbvtp\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.425475 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-qbvtp\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.440754 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwjmh\" (UniqueName: \"kubernetes.io/projected/1cf89d43-4ef8-44ce-9196-bed933905f35-kube-api-access-qwjmh\") pod \"bootstrap-openstack-openstack-cell1-qbvtp\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:19:12 crc kubenswrapper[4945]: I0109 01:19:12.555356 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:19:13 crc kubenswrapper[4945]: I0109 01:19:13.180315 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-qbvtp"] Jan 09 01:19:13 crc kubenswrapper[4945]: W0109 01:19:13.183046 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cf89d43_4ef8_44ce_9196_bed933905f35.slice/crio-b72dfb86fd23f8bd9762dfb52150230958afea7b8a7eb98a3cabae89fda62ad7 WatchSource:0}: Error finding container b72dfb86fd23f8bd9762dfb52150230958afea7b8a7eb98a3cabae89fda62ad7: Status 404 returned error can't find the container with id b72dfb86fd23f8bd9762dfb52150230958afea7b8a7eb98a3cabae89fda62ad7 Jan 09 01:19:13 crc kubenswrapper[4945]: I0109 01:19:13.964735 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" event={"ID":"1cf89d43-4ef8-44ce-9196-bed933905f35","Type":"ContainerStarted","Data":"5e29900d134fe12913508c4ea0727e33d030bb88b3794556f96b1f9318f227ee"} Jan 09 01:19:13 crc kubenswrapper[4945]: I0109 01:19:13.965373 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" event={"ID":"1cf89d43-4ef8-44ce-9196-bed933905f35","Type":"ContainerStarted","Data":"b72dfb86fd23f8bd9762dfb52150230958afea7b8a7eb98a3cabae89fda62ad7"} Jan 09 01:19:13 crc kubenswrapper[4945]: I0109 01:19:13.986350 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" podStartSLOduration=1.658984023 podStartE2EDuration="1.986327654s" podCreationTimestamp="2026-01-09 01:19:12 +0000 UTC" firstStartedPulling="2026-01-09 01:19:13.186310766 +0000 UTC m=+7423.497469732" lastFinishedPulling="2026-01-09 01:19:13.513654417 +0000 UTC m=+7423.824813363" observedRunningTime="2026-01-09 01:19:13.977744432 +0000 UTC m=+7424.288903398" watchObservedRunningTime="2026-01-09 01:19:13.986327654 +0000 UTC m=+7424.297486600" Jan 09 01:19:23 crc kubenswrapper[4945]: I0109 
01:19:23.000326 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670" Jan 09 01:19:24 crc kubenswrapper[4945]: I0109 01:19:24.090011 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"fcd3e0c4a5eaf338ded1a32a4cba747f0799b31c8563ade8fd3b0809adf0d742"} Jan 09 01:21:18 crc kubenswrapper[4945]: I0109 01:21:18.243259 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fbwgk"] Jan 09 01:21:18 crc kubenswrapper[4945]: I0109 01:21:18.249130 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbwgk" Jan 09 01:21:18 crc kubenswrapper[4945]: I0109 01:21:18.258364 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbwgk"] Jan 09 01:21:18 crc kubenswrapper[4945]: I0109 01:21:18.414054 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-catalog-content\") pod \"redhat-operators-fbwgk\" (UID: \"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab\") " pod="openshift-marketplace/redhat-operators-fbwgk" Jan 09 01:21:18 crc kubenswrapper[4945]: I0109 01:21:18.414097 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v7m4\" (UniqueName: \"kubernetes.io/projected/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-kube-api-access-5v7m4\") pod \"redhat-operators-fbwgk\" (UID: \"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab\") " pod="openshift-marketplace/redhat-operators-fbwgk" Jan 09 01:21:18 crc kubenswrapper[4945]: I0109 01:21:18.414119 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-utilities\") pod \"redhat-operators-fbwgk\" (UID: \"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab\") " pod="openshift-marketplace/redhat-operators-fbwgk" Jan 09 01:21:18 crc kubenswrapper[4945]: I0109 01:21:18.515623 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-catalog-content\") pod \"redhat-operators-fbwgk\" (UID: \"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab\") " pod="openshift-marketplace/redhat-operators-fbwgk" Jan 09 01:21:18 crc kubenswrapper[4945]: I0109 01:21:18.515668 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v7m4\" (UniqueName: \"kubernetes.io/projected/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-kube-api-access-5v7m4\") pod \"redhat-operators-fbwgk\" (UID: \"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab\") " pod="openshift-marketplace/redhat-operators-fbwgk" Jan 09 01:21:18 crc kubenswrapper[4945]: I0109 01:21:18.515692 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-utilities\") pod \"redhat-operators-fbwgk\" (UID: \"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab\") " pod="openshift-marketplace/redhat-operators-fbwgk" Jan 09 01:21:18 crc kubenswrapper[4945]: I0109 01:21:18.516196 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-catalog-content\") pod \"redhat-operators-fbwgk\" (UID: \"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab\") " pod="openshift-marketplace/redhat-operators-fbwgk" Jan 09 01:21:18 crc kubenswrapper[4945]: I0109 01:21:18.516201 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-utilities\") pod \"redhat-operators-fbwgk\" (UID: \"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab\") " pod="openshift-marketplace/redhat-operators-fbwgk" Jan 09 01:21:18 crc kubenswrapper[4945]: I0109 01:21:18.540092 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v7m4\" (UniqueName: \"kubernetes.io/projected/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-kube-api-access-5v7m4\") pod \"redhat-operators-fbwgk\" (UID: \"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab\") " pod="openshift-marketplace/redhat-operators-fbwgk" Jan 09 01:21:18 crc kubenswrapper[4945]: I0109 01:21:18.590859 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbwgk" Jan 09 01:21:19 crc kubenswrapper[4945]: I0109 01:21:19.118202 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fbwgk"] Jan 09 01:21:19 crc kubenswrapper[4945]: I0109 01:21:19.322188 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbwgk" event={"ID":"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab","Type":"ContainerStarted","Data":"4d97aaff5ddc92af8adba94e4ea7c5affe4edbbffb4d46583582133eea343715"} Jan 09 01:21:20 crc kubenswrapper[4945]: I0109 01:21:20.333020 4945 generic.go:334] "Generic (PLEG): container finished" podID="9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab" containerID="567d434e65aa85a287ca66fdead9a981b7b57cd45ab1fce4855c3557ef54b721" exitCode=0 Jan 09 01:21:20 crc kubenswrapper[4945]: I0109 01:21:20.333229 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbwgk" event={"ID":"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab","Type":"ContainerDied","Data":"567d434e65aa85a287ca66fdead9a981b7b57cd45ab1fce4855c3557ef54b721"} Jan 09 01:21:20 crc kubenswrapper[4945]: I0109 01:21:20.335701 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 01:21:21 crc kubenswrapper[4945]: I0109 01:21:21.346195 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbwgk" event={"ID":"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab","Type":"ContainerStarted","Data":"fd05c77125f8b41163a88fd281231ef57476c1c76dcf02e7c54e924c63bebcc5"} Jan 09 01:21:24 crc kubenswrapper[4945]: I0109 01:21:24.385556 4945 generic.go:334] "Generic (PLEG): container finished" podID="9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab" containerID="fd05c77125f8b41163a88fd281231ef57476c1c76dcf02e7c54e924c63bebcc5" exitCode=0 Jan 09 01:21:24 crc kubenswrapper[4945]: I0109 01:21:24.386110 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbwgk" event={"ID":"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab","Type":"ContainerDied","Data":"fd05c77125f8b41163a88fd281231ef57476c1c76dcf02e7c54e924c63bebcc5"} Jan 09 01:21:25 crc kubenswrapper[4945]: I0109 01:21:25.398613 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbwgk" 
event={"ID":"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab","Type":"ContainerStarted","Data":"9ad93cf7659ad606f83edd33abc29ea11ffc9a803cebac68db6be28cebde1f04"} Jan 09 01:21:25 crc kubenswrapper[4945]: I0109 01:21:25.428250 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fbwgk" podStartSLOduration=2.905334506 podStartE2EDuration="7.428226072s" podCreationTimestamp="2026-01-09 01:21:18 +0000 UTC" firstStartedPulling="2026-01-09 01:21:20.335459043 +0000 UTC m=+7550.646617989" lastFinishedPulling="2026-01-09 01:21:24.858350609 +0000 UTC m=+7555.169509555" observedRunningTime="2026-01-09 01:21:25.423438593 +0000 UTC m=+7555.734597549" watchObservedRunningTime="2026-01-09 01:21:25.428226072 +0000 UTC m=+7555.739385028" Jan 09 01:21:28 crc kubenswrapper[4945]: I0109 01:21:28.591474 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fbwgk" Jan 09 01:21:28 crc kubenswrapper[4945]: I0109 01:21:28.592060 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fbwgk" Jan 09 01:21:29 crc kubenswrapper[4945]: I0109 01:21:29.650640 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fbwgk" podUID="9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab" containerName="registry-server" probeResult="failure" output=< Jan 09 01:21:29 crc kubenswrapper[4945]: timeout: failed to connect service ":50051" within 1s Jan 09 01:21:29 crc kubenswrapper[4945]: > Jan 09 01:21:38 crc kubenswrapper[4945]: I0109 01:21:38.679278 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fbwgk" Jan 09 01:21:38 crc kubenswrapper[4945]: I0109 01:21:38.761600 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fbwgk" Jan 09 01:21:38 crc kubenswrapper[4945]: I0109 01:21:38.947575 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fbwgk"] Jan 09 01:21:40 crc kubenswrapper[4945]: I0109 01:21:40.552536 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fbwgk" podUID="9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab" containerName="registry-server" containerID="cri-o://9ad93cf7659ad606f83edd33abc29ea11ffc9a803cebac68db6be28cebde1f04" gracePeriod=2 Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.156075 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fbwgk" Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.289813 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v7m4\" (UniqueName: \"kubernetes.io/projected/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-kube-api-access-5v7m4\") pod \"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab\" (UID: \"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab\") " Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.290097 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-catalog-content\") pod \"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab\" (UID: \"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab\") " Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.290215 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-utilities\") pod \"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab\" (UID: \"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab\") " Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.291644 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-utilities" (OuterVolumeSpecName: "utilities") pod "9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab" (UID: "9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.297349 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-kube-api-access-5v7m4" (OuterVolumeSpecName: "kube-api-access-5v7m4") pod "9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab" (UID: "9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab"). InnerVolumeSpecName "kube-api-access-5v7m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.392258 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.392295 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v7m4\" (UniqueName: \"kubernetes.io/projected/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-kube-api-access-5v7m4\") on node \"crc\" DevicePath \"\"" Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.420474 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab" (UID: "9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.494253 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.563370 4945 generic.go:334] "Generic (PLEG): container finished" podID="9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab" containerID="9ad93cf7659ad606f83edd33abc29ea11ffc9a803cebac68db6be28cebde1f04" exitCode=0 Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.563424 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbwgk" event={"ID":"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab","Type":"ContainerDied","Data":"9ad93cf7659ad606f83edd33abc29ea11ffc9a803cebac68db6be28cebde1f04"} Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.563451 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fbwgk" Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.563473 4945 scope.go:117] "RemoveContainer" containerID="9ad93cf7659ad606f83edd33abc29ea11ffc9a803cebac68db6be28cebde1f04" Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.563458 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fbwgk" event={"ID":"9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab","Type":"ContainerDied","Data":"4d97aaff5ddc92af8adba94e4ea7c5affe4edbbffb4d46583582133eea343715"} Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.589447 4945 scope.go:117] "RemoveContainer" containerID="fd05c77125f8b41163a88fd281231ef57476c1c76dcf02e7c54e924c63bebcc5" Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.616207 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fbwgk"] Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.627284 4945 scope.go:117] "RemoveContainer" containerID="567d434e65aa85a287ca66fdead9a981b7b57cd45ab1fce4855c3557ef54b721" Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.631561 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fbwgk"] Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.673988 4945 scope.go:117] "RemoveContainer" containerID="9ad93cf7659ad606f83edd33abc29ea11ffc9a803cebac68db6be28cebde1f04" Jan 09 01:21:41 crc kubenswrapper[4945]: E0109 01:21:41.674666 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ad93cf7659ad606f83edd33abc29ea11ffc9a803cebac68db6be28cebde1f04\": container with ID starting with 9ad93cf7659ad606f83edd33abc29ea11ffc9a803cebac68db6be28cebde1f04 not found: ID does not exist" containerID="9ad93cf7659ad606f83edd33abc29ea11ffc9a803cebac68db6be28cebde1f04" Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.674722 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ad93cf7659ad606f83edd33abc29ea11ffc9a803cebac68db6be28cebde1f04"} err="failed to get container status \"9ad93cf7659ad606f83edd33abc29ea11ffc9a803cebac68db6be28cebde1f04\": rpc error: code = NotFound desc = could not find container \"9ad93cf7659ad606f83edd33abc29ea11ffc9a803cebac68db6be28cebde1f04\": container with ID starting with 9ad93cf7659ad606f83edd33abc29ea11ffc9a803cebac68db6be28cebde1f04 not found: ID does not exist" Jan 09 01:21:41 crc 
kubenswrapper[4945]: I0109 01:21:41.674762 4945 scope.go:117] "RemoveContainer" containerID="fd05c77125f8b41163a88fd281231ef57476c1c76dcf02e7c54e924c63bebcc5" Jan 09 01:21:41 crc kubenswrapper[4945]: E0109 01:21:41.675353 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd05c77125f8b41163a88fd281231ef57476c1c76dcf02e7c54e924c63bebcc5\": container with ID starting with fd05c77125f8b41163a88fd281231ef57476c1c76dcf02e7c54e924c63bebcc5 not found: ID does not exist" containerID="fd05c77125f8b41163a88fd281231ef57476c1c76dcf02e7c54e924c63bebcc5" Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.675460 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd05c77125f8b41163a88fd281231ef57476c1c76dcf02e7c54e924c63bebcc5"} err="failed to get container status \"fd05c77125f8b41163a88fd281231ef57476c1c76dcf02e7c54e924c63bebcc5\": rpc error: code = NotFound desc = could not find container \"fd05c77125f8b41163a88fd281231ef57476c1c76dcf02e7c54e924c63bebcc5\": container with ID starting with fd05c77125f8b41163a88fd281231ef57476c1c76dcf02e7c54e924c63bebcc5 not found: ID does not exist" Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.675540 4945 scope.go:117] "RemoveContainer" containerID="567d434e65aa85a287ca66fdead9a981b7b57cd45ab1fce4855c3557ef54b721" Jan 09 01:21:41 crc kubenswrapper[4945]: E0109 01:21:41.676311 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"567d434e65aa85a287ca66fdead9a981b7b57cd45ab1fce4855c3557ef54b721\": container with ID starting with 567d434e65aa85a287ca66fdead9a981b7b57cd45ab1fce4855c3557ef54b721 not found: ID does not exist" containerID="567d434e65aa85a287ca66fdead9a981b7b57cd45ab1fce4855c3557ef54b721" Jan 09 01:21:41 crc kubenswrapper[4945]: I0109 01:21:41.676373 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567d434e65aa85a287ca66fdead9a981b7b57cd45ab1fce4855c3557ef54b721"} err="failed to get container status \"567d434e65aa85a287ca66fdead9a981b7b57cd45ab1fce4855c3557ef54b721\": rpc error: code = NotFound desc = could not find container \"567d434e65aa85a287ca66fdead9a981b7b57cd45ab1fce4855c3557ef54b721\": container with ID starting with 567d434e65aa85a287ca66fdead9a981b7b57cd45ab1fce4855c3557ef54b721 not found: ID does not exist" Jan 09 01:21:42 crc kubenswrapper[4945]: I0109 01:21:42.014135 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab" path="/var/lib/kubelet/pods/9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab/volumes" Jan 09 01:21:43 crc kubenswrapper[4945]: I0109 01:21:43.578298 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:21:43 crc kubenswrapper[4945]: I0109 01:21:43.578827 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:22:13 crc kubenswrapper[4945]: I0109 01:22:13.577949 4945 patch_prober.go:28] interesting 
pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:22:13 crc kubenswrapper[4945]: I0109 01:22:13.579420 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:22:24 crc kubenswrapper[4945]: I0109 01:22:24.070276 4945 generic.go:334] "Generic (PLEG): container finished" podID="1cf89d43-4ef8-44ce-9196-bed933905f35" containerID="5e29900d134fe12913508c4ea0727e33d030bb88b3794556f96b1f9318f227ee" exitCode=0 Jan 09 01:22:24 crc kubenswrapper[4945]: I0109 01:22:24.070819 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" event={"ID":"1cf89d43-4ef8-44ce-9196-bed933905f35","Type":"ContainerDied","Data":"5e29900d134fe12913508c4ea0727e33d030bb88b3794556f96b1f9318f227ee"} Jan 09 01:22:25 crc kubenswrapper[4945]: I0109 01:22:25.628911 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:22:25 crc kubenswrapper[4945]: I0109 01:22:25.735070 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-bootstrap-combined-ca-bundle\") pod \"1cf89d43-4ef8-44ce-9196-bed933905f35\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " Jan 09 01:22:25 crc kubenswrapper[4945]: I0109 01:22:25.735126 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-ceph\") pod \"1cf89d43-4ef8-44ce-9196-bed933905f35\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " Jan 09 01:22:25 crc kubenswrapper[4945]: I0109 01:22:25.735578 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-ssh-key-openstack-cell1\") pod \"1cf89d43-4ef8-44ce-9196-bed933905f35\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " Jan 09 01:22:25 crc kubenswrapper[4945]: I0109 01:22:25.735691 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwjmh\" (UniqueName: \"kubernetes.io/projected/1cf89d43-4ef8-44ce-9196-bed933905f35-kube-api-access-qwjmh\") pod \"1cf89d43-4ef8-44ce-9196-bed933905f35\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " Jan 09 01:22:25 crc kubenswrapper[4945]: I0109 01:22:25.735852 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-inventory\") pod \"1cf89d43-4ef8-44ce-9196-bed933905f35\" (UID: \"1cf89d43-4ef8-44ce-9196-bed933905f35\") " Jan 09 01:22:25 crc kubenswrapper[4945]: I0109 01:22:25.745221 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cf89d43-4ef8-44ce-9196-bed933905f35-kube-api-access-qwjmh" (OuterVolumeSpecName: "kube-api-access-qwjmh") pod "1cf89d43-4ef8-44ce-9196-bed933905f35" (UID: 
"1cf89d43-4ef8-44ce-9196-bed933905f35"). InnerVolumeSpecName "kube-api-access-qwjmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:22:25 crc kubenswrapper[4945]: I0109 01:22:25.745222 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1cf89d43-4ef8-44ce-9196-bed933905f35" (UID: "1cf89d43-4ef8-44ce-9196-bed933905f35"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:22:25 crc kubenswrapper[4945]: I0109 01:22:25.757760 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-ceph" (OuterVolumeSpecName: "ceph") pod "1cf89d43-4ef8-44ce-9196-bed933905f35" (UID: "1cf89d43-4ef8-44ce-9196-bed933905f35"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:22:25 crc kubenswrapper[4945]: I0109 01:22:25.764335 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-inventory" (OuterVolumeSpecName: "inventory") pod "1cf89d43-4ef8-44ce-9196-bed933905f35" (UID: "1cf89d43-4ef8-44ce-9196-bed933905f35"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:22:25 crc kubenswrapper[4945]: I0109 01:22:25.770565 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "1cf89d43-4ef8-44ce-9196-bed933905f35" (UID: "1cf89d43-4ef8-44ce-9196-bed933905f35"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:22:25 crc kubenswrapper[4945]: I0109 01:22:25.838289 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 09 01:22:25 crc kubenswrapper[4945]: I0109 01:22:25.838803 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwjmh\" (UniqueName: \"kubernetes.io/projected/1cf89d43-4ef8-44ce-9196-bed933905f35-kube-api-access-qwjmh\") on node \"crc\" DevicePath \"\"" Jan 09 01:22:25 crc kubenswrapper[4945]: I0109 01:22:25.838887 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 01:22:25 crc kubenswrapper[4945]: I0109 01:22:25.838949 4945 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:22:25 crc kubenswrapper[4945]: I0109 01:22:25.839065 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1cf89d43-4ef8-44ce-9196-bed933905f35-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.098549 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" event={"ID":"1cf89d43-4ef8-44ce-9196-bed933905f35","Type":"ContainerDied","Data":"b72dfb86fd23f8bd9762dfb52150230958afea7b8a7eb98a3cabae89fda62ad7"} Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.098641 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b72dfb86fd23f8bd9762dfb52150230958afea7b8a7eb98a3cabae89fda62ad7" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.098659 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-qbvtp" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.193798 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-dvrl7"] Jan 09 01:22:26 crc kubenswrapper[4945]: E0109 01:22:26.194321 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab" containerName="extract-utilities" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.194348 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab" containerName="extract-utilities" Jan 09 01:22:26 crc kubenswrapper[4945]: E0109 01:22:26.194369 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf89d43-4ef8-44ce-9196-bed933905f35" containerName="bootstrap-openstack-openstack-cell1" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.194379 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf89d43-4ef8-44ce-9196-bed933905f35" containerName="bootstrap-openstack-openstack-cell1" Jan 09 01:22:26 crc kubenswrapper[4945]: E0109 01:22:26.194424 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab" containerName="extract-content" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.194434 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab" containerName="extract-content" Jan 09 01:22:26 crc kubenswrapper[4945]: E0109 01:22:26.194454 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab" containerName="registry-server" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.194462 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab" containerName="registry-server" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.194753 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ff4f48b-2999-4b6b-a814-5d3ce9cb7aab" containerName="registry-server" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.194791 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cf89d43-4ef8-44ce-9196-bed933905f35" containerName="bootstrap-openstack-openstack-cell1" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.195872 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-dvrl7" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.199389 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.199637 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.199883 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.201032 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.205695 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-dvrl7"] Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.355034 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-inventory\") pod \"download-cache-openstack-openstack-cell1-dvrl7\" (UID: \"947b21b1-6853-466c-8115-a0680381f340\") " pod="openstack/download-cache-openstack-openstack-cell1-dvrl7" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.355267 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-ceph\") pod \"download-cache-openstack-openstack-cell1-dvrl7\" (UID: \"947b21b1-6853-466c-8115-a0680381f340\") " pod="openstack/download-cache-openstack-openstack-cell1-dvrl7" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.355399 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4thgv\" (UniqueName: \"kubernetes.io/projected/947b21b1-6853-466c-8115-a0680381f340-kube-api-access-4thgv\") pod \"download-cache-openstack-openstack-cell1-dvrl7\" (UID: \"947b21b1-6853-466c-8115-a0680381f340\") " pod="openstack/download-cache-openstack-openstack-cell1-dvrl7" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.355554 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-dvrl7\" (UID: \"947b21b1-6853-466c-8115-a0680381f340\") " pod="openstack/download-cache-openstack-openstack-cell1-dvrl7" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.457960 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-inventory\") pod \"download-cache-openstack-openstack-cell1-dvrl7\" (UID: \"947b21b1-6853-466c-8115-a0680381f340\") " pod="openstack/download-cache-openstack-openstack-cell1-dvrl7" Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.458086 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-ceph\") pod \"download-cache-openstack-openstack-cell1-dvrl7\" (UID: \"947b21b1-6853-466c-8115-a0680381f340\") " pod="openstack/download-cache-openstack-openstack-cell1-dvrl7" Jan 
09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.458133 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4thgv\" (UniqueName: \"kubernetes.io/projected/947b21b1-6853-466c-8115-a0680381f340-kube-api-access-4thgv\") pod \"download-cache-openstack-openstack-cell1-dvrl7\" (UID: \"947b21b1-6853-466c-8115-a0680381f340\") " pod="openstack/download-cache-openstack-openstack-cell1-dvrl7"
Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.458180 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-dvrl7\" (UID: \"947b21b1-6853-466c-8115-a0680381f340\") " pod="openstack/download-cache-openstack-openstack-cell1-dvrl7"
Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.462749 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-ceph\") pod \"download-cache-openstack-openstack-cell1-dvrl7\" (UID: \"947b21b1-6853-466c-8115-a0680381f340\") " pod="openstack/download-cache-openstack-openstack-cell1-dvrl7"
Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.466628 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-ssh-key-openstack-cell1\") pod \"download-cache-openstack-openstack-cell1-dvrl7\" (UID: \"947b21b1-6853-466c-8115-a0680381f340\") " pod="openstack/download-cache-openstack-openstack-cell1-dvrl7"
Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.470292 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-inventory\") pod \"download-cache-openstack-openstack-cell1-dvrl7\" (UID: \"947b21b1-6853-466c-8115-a0680381f340\") " pod="openstack/download-cache-openstack-openstack-cell1-dvrl7"
Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.479411 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4thgv\" (UniqueName: \"kubernetes.io/projected/947b21b1-6853-466c-8115-a0680381f340-kube-api-access-4thgv\") pod \"download-cache-openstack-openstack-cell1-dvrl7\" (UID: \"947b21b1-6853-466c-8115-a0680381f340\") " pod="openstack/download-cache-openstack-openstack-cell1-dvrl7"
Jan 09 01:22:26 crc kubenswrapper[4945]: I0109 01:22:26.526934 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-dvrl7"
Jan 09 01:22:27 crc kubenswrapper[4945]: I0109 01:22:27.120923 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-dvrl7"]
Jan 09 01:22:28 crc kubenswrapper[4945]: I0109 01:22:28.127095 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-dvrl7" event={"ID":"947b21b1-6853-466c-8115-a0680381f340","Type":"ContainerStarted","Data":"059ab5b361042b96fa6945b35e1dbd71d97c9459a31350c34e5f9dfb9cffd301"}
Jan 09 01:22:28 crc kubenswrapper[4945]: I0109 01:22:28.127427 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-dvrl7" event={"ID":"947b21b1-6853-466c-8115-a0680381f340","Type":"ContainerStarted","Data":"86bde4285dbc19218681cc1901d7ca81a8cbe76b191e299a28b9151e61661b32"}
Jan 09 01:22:28 crc kubenswrapper[4945]: I0109 01:22:28.152461 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-dvrl7" podStartSLOduration=1.954713455 podStartE2EDuration="2.152435798s" podCreationTimestamp="2026-01-09 01:22:26 +0000 UTC" firstStartedPulling="2026-01-09 01:22:27.122659254 +0000 UTC m=+7617.433818210" lastFinishedPulling="2026-01-09 01:22:27.320381607 +0000 UTC m=+7617.631540553" observedRunningTime="2026-01-09 01:22:28.14117713 +0000 UTC m=+7618.452336106" watchObservedRunningTime="2026-01-09 01:22:28.152435798 +0000 UTC m=+7618.463594764"
Jan 09 01:22:43 crc kubenswrapper[4945]: I0109 01:22:43.578766 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 01:22:43 crc kubenswrapper[4945]: I0109 01:22:43.579462 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 01:22:43 crc kubenswrapper[4945]: I0109 01:22:43.579533 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95"
Jan 09 01:22:43 crc kubenswrapper[4945]: I0109 01:22:43.580510 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fcd3e0c4a5eaf338ded1a32a4cba747f0799b31c8563ade8fd3b0809adf0d742"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 01:22:43 crc kubenswrapper[4945]: I0109 01:22:43.580632 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://fcd3e0c4a5eaf338ded1a32a4cba747f0799b31c8563ade8fd3b0809adf0d742" gracePeriod=600
Jan 09 01:22:44 crc kubenswrapper[4945]: I0109 01:22:44.289840 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="fcd3e0c4a5eaf338ded1a32a4cba747f0799b31c8563ade8fd3b0809adf0d742" exitCode=0
Jan 09 01:22:44 crc kubenswrapper[4945]: I0109 01:22:44.289905 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"fcd3e0c4a5eaf338ded1a32a4cba747f0799b31c8563ade8fd3b0809adf0d742"}
Jan 09 01:22:44 crc kubenswrapper[4945]: I0109 01:22:44.290628 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400"}
Jan 09 01:22:44 crc kubenswrapper[4945]: I0109 01:22:44.290661 4945 scope.go:117] "RemoveContainer" containerID="5c9eef048ecb1301685281887090f9824fb6c6a09fec46376a486b7bb572c670"
Jan 09 01:24:00 crc kubenswrapper[4945]: I0109 01:24:00.118499 4945 generic.go:334] "Generic (PLEG): container finished" podID="947b21b1-6853-466c-8115-a0680381f340" containerID="059ab5b361042b96fa6945b35e1dbd71d97c9459a31350c34e5f9dfb9cffd301" exitCode=0
Jan 09 01:24:00 crc kubenswrapper[4945]: I0109 01:24:00.118580 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-dvrl7" event={"ID":"947b21b1-6853-466c-8115-a0680381f340","Type":"ContainerDied","Data":"059ab5b361042b96fa6945b35e1dbd71d97c9459a31350c34e5f9dfb9cffd301"}
Jan 09 01:24:01 crc kubenswrapper[4945]: I0109 01:24:01.604083 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-dvrl7"
Jan 09 01:24:01 crc kubenswrapper[4945]: I0109 01:24:01.692812 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-inventory\") pod \"947b21b1-6853-466c-8115-a0680381f340\" (UID: \"947b21b1-6853-466c-8115-a0680381f340\") "
Jan 09 01:24:01 crc kubenswrapper[4945]: I0109 01:24:01.692975 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4thgv\" (UniqueName: \"kubernetes.io/projected/947b21b1-6853-466c-8115-a0680381f340-kube-api-access-4thgv\") pod \"947b21b1-6853-466c-8115-a0680381f340\" (UID: \"947b21b1-6853-466c-8115-a0680381f340\") "
Jan 09 01:24:01 crc kubenswrapper[4945]: I0109 01:24:01.693029 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-ceph\") pod \"947b21b1-6853-466c-8115-a0680381f340\" (UID: \"947b21b1-6853-466c-8115-a0680381f340\") "
Jan 09 01:24:01 crc kubenswrapper[4945]: I0109 01:24:01.693162 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-ssh-key-openstack-cell1\") pod \"947b21b1-6853-466c-8115-a0680381f340\" (UID: \"947b21b1-6853-466c-8115-a0680381f340\") "
Jan 09 01:24:01 crc kubenswrapper[4945]: I0109 01:24:01.699239 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-ceph" (OuterVolumeSpecName: "ceph") pod "947b21b1-6853-466c-8115-a0680381f340" (UID: "947b21b1-6853-466c-8115-a0680381f340"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:24:01 crc kubenswrapper[4945]: I0109 01:24:01.699549 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/947b21b1-6853-466c-8115-a0680381f340-kube-api-access-4thgv" (OuterVolumeSpecName: "kube-api-access-4thgv") pod "947b21b1-6853-466c-8115-a0680381f340" (UID: "947b21b1-6853-466c-8115-a0680381f340"). InnerVolumeSpecName "kube-api-access-4thgv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:24:01 crc kubenswrapper[4945]: I0109 01:24:01.724664 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "947b21b1-6853-466c-8115-a0680381f340" (UID: "947b21b1-6853-466c-8115-a0680381f340"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:24:01 crc kubenswrapper[4945]: I0109 01:24:01.748423 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-inventory" (OuterVolumeSpecName: "inventory") pod "947b21b1-6853-466c-8115-a0680381f340" (UID: "947b21b1-6853-466c-8115-a0680381f340"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:24:01 crc kubenswrapper[4945]: I0109 01:24:01.795590 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4thgv\" (UniqueName: \"kubernetes.io/projected/947b21b1-6853-466c-8115-a0680381f340-kube-api-access-4thgv\") on node \"crc\" DevicePath \"\""
Jan 09 01:24:01 crc kubenswrapper[4945]: I0109 01:24:01.795629 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-ceph\") on node \"crc\" DevicePath \"\""
Jan 09 01:24:01 crc kubenswrapper[4945]: I0109 01:24:01.795644 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 09 01:24:01 crc kubenswrapper[4945]: I0109 01:24:01.795657 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/947b21b1-6853-466c-8115-a0680381f340-inventory\") on node \"crc\" DevicePath \"\""
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.160779 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-dvrl7" event={"ID":"947b21b1-6853-466c-8115-a0680381f340","Type":"ContainerDied","Data":"86bde4285dbc19218681cc1901d7ca81a8cbe76b191e299a28b9151e61661b32"}
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.160832 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86bde4285dbc19218681cc1901d7ca81a8cbe76b191e299a28b9151e61661b32"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.160832 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-dvrl7"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.229735 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-g9b52"]
Jan 09 01:24:02 crc kubenswrapper[4945]: E0109 01:24:02.230577 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947b21b1-6853-466c-8115-a0680381f340" containerName="download-cache-openstack-openstack-cell1"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.230602 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="947b21b1-6853-466c-8115-a0680381f340" containerName="download-cache-openstack-openstack-cell1"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.230858 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="947b21b1-6853-466c-8115-a0680381f340" containerName="download-cache-openstack-openstack-cell1"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.231821 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-g9b52"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.236373 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.236530 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.236670 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.236980 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.241796 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-g9b52"]
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.309568 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-ceph\") pod \"configure-network-openstack-openstack-cell1-g9b52\" (UID: \"66c85b89-d1b3-4657-97e7-df9f3c390638\") " pod="openstack/configure-network-openstack-openstack-cell1-g9b52"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.309626 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pklqg\" (UniqueName: \"kubernetes.io/projected/66c85b89-d1b3-4657-97e7-df9f3c390638-kube-api-access-pklqg\") pod \"configure-network-openstack-openstack-cell1-g9b52\" (UID: \"66c85b89-d1b3-4657-97e7-df9f3c390638\") " pod="openstack/configure-network-openstack-openstack-cell1-g9b52"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.309714 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-g9b52\" (UID: \"66c85b89-d1b3-4657-97e7-df9f3c390638\") " pod="openstack/configure-network-openstack-openstack-cell1-g9b52"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.309734 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-inventory\") pod \"configure-network-openstack-openstack-cell1-g9b52\" (UID: \"66c85b89-d1b3-4657-97e7-df9f3c390638\") " pod="openstack/configure-network-openstack-openstack-cell1-g9b52"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.411436 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-ceph\") pod \"configure-network-openstack-openstack-cell1-g9b52\" (UID: \"66c85b89-d1b3-4657-97e7-df9f3c390638\") " pod="openstack/configure-network-openstack-openstack-cell1-g9b52"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.411487 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pklqg\" (UniqueName: \"kubernetes.io/projected/66c85b89-d1b3-4657-97e7-df9f3c390638-kube-api-access-pklqg\") pod \"configure-network-openstack-openstack-cell1-g9b52\" (UID: \"66c85b89-d1b3-4657-97e7-df9f3c390638\") " pod="openstack/configure-network-openstack-openstack-cell1-g9b52"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.411558 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-g9b52\" (UID: \"66c85b89-d1b3-4657-97e7-df9f3c390638\") " pod="openstack/configure-network-openstack-openstack-cell1-g9b52"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.411577 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-inventory\") pod \"configure-network-openstack-openstack-cell1-g9b52\" (UID: \"66c85b89-d1b3-4657-97e7-df9f3c390638\") " pod="openstack/configure-network-openstack-openstack-cell1-g9b52"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.417527 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-inventory\") pod \"configure-network-openstack-openstack-cell1-g9b52\" (UID: \"66c85b89-d1b3-4657-97e7-df9f3c390638\") " pod="openstack/configure-network-openstack-openstack-cell1-g9b52"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.417792 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-ceph\") pod \"configure-network-openstack-openstack-cell1-g9b52\" (UID: \"66c85b89-d1b3-4657-97e7-df9f3c390638\") " pod="openstack/configure-network-openstack-openstack-cell1-g9b52"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.427727 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-ssh-key-openstack-cell1\") pod \"configure-network-openstack-openstack-cell1-g9b52\" (UID: \"66c85b89-d1b3-4657-97e7-df9f3c390638\") " pod="openstack/configure-network-openstack-openstack-cell1-g9b52"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.430509 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pklqg\" (UniqueName: \"kubernetes.io/projected/66c85b89-d1b3-4657-97e7-df9f3c390638-kube-api-access-pklqg\") pod \"configure-network-openstack-openstack-cell1-g9b52\" (UID: \"66c85b89-d1b3-4657-97e7-df9f3c390638\") " pod="openstack/configure-network-openstack-openstack-cell1-g9b52"
Jan 09 01:24:02 crc kubenswrapper[4945]: I0109 01:24:02.558309 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-g9b52"
Jan 09 01:24:03 crc kubenswrapper[4945]: I0109 01:24:03.136504 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-g9b52"]
Jan 09 01:24:03 crc kubenswrapper[4945]: I0109 01:24:03.176923 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-g9b52" event={"ID":"66c85b89-d1b3-4657-97e7-df9f3c390638","Type":"ContainerStarted","Data":"0f41c19e2e90ef229d3b322bf94691c320f35e30eb6a03a50d3f5c36139448e0"}
Jan 09 01:24:04 crc kubenswrapper[4945]: I0109 01:24:04.199567 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-g9b52" event={"ID":"66c85b89-d1b3-4657-97e7-df9f3c390638","Type":"ContainerStarted","Data":"65b51fe0a66a0369e9ffc376135eeea5821bb317e489bf505d08d6f29100b474"}
Jan 09 01:24:04 crc kubenswrapper[4945]: I0109 01:24:04.226766 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-g9b52" podStartSLOduration=2.022137115 podStartE2EDuration="2.226745699s" podCreationTimestamp="2026-01-09 01:24:02 +0000 UTC" firstStartedPulling="2026-01-09 01:24:03.142952178 +0000 UTC m=+7713.454111134" lastFinishedPulling="2026-01-09 01:24:03.347560742 +0000 UTC m=+7713.658719718" observedRunningTime="2026-01-09 01:24:04.21872653 +0000 UTC m=+7714.529885496" watchObservedRunningTime="2026-01-09 01:24:04.226745699 +0000 UTC m=+7714.537904645"
Jan 09 01:24:43 crc kubenswrapper[4945]: I0109 01:24:43.578861 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 01:24:43 crc kubenswrapper[4945]: I0109 01:24:43.579619 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 01:25:13 crc kubenswrapper[4945]: I0109 01:25:13.578361 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 01:25:13 crc kubenswrapper[4945]: I0109 01:25:13.579259 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 01:25:30 crc kubenswrapper[4945]: I0109 01:25:30.136515 4945 generic.go:334] "Generic (PLEG): container finished" podID="66c85b89-d1b3-4657-97e7-df9f3c390638" containerID="65b51fe0a66a0369e9ffc376135eeea5821bb317e489bf505d08d6f29100b474" exitCode=0
Jan 09 01:25:30 crc kubenswrapper[4945]: I0109 01:25:30.136648 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-g9b52" event={"ID":"66c85b89-d1b3-4657-97e7-df9f3c390638","Type":"ContainerDied","Data":"65b51fe0a66a0369e9ffc376135eeea5821bb317e489bf505d08d6f29100b474"}
Jan 09 01:25:31 crc kubenswrapper[4945]: I0109 01:25:31.592802 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-g9b52"
Jan 09 01:25:31 crc kubenswrapper[4945]: I0109 01:25:31.743090 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pklqg\" (UniqueName: \"kubernetes.io/projected/66c85b89-d1b3-4657-97e7-df9f3c390638-kube-api-access-pklqg\") pod \"66c85b89-d1b3-4657-97e7-df9f3c390638\" (UID: \"66c85b89-d1b3-4657-97e7-df9f3c390638\") "
Jan 09 01:25:31 crc kubenswrapper[4945]: I0109 01:25:31.743722 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-inventory\") pod \"66c85b89-d1b3-4657-97e7-df9f3c390638\" (UID: \"66c85b89-d1b3-4657-97e7-df9f3c390638\") "
Jan 09 01:25:31 crc kubenswrapper[4945]: I0109 01:25:31.744054 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-ceph\") pod \"66c85b89-d1b3-4657-97e7-df9f3c390638\" (UID: \"66c85b89-d1b3-4657-97e7-df9f3c390638\") "
Jan 09 01:25:31 crc kubenswrapper[4945]: I0109 01:25:31.744118 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-ssh-key-openstack-cell1\") pod \"66c85b89-d1b3-4657-97e7-df9f3c390638\" (UID: \"66c85b89-d1b3-4657-97e7-df9f3c390638\") "
Jan 09 01:25:31 crc kubenswrapper[4945]: I0109 01:25:31.751213 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-ceph" (OuterVolumeSpecName: "ceph") pod "66c85b89-d1b3-4657-97e7-df9f3c390638" (UID: "66c85b89-d1b3-4657-97e7-df9f3c390638"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:25:31 crc kubenswrapper[4945]: I0109 01:25:31.751253 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66c85b89-d1b3-4657-97e7-df9f3c390638-kube-api-access-pklqg" (OuterVolumeSpecName: "kube-api-access-pklqg") pod "66c85b89-d1b3-4657-97e7-df9f3c390638" (UID: "66c85b89-d1b3-4657-97e7-df9f3c390638"). InnerVolumeSpecName "kube-api-access-pklqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:25:31 crc kubenswrapper[4945]: I0109 01:25:31.778097 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "66c85b89-d1b3-4657-97e7-df9f3c390638" (UID: "66c85b89-d1b3-4657-97e7-df9f3c390638"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:25:31 crc kubenswrapper[4945]: I0109 01:25:31.783600 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-inventory" (OuterVolumeSpecName: "inventory") pod "66c85b89-d1b3-4657-97e7-df9f3c390638" (UID: "66c85b89-d1b3-4657-97e7-df9f3c390638"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:25:31 crc kubenswrapper[4945]: I0109 01:25:31.848044 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 01:25:31 crc kubenswrapper[4945]: I0109 01:25:31.848087 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 01:25:31 crc kubenswrapper[4945]: I0109 01:25:31.848103 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/66c85b89-d1b3-4657-97e7-df9f3c390638-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 09 01:25:31 crc kubenswrapper[4945]: I0109 01:25:31.848119 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pklqg\" (UniqueName: \"kubernetes.io/projected/66c85b89-d1b3-4657-97e7-df9f3c390638-kube-api-access-pklqg\") on node \"crc\" DevicePath \"\"" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.158596 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-g9b52" event={"ID":"66c85b89-d1b3-4657-97e7-df9f3c390638","Type":"ContainerDied","Data":"0f41c19e2e90ef229d3b322bf94691c320f35e30eb6a03a50d3f5c36139448e0"} Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.158652 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f41c19e2e90ef229d3b322bf94691c320f35e30eb6a03a50d3f5c36139448e0" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.158649 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-g9b52" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.260635 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-hpvhk"] Jan 09 01:25:32 crc kubenswrapper[4945]: E0109 01:25:32.261208 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c85b89-d1b3-4657-97e7-df9f3c390638" containerName="configure-network-openstack-openstack-cell1" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.261226 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="66c85b89-d1b3-4657-97e7-df9f3c390638" containerName="configure-network-openstack-openstack-cell1" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.261501 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="66c85b89-d1b3-4657-97e7-df9f3c390638" containerName="configure-network-openstack-openstack-cell1" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.262655 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.269627 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.272296 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.272860 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.272988 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.298263 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-hpvhk"] Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.358769 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-inventory\") pod \"validate-network-openstack-openstack-cell1-hpvhk\" (UID: \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\") " pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.359125 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fxkt\" (UniqueName: \"kubernetes.io/projected/7ca909c2-0c38-4762-aaa7-f15abf2e5548-kube-api-access-2fxkt\") pod \"validate-network-openstack-openstack-cell1-hpvhk\" (UID: \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\") " pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.359205 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-ceph\") pod \"validate-network-openstack-openstack-cell1-hpvhk\" (UID: \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\") " pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.359648 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-hpvhk\" (UID: \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\") " pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.462403 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-hpvhk\" (UID: \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\") " pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.462657 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-inventory\") pod \"validate-network-openstack-openstack-cell1-hpvhk\" (UID: \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\") " 
pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.462764 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fxkt\" (UniqueName: \"kubernetes.io/projected/7ca909c2-0c38-4762-aaa7-f15abf2e5548-kube-api-access-2fxkt\") pod \"validate-network-openstack-openstack-cell1-hpvhk\" (UID: \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\") " pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.462823 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-ceph\") pod \"validate-network-openstack-openstack-cell1-hpvhk\" (UID: \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\") " pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.468389 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-inventory\") pod \"validate-network-openstack-openstack-cell1-hpvhk\" (UID: \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\") " pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.469578 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-ceph\") pod \"validate-network-openstack-openstack-cell1-hpvhk\" (UID: \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\") " pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.474652 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-ssh-key-openstack-cell1\") pod \"validate-network-openstack-openstack-cell1-hpvhk\" (UID: \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\") " pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.496482 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fxkt\" (UniqueName: \"kubernetes.io/projected/7ca909c2-0c38-4762-aaa7-f15abf2e5548-kube-api-access-2fxkt\") pod \"validate-network-openstack-openstack-cell1-hpvhk\" (UID: \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\") " pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" Jan 09 01:25:32 crc kubenswrapper[4945]: I0109 01:25:32.584913 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" Jan 09 01:25:33 crc kubenswrapper[4945]: I0109 01:25:33.174155 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-hpvhk"] Jan 09 01:25:33 crc kubenswrapper[4945]: W0109 01:25:33.189942 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ca909c2_0c38_4762_aaa7_f15abf2e5548.slice/crio-b967c53fc89c2904f3b4c5e1fd95c5724451240250531e2113773c0959742295 WatchSource:0}: Error finding container b967c53fc89c2904f3b4c5e1fd95c5724451240250531e2113773c0959742295: Status 404 returned error can't find the container with id b967c53fc89c2904f3b4c5e1fd95c5724451240250531e2113773c0959742295 Jan 09 01:25:34 crc kubenswrapper[4945]: I0109 01:25:34.183390 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" event={"ID":"7ca909c2-0c38-4762-aaa7-f15abf2e5548","Type":"ContainerStarted","Data":"306d7d21c22ab6c07c5cc1cc3f73951eeae53de730dea373ea6977f8aa862240"} Jan 09 01:25:34 crc kubenswrapper[4945]: I0109 01:25:34.183751 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" event={"ID":"7ca909c2-0c38-4762-aaa7-f15abf2e5548","Type":"ContainerStarted","Data":"b967c53fc89c2904f3b4c5e1fd95c5724451240250531e2113773c0959742295"} Jan 09 01:25:34 crc kubenswrapper[4945]: I0109 01:25:34.206231 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" podStartSLOduration=2.054320521 podStartE2EDuration="2.206207079s" podCreationTimestamp="2026-01-09 01:25:32 +0000 UTC" firstStartedPulling="2026-01-09 01:25:33.195293332 +0000 UTC m=+7803.506452278" lastFinishedPulling="2026-01-09 01:25:33.34717989 +0000 UTC m=+7803.658338836" observedRunningTime="2026-01-09 01:25:34.203788569 +0000 UTC m=+7804.514947515" watchObservedRunningTime="2026-01-09 01:25:34.206207079 +0000 UTC m=+7804.517366025" Jan 09 01:25:39 crc kubenswrapper[4945]: I0109 01:25:39.234078 4945 generic.go:334] "Generic (PLEG): container finished" podID="7ca909c2-0c38-4762-aaa7-f15abf2e5548" containerID="306d7d21c22ab6c07c5cc1cc3f73951eeae53de730dea373ea6977f8aa862240" exitCode=0 Jan 09 01:25:39 crc kubenswrapper[4945]: I0109 01:25:39.234183 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" event={"ID":"7ca909c2-0c38-4762-aaa7-f15abf2e5548","Type":"ContainerDied","Data":"306d7d21c22ab6c07c5cc1cc3f73951eeae53de730dea373ea6977f8aa862240"} Jan 09 01:25:40 crc kubenswrapper[4945]: I0109 01:25:40.750028 4945 util.go:48] "No ready sandbox for pod can be found. 
Jan 09 01:25:40 crc kubenswrapper[4945]: I0109 01:25:40.856090 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fxkt\" (UniqueName: \"kubernetes.io/projected/7ca909c2-0c38-4762-aaa7-f15abf2e5548-kube-api-access-2fxkt\") pod \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\" (UID: \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\") "
Jan 09 01:25:40 crc kubenswrapper[4945]: I0109 01:25:40.856243 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-ssh-key-openstack-cell1\") pod \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\" (UID: \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\") "
Jan 09 01:25:40 crc kubenswrapper[4945]: I0109 01:25:40.856266 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-ceph\") pod \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\" (UID: \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\") "
Jan 09 01:25:40 crc kubenswrapper[4945]: I0109 01:25:40.856354 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-inventory\") pod \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\" (UID: \"7ca909c2-0c38-4762-aaa7-f15abf2e5548\") "
Jan 09 01:25:40 crc kubenswrapper[4945]: I0109 01:25:40.866146 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca909c2-0c38-4762-aaa7-f15abf2e5548-kube-api-access-2fxkt" (OuterVolumeSpecName: "kube-api-access-2fxkt") pod "7ca909c2-0c38-4762-aaa7-f15abf2e5548" (UID: "7ca909c2-0c38-4762-aaa7-f15abf2e5548"). InnerVolumeSpecName "kube-api-access-2fxkt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:25:40 crc kubenswrapper[4945]: I0109 01:25:40.868074 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-ceph" (OuterVolumeSpecName: "ceph") pod "7ca909c2-0c38-4762-aaa7-f15abf2e5548" (UID: "7ca909c2-0c38-4762-aaa7-f15abf2e5548"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:25:40 crc kubenswrapper[4945]: I0109 01:25:40.917594 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "7ca909c2-0c38-4762-aaa7-f15abf2e5548" (UID: "7ca909c2-0c38-4762-aaa7-f15abf2e5548"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:25:40 crc kubenswrapper[4945]: I0109 01:25:40.921547 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-inventory" (OuterVolumeSpecName: "inventory") pod "7ca909c2-0c38-4762-aaa7-f15abf2e5548" (UID: "7ca909c2-0c38-4762-aaa7-f15abf2e5548"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:25:40 crc kubenswrapper[4945]: I0109 01:25:40.959285 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fxkt\" (UniqueName: \"kubernetes.io/projected/7ca909c2-0c38-4762-aaa7-f15abf2e5548-kube-api-access-2fxkt\") on node \"crc\" DevicePath \"\""
Jan 09 01:25:40 crc kubenswrapper[4945]: I0109 01:25:40.959341 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 09 01:25:40 crc kubenswrapper[4945]: I0109 01:25:40.959356 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-ceph\") on node \"crc\" DevicePath \"\""
Jan 09 01:25:40 crc kubenswrapper[4945]: I0109 01:25:40.959370 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ca909c2-0c38-4762-aaa7-f15abf2e5548-inventory\") on node \"crc\" DevicePath \"\""
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.263588 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-hpvhk" event={"ID":"7ca909c2-0c38-4762-aaa7-f15abf2e5548","Type":"ContainerDied","Data":"b967c53fc89c2904f3b4c5e1fd95c5724451240250531e2113773c0959742295"}
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.264036 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b967c53fc89c2904f3b4c5e1fd95c5724451240250531e2113773c0959742295"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.263762 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-hpvhk"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.352713 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-rn5sr"]
Jan 09 01:25:41 crc kubenswrapper[4945]: E0109 01:25:41.353152 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca909c2-0c38-4762-aaa7-f15abf2e5548" containerName="validate-network-openstack-openstack-cell1"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.353165 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca909c2-0c38-4762-aaa7-f15abf2e5548" containerName="validate-network-openstack-openstack-cell1"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.353392 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ca909c2-0c38-4762-aaa7-f15abf2e5548" containerName="validate-network-openstack-openstack-cell1"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.354313 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-rn5sr"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.357110 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.357282 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.357302 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.357606 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.371657 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-ceph\") pod \"install-os-openstack-openstack-cell1-rn5sr\" (UID: \"d2b23121-c02d-4a5a-b40c-8331347f5911\") " pod="openstack/install-os-openstack-openstack-cell1-rn5sr"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.371794 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whmp5\" (UniqueName: \"kubernetes.io/projected/d2b23121-c02d-4a5a-b40c-8331347f5911-kube-api-access-whmp5\") pod \"install-os-openstack-openstack-cell1-rn5sr\" (UID: \"d2b23121-c02d-4a5a-b40c-8331347f5911\") " pod="openstack/install-os-openstack-openstack-cell1-rn5sr"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.371836 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-rn5sr\" (UID: \"d2b23121-c02d-4a5a-b40c-8331347f5911\") " pod="openstack/install-os-openstack-openstack-cell1-rn5sr"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.371956 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-inventory\") pod \"install-os-openstack-openstack-cell1-rn5sr\" (UID: \"d2b23121-c02d-4a5a-b40c-8331347f5911\") " pod="openstack/install-os-openstack-openstack-cell1-rn5sr"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.392380 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-rn5sr"]
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.473587 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-inventory\") pod \"install-os-openstack-openstack-cell1-rn5sr\" (UID: \"d2b23121-c02d-4a5a-b40c-8331347f5911\") " pod="openstack/install-os-openstack-openstack-cell1-rn5sr"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.473673 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-ceph\") pod \"install-os-openstack-openstack-cell1-rn5sr\" (UID: \"d2b23121-c02d-4a5a-b40c-8331347f5911\") " pod="openstack/install-os-openstack-openstack-cell1-rn5sr"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.473816 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whmp5\" (UniqueName: \"kubernetes.io/projected/d2b23121-c02d-4a5a-b40c-8331347f5911-kube-api-access-whmp5\") pod \"install-os-openstack-openstack-cell1-rn5sr\" (UID: \"d2b23121-c02d-4a5a-b40c-8331347f5911\") " pod="openstack/install-os-openstack-openstack-cell1-rn5sr"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.473841 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-rn5sr\" (UID: \"d2b23121-c02d-4a5a-b40c-8331347f5911\") " pod="openstack/install-os-openstack-openstack-cell1-rn5sr"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.477792 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-inventory\") pod \"install-os-openstack-openstack-cell1-rn5sr\" (UID: \"d2b23121-c02d-4a5a-b40c-8331347f5911\") " pod="openstack/install-os-openstack-openstack-cell1-rn5sr"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.478054 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-ssh-key-openstack-cell1\") pod \"install-os-openstack-openstack-cell1-rn5sr\" (UID: \"d2b23121-c02d-4a5a-b40c-8331347f5911\") " pod="openstack/install-os-openstack-openstack-cell1-rn5sr"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.479124 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-ceph\") pod \"install-os-openstack-openstack-cell1-rn5sr\" (UID: \"d2b23121-c02d-4a5a-b40c-8331347f5911\") " pod="openstack/install-os-openstack-openstack-cell1-rn5sr"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.493342 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whmp5\" (UniqueName: \"kubernetes.io/projected/d2b23121-c02d-4a5a-b40c-8331347f5911-kube-api-access-whmp5\") pod \"install-os-openstack-openstack-cell1-rn5sr\" (UID: \"d2b23121-c02d-4a5a-b40c-8331347f5911\") " pod="openstack/install-os-openstack-openstack-cell1-rn5sr"
Jan 09 01:25:41 crc kubenswrapper[4945]: I0109 01:25:41.687052 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-rn5sr"
Jan 09 01:25:42 crc kubenswrapper[4945]: I0109 01:25:42.215324 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-rn5sr"]
Jan 09 01:25:42 crc kubenswrapper[4945]: I0109 01:25:42.276177 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-rn5sr" event={"ID":"d2b23121-c02d-4a5a-b40c-8331347f5911","Type":"ContainerStarted","Data":"74cd2e99c8ece83deb48bbc53cc1bd735be6a55c3dbe74cc44f5ae6a934dd9c0"}
Jan 09 01:25:43 crc kubenswrapper[4945]: I0109 01:25:43.294076 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-rn5sr" event={"ID":"d2b23121-c02d-4a5a-b40c-8331347f5911","Type":"ContainerStarted","Data":"1a511655e5c1435c6c20df99d300a401ada62329681c7bd8fba0baeca70a9355"}
Jan 09 01:25:43 crc kubenswrapper[4945]: I0109 01:25:43.352571 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-rn5sr" podStartSLOduration=2.188247674 podStartE2EDuration="2.35254736s" podCreationTimestamp="2026-01-09 01:25:41 +0000 UTC" firstStartedPulling="2026-01-09 01:25:42.218853964 +0000 UTC m=+7812.530012910" lastFinishedPulling="2026-01-09 01:25:42.38315365 +0000 UTC m=+7812.694312596" observedRunningTime="2026-01-09 01:25:43.348431008 +0000 UTC m=+7813.659589964" watchObservedRunningTime="2026-01-09 01:25:43.35254736 +0000 UTC m=+7813.663706326"
Jan 09 01:25:43 crc kubenswrapper[4945]: I0109 01:25:43.577945 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 01:25:43 crc kubenswrapper[4945]: I0109 01:25:43.578056 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 01:25:43 crc kubenswrapper[4945]: I0109 01:25:43.578125 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95"
Jan 09 01:25:43 crc kubenswrapper[4945]: I0109 01:25:43.579159 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 01:25:43 crc kubenswrapper[4945]: I0109 01:25:43.579237 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" gracePeriod=600
Jan 09 01:25:43 crc kubenswrapper[4945]: E0109 01:25:43.725791 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:25:44 crc kubenswrapper[4945]: I0109 01:25:44.306025 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" exitCode=0
Jan 09 01:25:44 crc kubenswrapper[4945]: I0109 01:25:44.306149 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400"}
Jan 09 01:25:44 crc kubenswrapper[4945]: I0109 01:25:44.306250 4945 scope.go:117] "RemoveContainer" containerID="fcd3e0c4a5eaf338ded1a32a4cba747f0799b31c8563ade8fd3b0809adf0d742"
Jan 09 01:25:44 crc kubenswrapper[4945]: I0109 01:25:44.307617 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400"
Jan 09 01:25:44 crc kubenswrapper[4945]: E0109 01:25:44.308125 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:25:59 crc kubenswrapper[4945]: I0109 01:25:59.000895 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400"
Jan 09 01:25:59 crc kubenswrapper[4945]: E0109 01:25:59.001585 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:26:11 crc kubenswrapper[4945]: I0109 01:26:11.000892 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400"
Jan 09 01:26:11 crc kubenswrapper[4945]: E0109 01:26:11.001746 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:26:13 crc kubenswrapper[4945]: I0109 01:26:13.559058 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sjltf"]
Jan 09 01:26:13 crc kubenswrapper[4945]: I0109 01:26:13.562363 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjltf"
Jan 09 01:26:13 crc kubenswrapper[4945]: I0109 01:26:13.585833 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjltf"]
Jan 09 01:26:13 crc kubenswrapper[4945]: I0109 01:26:13.654669 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49aa6475-2f5f-4f2f-a616-231fe89aaab4-utilities\") pod \"redhat-marketplace-sjltf\" (UID: \"49aa6475-2f5f-4f2f-a616-231fe89aaab4\") " pod="openshift-marketplace/redhat-marketplace-sjltf"
Jan 09 01:26:13 crc kubenswrapper[4945]: I0109 01:26:13.654743 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rglp2\" (UniqueName: \"kubernetes.io/projected/49aa6475-2f5f-4f2f-a616-231fe89aaab4-kube-api-access-rglp2\") pod \"redhat-marketplace-sjltf\" (UID: \"49aa6475-2f5f-4f2f-a616-231fe89aaab4\") " pod="openshift-marketplace/redhat-marketplace-sjltf"
Jan 09 01:26:13 crc kubenswrapper[4945]: I0109 01:26:13.654770 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49aa6475-2f5f-4f2f-a616-231fe89aaab4-catalog-content\") pod \"redhat-marketplace-sjltf\" (UID: \"49aa6475-2f5f-4f2f-a616-231fe89aaab4\") " pod="openshift-marketplace/redhat-marketplace-sjltf"
Jan 09 01:26:13 crc kubenswrapper[4945]: I0109 01:26:13.757850 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49aa6475-2f5f-4f2f-a616-231fe89aaab4-utilities\") pod \"redhat-marketplace-sjltf\" (UID: \"49aa6475-2f5f-4f2f-a616-231fe89aaab4\") " pod="openshift-marketplace/redhat-marketplace-sjltf"
Jan 09 01:26:13 crc kubenswrapper[4945]: I0109 01:26:13.757928 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rglp2\" (UniqueName: \"kubernetes.io/projected/49aa6475-2f5f-4f2f-a616-231fe89aaab4-kube-api-access-rglp2\") pod \"redhat-marketplace-sjltf\" (UID: \"49aa6475-2f5f-4f2f-a616-231fe89aaab4\") " pod="openshift-marketplace/redhat-marketplace-sjltf"
Jan 09 01:26:13 crc kubenswrapper[4945]: I0109 01:26:13.757954 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49aa6475-2f5f-4f2f-a616-231fe89aaab4-catalog-content\") pod \"redhat-marketplace-sjltf\" (UID: \"49aa6475-2f5f-4f2f-a616-231fe89aaab4\") " pod="openshift-marketplace/redhat-marketplace-sjltf"
Jan 09 01:26:13 crc kubenswrapper[4945]: I0109 01:26:13.758546 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49aa6475-2f5f-4f2f-a616-231fe89aaab4-catalog-content\") pod \"redhat-marketplace-sjltf\" (UID: \"49aa6475-2f5f-4f2f-a616-231fe89aaab4\") " pod="openshift-marketplace/redhat-marketplace-sjltf"
Jan 09 01:26:13 crc kubenswrapper[4945]: I0109 01:26:13.758770 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49aa6475-2f5f-4f2f-a616-231fe89aaab4-utilities\") pod \"redhat-marketplace-sjltf\" (UID: \"49aa6475-2f5f-4f2f-a616-231fe89aaab4\") " pod="openshift-marketplace/redhat-marketplace-sjltf"
Jan 09 01:26:13 crc kubenswrapper[4945]: I0109 01:26:13.781909 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rglp2\" (UniqueName: \"kubernetes.io/projected/49aa6475-2f5f-4f2f-a616-231fe89aaab4-kube-api-access-rglp2\") pod \"redhat-marketplace-sjltf\" (UID: \"49aa6475-2f5f-4f2f-a616-231fe89aaab4\") " pod="openshift-marketplace/redhat-marketplace-sjltf"
succeeded for volume \"kube-api-access-rglp2\" (UniqueName: \"kubernetes.io/projected/49aa6475-2f5f-4f2f-a616-231fe89aaab4-kube-api-access-rglp2\") pod \"redhat-marketplace-sjltf\" (UID: \"49aa6475-2f5f-4f2f-a616-231fe89aaab4\") " pod="openshift-marketplace/redhat-marketplace-sjltf" Jan 09 01:26:13 crc kubenswrapper[4945]: I0109 01:26:13.899659 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjltf" Jan 09 01:26:14 crc kubenswrapper[4945]: W0109 01:26:14.407306 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49aa6475_2f5f_4f2f_a616_231fe89aaab4.slice/crio-efab4af5197de9702950aebb4895125318c3ea41c2fe8b542b039ddd0e0a92fe WatchSource:0}: Error finding container efab4af5197de9702950aebb4895125318c3ea41c2fe8b542b039ddd0e0a92fe: Status 404 returned error can't find the container with id efab4af5197de9702950aebb4895125318c3ea41c2fe8b542b039ddd0e0a92fe Jan 09 01:26:14 crc kubenswrapper[4945]: I0109 01:26:14.408822 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjltf"] Jan 09 01:26:14 crc kubenswrapper[4945]: I0109 01:26:14.675665 4945 generic.go:334] "Generic (PLEG): container finished" podID="49aa6475-2f5f-4f2f-a616-231fe89aaab4" containerID="e97d25bfd544541cf96243a94ff7ea14206b1d05d0d52de355c7159305bc2adc" exitCode=0 Jan 09 01:26:14 crc kubenswrapper[4945]: I0109 01:26:14.675720 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjltf" event={"ID":"49aa6475-2f5f-4f2f-a616-231fe89aaab4","Type":"ContainerDied","Data":"e97d25bfd544541cf96243a94ff7ea14206b1d05d0d52de355c7159305bc2adc"} Jan 09 01:26:14 crc kubenswrapper[4945]: I0109 01:26:14.676024 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjltf" event={"ID":"49aa6475-2f5f-4f2f-a616-231fe89aaab4","Type":"ContainerStarted","Data":"efab4af5197de9702950aebb4895125318c3ea41c2fe8b542b039ddd0e0a92fe"} Jan 09 01:26:15 crc kubenswrapper[4945]: I0109 01:26:15.686013 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjltf" event={"ID":"49aa6475-2f5f-4f2f-a616-231fe89aaab4","Type":"ContainerStarted","Data":"b9c2a173f84eb95cdd7649ee402d1264d11db4e818f0ff1da2662bce2ba2c57f"} Jan 09 01:26:16 crc kubenswrapper[4945]: I0109 01:26:16.545480 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rlbth"] Jan 09 01:26:16 crc kubenswrapper[4945]: I0109 01:26:16.547775 4945 util.go:30] "No sandbox for pod can be found. 
Jan 09 01:26:16 crc kubenswrapper[4945]: I0109 01:26:16.568193 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rlbth"]
Jan 09 01:26:16 crc kubenswrapper[4945]: I0109 01:26:16.618572 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3287411e-1e2d-40c8-858f-297fd27f2587-utilities\") pod \"certified-operators-rlbth\" (UID: \"3287411e-1e2d-40c8-858f-297fd27f2587\") " pod="openshift-marketplace/certified-operators-rlbth"
Jan 09 01:26:16 crc kubenswrapper[4945]: I0109 01:26:16.618709 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3287411e-1e2d-40c8-858f-297fd27f2587-catalog-content\") pod \"certified-operators-rlbth\" (UID: \"3287411e-1e2d-40c8-858f-297fd27f2587\") " pod="openshift-marketplace/certified-operators-rlbth"
Jan 09 01:26:16 crc kubenswrapper[4945]: I0109 01:26:16.618766 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zw9r\" (UniqueName: \"kubernetes.io/projected/3287411e-1e2d-40c8-858f-297fd27f2587-kube-api-access-8zw9r\") pod \"certified-operators-rlbth\" (UID: \"3287411e-1e2d-40c8-858f-297fd27f2587\") " pod="openshift-marketplace/certified-operators-rlbth"
Jan 09 01:26:16 crc kubenswrapper[4945]: I0109 01:26:16.697404 4945 generic.go:334] "Generic (PLEG): container finished" podID="49aa6475-2f5f-4f2f-a616-231fe89aaab4" containerID="b9c2a173f84eb95cdd7649ee402d1264d11db4e818f0ff1da2662bce2ba2c57f" exitCode=0
Jan 09 01:26:16 crc kubenswrapper[4945]: I0109 01:26:16.697473 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjltf" event={"ID":"49aa6475-2f5f-4f2f-a616-231fe89aaab4","Type":"ContainerDied","Data":"b9c2a173f84eb95cdd7649ee402d1264d11db4e818f0ff1da2662bce2ba2c57f"}
Jan 09 01:26:16 crc kubenswrapper[4945]: I0109 01:26:16.721531 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3287411e-1e2d-40c8-858f-297fd27f2587-catalog-content\") pod \"certified-operators-rlbth\" (UID: \"3287411e-1e2d-40c8-858f-297fd27f2587\") " pod="openshift-marketplace/certified-operators-rlbth"
Jan 09 01:26:16 crc kubenswrapper[4945]: I0109 01:26:16.721644 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zw9r\" (UniqueName: \"kubernetes.io/projected/3287411e-1e2d-40c8-858f-297fd27f2587-kube-api-access-8zw9r\") pod \"certified-operators-rlbth\" (UID: \"3287411e-1e2d-40c8-858f-297fd27f2587\") " pod="openshift-marketplace/certified-operators-rlbth"
Jan 09 01:26:16 crc kubenswrapper[4945]: I0109 01:26:16.721855 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3287411e-1e2d-40c8-858f-297fd27f2587-utilities\") pod \"certified-operators-rlbth\" (UID: \"3287411e-1e2d-40c8-858f-297fd27f2587\") " pod="openshift-marketplace/certified-operators-rlbth"
Jan 09 01:26:16 crc kubenswrapper[4945]: I0109 01:26:16.722153 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3287411e-1e2d-40c8-858f-297fd27f2587-catalog-content\") pod \"certified-operators-rlbth\" (UID: \"3287411e-1e2d-40c8-858f-297fd27f2587\") " pod="openshift-marketplace/certified-operators-rlbth"
Jan 09 01:26:16 crc kubenswrapper[4945]: I0109 01:26:16.722450 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3287411e-1e2d-40c8-858f-297fd27f2587-utilities\") pod \"certified-operators-rlbth\" (UID: \"3287411e-1e2d-40c8-858f-297fd27f2587\") " pod="openshift-marketplace/certified-operators-rlbth"
Jan 09 01:26:16 crc kubenswrapper[4945]: I0109 01:26:16.747398 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zw9r\" (UniqueName: \"kubernetes.io/projected/3287411e-1e2d-40c8-858f-297fd27f2587-kube-api-access-8zw9r\") pod \"certified-operators-rlbth\" (UID: \"3287411e-1e2d-40c8-858f-297fd27f2587\") " pod="openshift-marketplace/certified-operators-rlbth"
Jan 09 01:26:16 crc kubenswrapper[4945]: I0109 01:26:16.868474 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rlbth"
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.159909 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jvqqh"]
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.165153 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jvqqh"
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.194979 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jvqqh"]
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.236090 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-utilities\") pod \"community-operators-jvqqh\" (UID: \"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9\") " pod="openshift-marketplace/community-operators-jvqqh"
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.236168 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cflj\" (UniqueName: \"kubernetes.io/projected/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-kube-api-access-5cflj\") pod \"community-operators-jvqqh\" (UID: \"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9\") " pod="openshift-marketplace/community-operators-jvqqh"
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.236261 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-catalog-content\") pod \"community-operators-jvqqh\" (UID: \"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9\") " pod="openshift-marketplace/community-operators-jvqqh"
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.337962 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-utilities\") pod \"community-operators-jvqqh\" (UID: \"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9\") " pod="openshift-marketplace/community-operators-jvqqh"
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.338063 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cflj\" (UniqueName: \"kubernetes.io/projected/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-kube-api-access-5cflj\") pod \"community-operators-jvqqh\" (UID: \"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9\") " pod="openshift-marketplace/community-operators-jvqqh"
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.338151 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-catalog-content\") pod \"community-operators-jvqqh\" (UID: \"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9\") " pod="openshift-marketplace/community-operators-jvqqh"
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.338571 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-utilities\") pod \"community-operators-jvqqh\" (UID: \"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9\") " pod="openshift-marketplace/community-operators-jvqqh"
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.338631 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-catalog-content\") pod \"community-operators-jvqqh\" (UID: \"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9\") " pod="openshift-marketplace/community-operators-jvqqh"
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.367207 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cflj\" (UniqueName: \"kubernetes.io/projected/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-kube-api-access-5cflj\") pod \"community-operators-jvqqh\" (UID: \"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9\") " pod="openshift-marketplace/community-operators-jvqqh"
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.446411 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rlbth"]
Jan 09 01:26:17 crc kubenswrapper[4945]: W0109 01:26:17.446669 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3287411e_1e2d_40c8_858f_297fd27f2587.slice/crio-664b7bf0cf642c11dc5354b39ae45e4707bb6f4ed8be3a0c515a3861967e5ed2 WatchSource:0}: Error finding container 664b7bf0cf642c11dc5354b39ae45e4707bb6f4ed8be3a0c515a3861967e5ed2: Status 404 returned error can't find the container with id 664b7bf0cf642c11dc5354b39ae45e4707bb6f4ed8be3a0c515a3861967e5ed2
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.506780 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jvqqh"
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.742591 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjltf" event={"ID":"49aa6475-2f5f-4f2f-a616-231fe89aaab4","Type":"ContainerStarted","Data":"a281ffafdde9a7e7022d4b9bb253ab4af1dc03ae35d5bc1e58d8dfd00dae66a6"}
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.751050 4945 generic.go:334] "Generic (PLEG): container finished" podID="3287411e-1e2d-40c8-858f-297fd27f2587" containerID="35af743655f066fe56b592c0de4b30dd63437d6dcc554ec4491265fd4d897b01" exitCode=0
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.751093 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlbth" event={"ID":"3287411e-1e2d-40c8-858f-297fd27f2587","Type":"ContainerDied","Data":"35af743655f066fe56b592c0de4b30dd63437d6dcc554ec4491265fd4d897b01"}
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.751121 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlbth" event={"ID":"3287411e-1e2d-40c8-858f-297fd27f2587","Type":"ContainerStarted","Data":"664b7bf0cf642c11dc5354b39ae45e4707bb6f4ed8be3a0c515a3861967e5ed2"}
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.762581 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sjltf" podStartSLOduration=2.235190949 podStartE2EDuration="4.762534123s" podCreationTimestamp="2026-01-09 01:26:13 +0000 UTC" firstStartedPulling="2026-01-09 01:26:14.677840076 +0000 UTC m=+7844.988999022" lastFinishedPulling="2026-01-09 01:26:17.20518325 +0000 UTC m=+7847.516342196" observedRunningTime="2026-01-09 01:26:17.761775304 +0000 UTC m=+7848.072934250" watchObservedRunningTime="2026-01-09 01:26:17.762534123 +0000 UTC m=+7848.073693069"
Jan 09 01:26:17 crc kubenswrapper[4945]: W0109 01:26:17.808956 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3fe98c6_19d7_46d4_ab1c_c06aa8dbeba9.slice/crio-7bfe64c2c1a26f785206869773e1686f32704b4f4721d9e9f01e5e9bed28fde6 WatchSource:0}: Error finding container 7bfe64c2c1a26f785206869773e1686f32704b4f4721d9e9f01e5e9bed28fde6: Status 404 returned error can't find the container with id 7bfe64c2c1a26f785206869773e1686f32704b4f4721d9e9f01e5e9bed28fde6
Jan 09 01:26:17 crc kubenswrapper[4945]: I0109 01:26:17.809592 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jvqqh"]
Jan 09 01:26:18 crc kubenswrapper[4945]: I0109 01:26:18.764888 4945 generic.go:334] "Generic (PLEG): container finished" podID="f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9" containerID="f476b3ce3c48148541b3384551f2cb504a6f921749c4cfc0dd10e2083205a4b1" exitCode=0
Jan 09 01:26:18 crc kubenswrapper[4945]: I0109 01:26:18.764938 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvqqh" event={"ID":"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9","Type":"ContainerDied","Data":"f476b3ce3c48148541b3384551f2cb504a6f921749c4cfc0dd10e2083205a4b1"}
Jan 09 01:26:18 crc kubenswrapper[4945]: I0109 01:26:18.765684 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvqqh" event={"ID":"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9","Type":"ContainerStarted","Data":"7bfe64c2c1a26f785206869773e1686f32704b4f4721d9e9f01e5e9bed28fde6"}
Jan 09 01:26:19 crc kubenswrapper[4945]: I0109 01:26:19.779647 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvqqh" event={"ID":"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9","Type":"ContainerStarted","Data":"3d17a63b02f85fb22ecb5135e7148513464f2bc443d8804ba07d89e73ab71160"}
Jan 09 01:26:19 crc kubenswrapper[4945]: I0109 01:26:19.782359 4945 generic.go:334] "Generic (PLEG): container finished" podID="3287411e-1e2d-40c8-858f-297fd27f2587" containerID="8cdff764e4d4cc2c7f0c03274281adfa83f88fcc0f6c7dfc5e33eaa5ed9e3fec" exitCode=0
Jan 09 01:26:19 crc kubenswrapper[4945]: I0109 01:26:19.782396 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlbth" event={"ID":"3287411e-1e2d-40c8-858f-297fd27f2587","Type":"ContainerDied","Data":"8cdff764e4d4cc2c7f0c03274281adfa83f88fcc0f6c7dfc5e33eaa5ed9e3fec"}
Jan 09 01:26:21 crc kubenswrapper[4945]: I0109 01:26:21.807901 4945 generic.go:334] "Generic (PLEG): container finished" podID="f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9" containerID="3d17a63b02f85fb22ecb5135e7148513464f2bc443d8804ba07d89e73ab71160" exitCode=0
Jan 09 01:26:21 crc kubenswrapper[4945]: I0109 01:26:21.808055 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvqqh" event={"ID":"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9","Type":"ContainerDied","Data":"3d17a63b02f85fb22ecb5135e7148513464f2bc443d8804ba07d89e73ab71160"}
Jan 09 01:26:21 crc kubenswrapper[4945]: I0109 01:26:21.811679 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlbth" event={"ID":"3287411e-1e2d-40c8-858f-297fd27f2587","Type":"ContainerStarted","Data":"14b27b1d7d752fc4a41c4b2d9a6804a58712a518780619eca9038b1fbe4dc1ba"}
Jan 09 01:26:21 crc kubenswrapper[4945]: I0109 01:26:21.811757 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 09 01:26:21 crc kubenswrapper[4945]: I0109 01:26:21.863134 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rlbth" podStartSLOduration=2.945933488 podStartE2EDuration="5.863109128s" podCreationTimestamp="2026-01-09 01:26:16 +0000 UTC" firstStartedPulling="2026-01-09 01:26:17.753169641 +0000 UTC m=+7848.064328577" lastFinishedPulling="2026-01-09 01:26:20.670345261 +0000 UTC m=+7850.981504217" observedRunningTime="2026-01-09 01:26:21.85268541 +0000 UTC m=+7852.163844346" watchObservedRunningTime="2026-01-09 01:26:21.863109128 +0000 UTC m=+7852.174268074"
Jan 09 01:26:22 crc kubenswrapper[4945]: I0109 01:26:22.825800 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvqqh" event={"ID":"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9","Type":"ContainerStarted","Data":"26ad680ba9fe495fad0097092f62cab91199c374f53ef0aa37cd90905972fddf"}
Jan 09 01:26:22 crc kubenswrapper[4945]: I0109 01:26:22.848799 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jvqqh" podStartSLOduration=2.205112431 podStartE2EDuration="5.84878335s" podCreationTimestamp="2026-01-09 01:26:17 +0000 UTC" firstStartedPulling="2026-01-09 01:26:18.767221615 +0000 UTC m=+7849.078380561" lastFinishedPulling="2026-01-09 01:26:22.410892534 +0000 UTC m=+7852.722051480" observedRunningTime="2026-01-09 01:26:22.844477354 +0000 UTC m=+7853.155636330" watchObservedRunningTime="2026-01-09 01:26:22.84878335 +0000 UTC m=+7853.159942296"
Jan 09 01:26:23 crc kubenswrapper[4945]: I0109 01:26:23.900318 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sjltf"
Jan 09 01:26:23 crc kubenswrapper[4945]: I0109 01:26:23.900733 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sjltf"
Jan 09 01:26:23 crc kubenswrapper[4945]: I0109 01:26:23.960790 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sjltf"
Jan 09 01:26:24 crc kubenswrapper[4945]: I0109 01:26:24.915957 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sjltf"
Jan 09 01:26:25 crc kubenswrapper[4945]: I0109 01:26:25.855478 4945 generic.go:334] "Generic (PLEG): container finished" podID="d2b23121-c02d-4a5a-b40c-8331347f5911" containerID="1a511655e5c1435c6c20df99d300a401ada62329681c7bd8fba0baeca70a9355" exitCode=0
Jan 09 01:26:25 crc kubenswrapper[4945]: I0109 01:26:25.855563 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-rn5sr" event={"ID":"d2b23121-c02d-4a5a-b40c-8331347f5911","Type":"ContainerDied","Data":"1a511655e5c1435c6c20df99d300a401ada62329681c7bd8fba0baeca70a9355"}
Jan 09 01:26:26 crc kubenswrapper[4945]: I0109 01:26:26.001202 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400"
Jan 09 01:26:26 crc kubenswrapper[4945]: E0109 01:26:26.001936 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:26:26 crc kubenswrapper[4945]: I0109 01:26:26.868633 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rlbth"
Jan 09 01:26:26 crc kubenswrapper[4945]: I0109 01:26:26.868908 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rlbth"
Jan 09 01:26:26 crc kubenswrapper[4945]: I0109 01:26:26.942300 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rlbth"
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.331764 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-rn5sr"
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.369037 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-ssh-key-openstack-cell1\") pod \"d2b23121-c02d-4a5a-b40c-8331347f5911\" (UID: \"d2b23121-c02d-4a5a-b40c-8331347f5911\") "
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.369192 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-ceph\") pod \"d2b23121-c02d-4a5a-b40c-8331347f5911\" (UID: \"d2b23121-c02d-4a5a-b40c-8331347f5911\") "
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.369245 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whmp5\" (UniqueName: \"kubernetes.io/projected/d2b23121-c02d-4a5a-b40c-8331347f5911-kube-api-access-whmp5\") pod \"d2b23121-c02d-4a5a-b40c-8331347f5911\" (UID: \"d2b23121-c02d-4a5a-b40c-8331347f5911\") "
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.369401 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-inventory\") pod \"d2b23121-c02d-4a5a-b40c-8331347f5911\" (UID: \"d2b23121-c02d-4a5a-b40c-8331347f5911\") "
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.374899 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-ceph" (OuterVolumeSpecName: "ceph") pod "d2b23121-c02d-4a5a-b40c-8331347f5911" (UID: "d2b23121-c02d-4a5a-b40c-8331347f5911"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.377279 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2b23121-c02d-4a5a-b40c-8331347f5911-kube-api-access-whmp5" (OuterVolumeSpecName: "kube-api-access-whmp5") pod "d2b23121-c02d-4a5a-b40c-8331347f5911" (UID: "d2b23121-c02d-4a5a-b40c-8331347f5911"). InnerVolumeSpecName "kube-api-access-whmp5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.403344 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-inventory" (OuterVolumeSpecName: "inventory") pod "d2b23121-c02d-4a5a-b40c-8331347f5911" (UID: "d2b23121-c02d-4a5a-b40c-8331347f5911"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.406203 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "d2b23121-c02d-4a5a-b40c-8331347f5911" (UID: "d2b23121-c02d-4a5a-b40c-8331347f5911"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.473410 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-inventory\") on node \"crc\" DevicePath \"\""
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.473799 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.473811 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d2b23121-c02d-4a5a-b40c-8331347f5911-ceph\") on node \"crc\" DevicePath \"\""
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.473820 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whmp5\" (UniqueName: \"kubernetes.io/projected/d2b23121-c02d-4a5a-b40c-8331347f5911-kube-api-access-whmp5\") on node \"crc\" DevicePath \"\""
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.507132 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jvqqh"
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.507177 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jvqqh"
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.557180 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jvqqh"
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.876302 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-rn5sr" event={"ID":"d2b23121-c02d-4a5a-b40c-8331347f5911","Type":"ContainerDied","Data":"74cd2e99c8ece83deb48bbc53cc1bd735be6a55c3dbe74cc44f5ae6a934dd9c0"}
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.876367 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74cd2e99c8ece83deb48bbc53cc1bd735be6a55c3dbe74cc44f5ae6a934dd9c0"
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.876703 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-rn5sr"
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.941935 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rlbth"
Jan 09 01:26:27 crc kubenswrapper[4945]: I0109 01:26:27.946956 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jvqqh"
Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.014206 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-gg6v4"]
Jan 09 01:26:28 crc kubenswrapper[4945]: E0109 01:26:28.014653 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b23121-c02d-4a5a-b40c-8331347f5911" containerName="install-os-openstack-openstack-cell1"
Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.014685 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b23121-c02d-4a5a-b40c-8331347f5911" containerName="install-os-openstack-openstack-cell1"
Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.014967 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2b23121-c02d-4a5a-b40c-8331347f5911" containerName="install-os-openstack-openstack-cell1"
Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.016134 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-gg6v4"
Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.023035 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-gg6v4"]
Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.024856 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.025007 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.025194 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9"
Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.025407 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.089293 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-ceph\") pod \"configure-os-openstack-openstack-cell1-gg6v4\" (UID: \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\") " pod="openstack/configure-os-openstack-openstack-cell1-gg6v4"
Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.089557 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzlbf\" (UniqueName: \"kubernetes.io/projected/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-kube-api-access-hzlbf\") pod \"configure-os-openstack-openstack-cell1-gg6v4\" (UID: \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\") " pod="openstack/configure-os-openstack-openstack-cell1-gg6v4"
Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.089613 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-gg6v4\" (UID: \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\") " pod="openstack/configure-os-openstack-openstack-cell1-gg6v4"
\"configure-os-openstack-openstack-cell1-gg6v4\" (UID: \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\") " pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.089634 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-inventory\") pod \"configure-os-openstack-openstack-cell1-gg6v4\" (UID: \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\") " pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.191897 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-gg6v4\" (UID: \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\") " pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.191956 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-inventory\") pod \"configure-os-openstack-openstack-cell1-gg6v4\" (UID: \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\") " pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.192046 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-ceph\") pod \"configure-os-openstack-openstack-cell1-gg6v4\" (UID: \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\") " pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.192194 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzlbf\" (UniqueName: \"kubernetes.io/projected/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-kube-api-access-hzlbf\") pod \"configure-os-openstack-openstack-cell1-gg6v4\" (UID: \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\") " pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.197078 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-ceph\") pod \"configure-os-openstack-openstack-cell1-gg6v4\" (UID: \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\") " pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.197289 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-inventory\") pod \"configure-os-openstack-openstack-cell1-gg6v4\" (UID: \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\") " pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.197350 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-ssh-key-openstack-cell1\") pod \"configure-os-openstack-openstack-cell1-gg6v4\" (UID: \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\") " pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.214138 4945 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzlbf\" (UniqueName: \"kubernetes.io/projected/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-kube-api-access-hzlbf\") pod \"configure-os-openstack-openstack-cell1-gg6v4\" (UID: \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\") " pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.339149 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.849778 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-gg6v4"] Jan 09 01:26:28 crc kubenswrapper[4945]: I0109 01:26:28.885674 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" event={"ID":"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d","Type":"ContainerStarted","Data":"6cbc19ab7e6ffd4890be5cf69a86e2eb5dc7fdc67ceb8967ec85cd473ecd6f9c"} Jan 09 01:26:29 crc kubenswrapper[4945]: I0109 01:26:29.745847 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjltf"] Jan 09 01:26:29 crc kubenswrapper[4945]: I0109 01:26:29.746487 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sjltf" podUID="49aa6475-2f5f-4f2f-a616-231fe89aaab4" containerName="registry-server" containerID="cri-o://a281ffafdde9a7e7022d4b9bb253ab4af1dc03ae35d5bc1e58d8dfd00dae66a6" gracePeriod=2 Jan 09 01:26:29 crc kubenswrapper[4945]: I0109 01:26:29.905904 4945 generic.go:334] "Generic (PLEG): container finished" podID="49aa6475-2f5f-4f2f-a616-231fe89aaab4" containerID="a281ffafdde9a7e7022d4b9bb253ab4af1dc03ae35d5bc1e58d8dfd00dae66a6" exitCode=0 Jan 09 01:26:29 crc kubenswrapper[4945]: I0109 01:26:29.905976 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjltf" event={"ID":"49aa6475-2f5f-4f2f-a616-231fe89aaab4","Type":"ContainerDied","Data":"a281ffafdde9a7e7022d4b9bb253ab4af1dc03ae35d5bc1e58d8dfd00dae66a6"} Jan 09 01:26:29 crc kubenswrapper[4945]: I0109 01:26:29.909396 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" event={"ID":"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d","Type":"ContainerStarted","Data":"e198b6a4cb3fc204a9872233de27c55d2e3da4eb8b51fd93dfbdf745b57c7498"} Jan 09 01:26:29 crc kubenswrapper[4945]: I0109 01:26:29.934013 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" podStartSLOduration=2.759531048 podStartE2EDuration="2.933981425s" podCreationTimestamp="2026-01-09 01:26:27 +0000 UTC" firstStartedPulling="2026-01-09 01:26:28.852159973 +0000 UTC m=+7859.163318919" lastFinishedPulling="2026-01-09 01:26:29.02661035 +0000 UTC m=+7859.337769296" observedRunningTime="2026-01-09 01:26:29.924746276 +0000 UTC m=+7860.235905222" watchObservedRunningTime="2026-01-09 01:26:29.933981425 +0000 UTC m=+7860.245140371" Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.278796 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjltf" Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.361576 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rglp2\" (UniqueName: \"kubernetes.io/projected/49aa6475-2f5f-4f2f-a616-231fe89aaab4-kube-api-access-rglp2\") pod \"49aa6475-2f5f-4f2f-a616-231fe89aaab4\" (UID: \"49aa6475-2f5f-4f2f-a616-231fe89aaab4\") " Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.361691 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49aa6475-2f5f-4f2f-a616-231fe89aaab4-utilities\") pod \"49aa6475-2f5f-4f2f-a616-231fe89aaab4\" (UID: \"49aa6475-2f5f-4f2f-a616-231fe89aaab4\") " Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.361710 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49aa6475-2f5f-4f2f-a616-231fe89aaab4-catalog-content\") pod \"49aa6475-2f5f-4f2f-a616-231fe89aaab4\" (UID: \"49aa6475-2f5f-4f2f-a616-231fe89aaab4\") " Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.362824 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49aa6475-2f5f-4f2f-a616-231fe89aaab4-utilities" (OuterVolumeSpecName: "utilities") pod "49aa6475-2f5f-4f2f-a616-231fe89aaab4" (UID: "49aa6475-2f5f-4f2f-a616-231fe89aaab4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.363678 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49aa6475-2f5f-4f2f-a616-231fe89aaab4-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.366518 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49aa6475-2f5f-4f2f-a616-231fe89aaab4-kube-api-access-rglp2" (OuterVolumeSpecName: "kube-api-access-rglp2") pod "49aa6475-2f5f-4f2f-a616-231fe89aaab4" (UID: "49aa6475-2f5f-4f2f-a616-231fe89aaab4"). InnerVolumeSpecName "kube-api-access-rglp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.387061 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49aa6475-2f5f-4f2f-a616-231fe89aaab4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49aa6475-2f5f-4f2f-a616-231fe89aaab4" (UID: "49aa6475-2f5f-4f2f-a616-231fe89aaab4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.465155 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49aa6475-2f5f-4f2f-a616-231fe89aaab4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.465412 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rglp2\" (UniqueName: \"kubernetes.io/projected/49aa6475-2f5f-4f2f-a616-231fe89aaab4-kube-api-access-rglp2\") on node \"crc\" DevicePath \"\"" Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.739933 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rlbth"] Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.922295 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjltf" event={"ID":"49aa6475-2f5f-4f2f-a616-231fe89aaab4","Type":"ContainerDied","Data":"efab4af5197de9702950aebb4895125318c3ea41c2fe8b542b039ddd0e0a92fe"} Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.922753 4945 scope.go:117] "RemoveContainer" containerID="a281ffafdde9a7e7022d4b9bb253ab4af1dc03ae35d5bc1e58d8dfd00dae66a6" Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.922406 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjltf" Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.922588 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rlbth" podUID="3287411e-1e2d-40c8-858f-297fd27f2587" containerName="registry-server" containerID="cri-o://14b27b1d7d752fc4a41c4b2d9a6804a58712a518780619eca9038b1fbe4dc1ba" gracePeriod=2 Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.969021 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjltf"] Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.979794 4945 scope.go:117] "RemoveContainer" containerID="b9c2a173f84eb95cdd7649ee402d1264d11db4e818f0ff1da2662bce2ba2c57f" Jan 09 01:26:30 crc kubenswrapper[4945]: I0109 01:26:30.981173 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjltf"] Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.001885 4945 scope.go:117] "RemoveContainer" containerID="e97d25bfd544541cf96243a94ff7ea14206b1d05d0d52de355c7159305bc2adc" Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.441917 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rlbth" Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.487134 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3287411e-1e2d-40c8-858f-297fd27f2587-catalog-content\") pod \"3287411e-1e2d-40c8-858f-297fd27f2587\" (UID: \"3287411e-1e2d-40c8-858f-297fd27f2587\") " Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.487495 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zw9r\" (UniqueName: \"kubernetes.io/projected/3287411e-1e2d-40c8-858f-297fd27f2587-kube-api-access-8zw9r\") pod \"3287411e-1e2d-40c8-858f-297fd27f2587\" (UID: \"3287411e-1e2d-40c8-858f-297fd27f2587\") " Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.487563 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3287411e-1e2d-40c8-858f-297fd27f2587-utilities\") pod \"3287411e-1e2d-40c8-858f-297fd27f2587\" (UID: \"3287411e-1e2d-40c8-858f-297fd27f2587\") " Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.488700 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3287411e-1e2d-40c8-858f-297fd27f2587-utilities" (OuterVolumeSpecName: "utilities") pod "3287411e-1e2d-40c8-858f-297fd27f2587" (UID: "3287411e-1e2d-40c8-858f-297fd27f2587"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.489121 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3287411e-1e2d-40c8-858f-297fd27f2587-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.493304 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3287411e-1e2d-40c8-858f-297fd27f2587-kube-api-access-8zw9r" (OuterVolumeSpecName: "kube-api-access-8zw9r") pod "3287411e-1e2d-40c8-858f-297fd27f2587" (UID: "3287411e-1e2d-40c8-858f-297fd27f2587"). InnerVolumeSpecName "kube-api-access-8zw9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.530832 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3287411e-1e2d-40c8-858f-297fd27f2587-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3287411e-1e2d-40c8-858f-297fd27f2587" (UID: "3287411e-1e2d-40c8-858f-297fd27f2587"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.591219 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zw9r\" (UniqueName: \"kubernetes.io/projected/3287411e-1e2d-40c8-858f-297fd27f2587-kube-api-access-8zw9r\") on node \"crc\" DevicePath \"\"" Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.591245 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3287411e-1e2d-40c8-858f-297fd27f2587-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.934287 4945 generic.go:334] "Generic (PLEG): container finished" podID="3287411e-1e2d-40c8-858f-297fd27f2587" containerID="14b27b1d7d752fc4a41c4b2d9a6804a58712a518780619eca9038b1fbe4dc1ba" exitCode=0 Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.934325 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlbth" event={"ID":"3287411e-1e2d-40c8-858f-297fd27f2587","Type":"ContainerDied","Data":"14b27b1d7d752fc4a41c4b2d9a6804a58712a518780619eca9038b1fbe4dc1ba"} Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.934368 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rlbth" event={"ID":"3287411e-1e2d-40c8-858f-297fd27f2587","Type":"ContainerDied","Data":"664b7bf0cf642c11dc5354b39ae45e4707bb6f4ed8be3a0c515a3861967e5ed2"} Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.934389 4945 scope.go:117] "RemoveContainer" containerID="14b27b1d7d752fc4a41c4b2d9a6804a58712a518780619eca9038b1fbe4dc1ba" Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.934392 4945 util.go:48] "No ready sandbox for pod can be found. 
Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.964931 4945 scope.go:117] "RemoveContainer" containerID="8cdff764e4d4cc2c7f0c03274281adfa83f88fcc0f6c7dfc5e33eaa5ed9e3fec"
Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.989918 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rlbth"]
Jan 09 01:26:31 crc kubenswrapper[4945]: I0109 01:26:31.995932 4945 scope.go:117] "RemoveContainer" containerID="35af743655f066fe56b592c0de4b30dd63437d6dcc554ec4491265fd4d897b01"
Jan 09 01:26:32 crc kubenswrapper[4945]: I0109 01:26:32.016945 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49aa6475-2f5f-4f2f-a616-231fe89aaab4" path="/var/lib/kubelet/pods/49aa6475-2f5f-4f2f-a616-231fe89aaab4/volumes"
Jan 09 01:26:32 crc kubenswrapper[4945]: I0109 01:26:32.018686 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rlbth"]
Jan 09 01:26:32 crc kubenswrapper[4945]: I0109 01:26:32.029465 4945 scope.go:117] "RemoveContainer" containerID="14b27b1d7d752fc4a41c4b2d9a6804a58712a518780619eca9038b1fbe4dc1ba"
Jan 09 01:26:32 crc kubenswrapper[4945]: E0109 01:26:32.029889 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14b27b1d7d752fc4a41c4b2d9a6804a58712a518780619eca9038b1fbe4dc1ba\": container with ID starting with 14b27b1d7d752fc4a41c4b2d9a6804a58712a518780619eca9038b1fbe4dc1ba not found: ID does not exist" containerID="14b27b1d7d752fc4a41c4b2d9a6804a58712a518780619eca9038b1fbe4dc1ba"
Jan 09 01:26:32 crc kubenswrapper[4945]: I0109 01:26:32.029944 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14b27b1d7d752fc4a41c4b2d9a6804a58712a518780619eca9038b1fbe4dc1ba"} err="failed to get container status \"14b27b1d7d752fc4a41c4b2d9a6804a58712a518780619eca9038b1fbe4dc1ba\": rpc error: code = NotFound desc = could not find container \"14b27b1d7d752fc4a41c4b2d9a6804a58712a518780619eca9038b1fbe4dc1ba\": container with ID starting with 14b27b1d7d752fc4a41c4b2d9a6804a58712a518780619eca9038b1fbe4dc1ba not found: ID does not exist"
Jan 09 01:26:32 crc kubenswrapper[4945]: I0109 01:26:32.029983 4945 scope.go:117] "RemoveContainer" containerID="8cdff764e4d4cc2c7f0c03274281adfa83f88fcc0f6c7dfc5e33eaa5ed9e3fec"
Jan 09 01:26:32 crc kubenswrapper[4945]: E0109 01:26:32.030392 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cdff764e4d4cc2c7f0c03274281adfa83f88fcc0f6c7dfc5e33eaa5ed9e3fec\": container with ID starting with 8cdff764e4d4cc2c7f0c03274281adfa83f88fcc0f6c7dfc5e33eaa5ed9e3fec not found: ID does not exist" containerID="8cdff764e4d4cc2c7f0c03274281adfa83f88fcc0f6c7dfc5e33eaa5ed9e3fec"
Jan 09 01:26:32 crc kubenswrapper[4945]: I0109 01:26:32.030475 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cdff764e4d4cc2c7f0c03274281adfa83f88fcc0f6c7dfc5e33eaa5ed9e3fec"} err="failed to get container status \"8cdff764e4d4cc2c7f0c03274281adfa83f88fcc0f6c7dfc5e33eaa5ed9e3fec\": rpc error: code = NotFound desc = could not find container \"8cdff764e4d4cc2c7f0c03274281adfa83f88fcc0f6c7dfc5e33eaa5ed9e3fec\": container with ID starting with 8cdff764e4d4cc2c7f0c03274281adfa83f88fcc0f6c7dfc5e33eaa5ed9e3fec not found: ID does not exist"
Jan 09 01:26:32 crc kubenswrapper[4945]: I0109 01:26:32.030533 4945 scope.go:117] "RemoveContainer" containerID="35af743655f066fe56b592c0de4b30dd63437d6dcc554ec4491265fd4d897b01"
Jan 09 01:26:32 crc kubenswrapper[4945]: E0109 01:26:32.031188 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35af743655f066fe56b592c0de4b30dd63437d6dcc554ec4491265fd4d897b01\": container with ID starting with 35af743655f066fe56b592c0de4b30dd63437d6dcc554ec4491265fd4d897b01 not found: ID does not exist" containerID="35af743655f066fe56b592c0de4b30dd63437d6dcc554ec4491265fd4d897b01"
Jan 09 01:26:32 crc kubenswrapper[4945]: I0109 01:26:32.031222 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35af743655f066fe56b592c0de4b30dd63437d6dcc554ec4491265fd4d897b01"} err="failed to get container status \"35af743655f066fe56b592c0de4b30dd63437d6dcc554ec4491265fd4d897b01\": rpc error: code = NotFound desc = could not find container \"35af743655f066fe56b592c0de4b30dd63437d6dcc554ec4491265fd4d897b01\": container with ID starting with 35af743655f066fe56b592c0de4b30dd63437d6dcc554ec4491265fd4d897b01 not found: ID does not exist"
Jan 09 01:26:34 crc kubenswrapper[4945]: I0109 01:26:34.038093 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3287411e-1e2d-40c8-858f-297fd27f2587" path="/var/lib/kubelet/pods/3287411e-1e2d-40c8-858f-297fd27f2587/volumes"
Jan 09 01:26:34 crc kubenswrapper[4945]: I0109 01:26:34.542606 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jvqqh"]
Jan 09 01:26:34 crc kubenswrapper[4945]: I0109 01:26:34.543167 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jvqqh" podUID="f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9" containerName="registry-server" containerID="cri-o://26ad680ba9fe495fad0097092f62cab91199c374f53ef0aa37cd90905972fddf" gracePeriod=2
Jan 09 01:26:34 crc kubenswrapper[4945]: I0109 01:26:34.978968 4945 generic.go:334] "Generic (PLEG): container finished" podID="f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9" containerID="26ad680ba9fe495fad0097092f62cab91199c374f53ef0aa37cd90905972fddf" exitCode=0
Jan 09 01:26:34 crc kubenswrapper[4945]: I0109 01:26:34.979029 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvqqh" event={"ID":"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9","Type":"ContainerDied","Data":"26ad680ba9fe495fad0097092f62cab91199c374f53ef0aa37cd90905972fddf"}
Jan 09 01:26:35 crc kubenswrapper[4945]: I0109 01:26:35.102717 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jvqqh"
Jan 09 01:26:35 crc kubenswrapper[4945]: I0109 01:26:35.174269 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cflj\" (UniqueName: \"kubernetes.io/projected/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-kube-api-access-5cflj\") pod \"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9\" (UID: \"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9\") "
Jan 09 01:26:35 crc kubenswrapper[4945]: I0109 01:26:35.174337 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-catalog-content\") pod \"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9\" (UID: \"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9\") "
Jan 09 01:26:35 crc kubenswrapper[4945]: I0109 01:26:35.174401 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-utilities\") pod \"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9\" (UID: \"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9\") "
Jan 09 01:26:35 crc kubenswrapper[4945]: I0109 01:26:35.175411 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-utilities" (OuterVolumeSpecName: "utilities") pod "f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9" (UID: "f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:26:35 crc kubenswrapper[4945]: I0109 01:26:35.179581 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-kube-api-access-5cflj" (OuterVolumeSpecName: "kube-api-access-5cflj") pod "f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9" (UID: "f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9"). InnerVolumeSpecName "kube-api-access-5cflj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:26:35 crc kubenswrapper[4945]: I0109 01:26:35.232325 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9" (UID: "f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:26:35 crc kubenswrapper[4945]: I0109 01:26:35.277585 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cflj\" (UniqueName: \"kubernetes.io/projected/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-kube-api-access-5cflj\") on node \"crc\" DevicePath \"\"" Jan 09 01:26:35 crc kubenswrapper[4945]: I0109 01:26:35.277635 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 01:26:35 crc kubenswrapper[4945]: I0109 01:26:35.277655 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 01:26:35 crc kubenswrapper[4945]: I0109 01:26:35.997685 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jvqqh" event={"ID":"f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9","Type":"ContainerDied","Data":"7bfe64c2c1a26f785206869773e1686f32704b4f4721d9e9f01e5e9bed28fde6"} Jan 09 01:26:35 crc kubenswrapper[4945]: I0109 01:26:35.997737 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jvqqh" Jan 09 01:26:35 crc kubenswrapper[4945]: I0109 01:26:35.997754 4945 scope.go:117] "RemoveContainer" containerID="26ad680ba9fe495fad0097092f62cab91199c374f53ef0aa37cd90905972fddf" Jan 09 01:26:36 crc kubenswrapper[4945]: I0109 01:26:36.031044 4945 scope.go:117] "RemoveContainer" containerID="3d17a63b02f85fb22ecb5135e7148513464f2bc443d8804ba07d89e73ab71160" Jan 09 01:26:36 crc kubenswrapper[4945]: I0109 01:26:36.041378 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jvqqh"] Jan 09 01:26:36 crc kubenswrapper[4945]: I0109 01:26:36.052340 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jvqqh"] Jan 09 01:26:36 crc kubenswrapper[4945]: I0109 01:26:36.065149 4945 scope.go:117] "RemoveContainer" containerID="f476b3ce3c48148541b3384551f2cb504a6f921749c4cfc0dd10e2083205a4b1" Jan 09 01:26:38 crc kubenswrapper[4945]: I0109 01:26:38.016826 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9" path="/var/lib/kubelet/pods/f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9/volumes" Jan 09 01:26:41 crc kubenswrapper[4945]: I0109 01:26:41.001163 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:26:41 crc kubenswrapper[4945]: E0109 01:26:41.002279 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:26:56 crc kubenswrapper[4945]: I0109 01:26:56.000136 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:26:56 crc kubenswrapper[4945]: E0109 01:26:56.000958 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:27:07 crc kubenswrapper[4945]: I0109 01:27:07.001363 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:27:07 crc kubenswrapper[4945]: E0109 01:27:07.002538 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:27:13 crc kubenswrapper[4945]: I0109 01:27:13.418516 4945 generic.go:334] "Generic (PLEG): container finished" podID="90ecc2d7-7681-462f-b6d8-25eeaaae8e5d" containerID="e198b6a4cb3fc204a9872233de27c55d2e3da4eb8b51fd93dfbdf745b57c7498" exitCode=0 Jan 09 01:27:13 crc kubenswrapper[4945]: I0109 01:27:13.418649 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" event={"ID":"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d","Type":"ContainerDied","Data":"e198b6a4cb3fc204a9872233de27c55d2e3da4eb8b51fd93dfbdf745b57c7498"} Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.015733 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.211572 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzlbf\" (UniqueName: \"kubernetes.io/projected/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-kube-api-access-hzlbf\") pod \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\" (UID: \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\") " Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.212258 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-ssh-key-openstack-cell1\") pod \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\" (UID: \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\") " Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.212353 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-inventory\") pod \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\" (UID: \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\") " Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.212403 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-ceph\") pod \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\" (UID: \"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d\") " Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.219601 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-ceph" (OuterVolumeSpecName: "ceph") pod "90ecc2d7-7681-462f-b6d8-25eeaaae8e5d" (UID: "90ecc2d7-7681-462f-b6d8-25eeaaae8e5d"). InnerVolumeSpecName "ceph". 
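
The recurring back-off 5m0s restarting failed container=machine-config-daemon errors (01:26:41, 01:26:56, 01:27:07, and again below) are kubelet's container restart backoff sitting at its ceiling: by default the delay starts at 10 seconds and doubles on each failed restart up to a 5-minute cap, and the RemoveContainer/CrashLoopBackOff pair repeats each time the sync loop revisits the pod. A sketch of that default schedule; the 10s base and 5m cap are stock kubelet defaults stated from general knowledge, not read out of this log.

    package main

    import (
        "fmt"
        "time"
    )

    // Default kubelet restart backoff: 10s base, doubling per failure,
    // capped at 5m0s -- the "back-off 5m0s" the log keeps printing once
    // machine-config-daemon has failed enough times in a row.
    func main() {
        const base, maxBackoff = 10 * time.Second, 5 * time.Minute
        d := base
        for i := 1; ; i++ {
            fmt.Printf("restart %d: back-off %v\n", i, d)
            if d >= maxBackoff {
                break // stays at 5m0s until the container runs cleanly again
            }
            if d *= 2; d > maxBackoff {
                d = maxBackoff
            }
        }
    }
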
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.223345 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-kube-api-access-hzlbf" (OuterVolumeSpecName: "kube-api-access-hzlbf") pod "90ecc2d7-7681-462f-b6d8-25eeaaae8e5d" (UID: "90ecc2d7-7681-462f-b6d8-25eeaaae8e5d"). InnerVolumeSpecName "kube-api-access-hzlbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.249449 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "90ecc2d7-7681-462f-b6d8-25eeaaae8e5d" (UID: "90ecc2d7-7681-462f-b6d8-25eeaaae8e5d"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.253628 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-inventory" (OuterVolumeSpecName: "inventory") pod "90ecc2d7-7681-462f-b6d8-25eeaaae8e5d" (UID: "90ecc2d7-7681-462f-b6d8-25eeaaae8e5d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.315958 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.316068 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.316088 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.316108 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzlbf\" (UniqueName: \"kubernetes.io/projected/90ecc2d7-7681-462f-b6d8-25eeaaae8e5d-kube-api-access-hzlbf\") on node \"crc\" DevicePath \"\"" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.442876 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" event={"ID":"90ecc2d7-7681-462f-b6d8-25eeaaae8e5d","Type":"ContainerDied","Data":"6cbc19ab7e6ffd4890be5cf69a86e2eb5dc7fdc67ceb8967ec85cd473ecd6f9c"} Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.442941 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cbc19ab7e6ffd4890be5cf69a86e2eb5dc7fdc67ceb8967ec85cd473ecd6f9c" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.443088 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-gg6v4" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.552335 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-rkcjd"] Jan 09 01:27:15 crc kubenswrapper[4945]: E0109 01:27:15.553322 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9" containerName="registry-server" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.553367 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9" containerName="registry-server" Jan 09 01:27:15 crc kubenswrapper[4945]: E0109 01:27:15.553407 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49aa6475-2f5f-4f2f-a616-231fe89aaab4" containerName="registry-server" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.553425 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="49aa6475-2f5f-4f2f-a616-231fe89aaab4" containerName="registry-server" Jan 09 01:27:15 crc kubenswrapper[4945]: E0109 01:27:15.553483 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9" containerName="extract-content" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.553501 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9" containerName="extract-content" Jan 09 01:27:15 crc kubenswrapper[4945]: E0109 01:27:15.553525 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49aa6475-2f5f-4f2f-a616-231fe89aaab4" containerName="extract-content" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.553545 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="49aa6475-2f5f-4f2f-a616-231fe89aaab4" containerName="extract-content" Jan 09 01:27:15 crc kubenswrapper[4945]: E0109 01:27:15.553566 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3287411e-1e2d-40c8-858f-297fd27f2587" containerName="registry-server" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.553584 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3287411e-1e2d-40c8-858f-297fd27f2587" containerName="registry-server" Jan 09 01:27:15 crc kubenswrapper[4945]: E0109 01:27:15.553629 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9" containerName="extract-utilities" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.553646 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9" containerName="extract-utilities" Jan 09 01:27:15 crc kubenswrapper[4945]: E0109 01:27:15.553690 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3287411e-1e2d-40c8-858f-297fd27f2587" containerName="extract-utilities" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.553708 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3287411e-1e2d-40c8-858f-297fd27f2587" containerName="extract-utilities" Jan 09 01:27:15 crc kubenswrapper[4945]: E0109 01:27:15.553744 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ecc2d7-7681-462f-b6d8-25eeaaae8e5d" containerName="configure-os-openstack-openstack-cell1" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.553762 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ecc2d7-7681-462f-b6d8-25eeaaae8e5d" containerName="configure-os-openstack-openstack-cell1" Jan 09 01:27:15 crc kubenswrapper[4945]: E0109 01:27:15.553794 4945 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49aa6475-2f5f-4f2f-a616-231fe89aaab4" containerName="extract-utilities" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.553812 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="49aa6475-2f5f-4f2f-a616-231fe89aaab4" containerName="extract-utilities" Jan 09 01:27:15 crc kubenswrapper[4945]: E0109 01:27:15.553840 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3287411e-1e2d-40c8-858f-297fd27f2587" containerName="extract-content" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.553856 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="3287411e-1e2d-40c8-858f-297fd27f2587" containerName="extract-content" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.554406 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="90ecc2d7-7681-462f-b6d8-25eeaaae8e5d" containerName="configure-os-openstack-openstack-cell1" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.554455 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3fe98c6-19d7-46d4-ab1c-c06aa8dbeba9" containerName="registry-server" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.554490 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="49aa6475-2f5f-4f2f-a616-231fe89aaab4" containerName="registry-server" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.554545 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="3287411e-1e2d-40c8-858f-297fd27f2587" containerName="registry-server" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.556456 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-rkcjd" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.560782 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.561370 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.561667 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.562737 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.597419 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-rkcjd"] Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.725675 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-ceph\") pod \"ssh-known-hosts-openstack-rkcjd\" (UID: \"75d9260e-736f-4062-a544-5dc637a3a7da\") " pod="openstack/ssh-known-hosts-openstack-rkcjd" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.725762 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-rkcjd\" (UID: \"75d9260e-736f-4062-a544-5dc637a3a7da\") " pod="openstack/ssh-known-hosts-openstack-rkcjd" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.725833 4945 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-inventory-0\") pod \"ssh-known-hosts-openstack-rkcjd\" (UID: \"75d9260e-736f-4062-a544-5dc637a3a7da\") " pod="openstack/ssh-known-hosts-openstack-rkcjd" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.725879 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snmmx\" (UniqueName: \"kubernetes.io/projected/75d9260e-736f-4062-a544-5dc637a3a7da-kube-api-access-snmmx\") pod \"ssh-known-hosts-openstack-rkcjd\" (UID: \"75d9260e-736f-4062-a544-5dc637a3a7da\") " pod="openstack/ssh-known-hosts-openstack-rkcjd" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.828107 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-ceph\") pod \"ssh-known-hosts-openstack-rkcjd\" (UID: \"75d9260e-736f-4062-a544-5dc637a3a7da\") " pod="openstack/ssh-known-hosts-openstack-rkcjd" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.828157 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-rkcjd\" (UID: \"75d9260e-736f-4062-a544-5dc637a3a7da\") " pod="openstack/ssh-known-hosts-openstack-rkcjd" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.828204 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-inventory-0\") pod \"ssh-known-hosts-openstack-rkcjd\" (UID: \"75d9260e-736f-4062-a544-5dc637a3a7da\") " pod="openstack/ssh-known-hosts-openstack-rkcjd" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.828234 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snmmx\" (UniqueName: \"kubernetes.io/projected/75d9260e-736f-4062-a544-5dc637a3a7da-kube-api-access-snmmx\") pod \"ssh-known-hosts-openstack-rkcjd\" (UID: \"75d9260e-736f-4062-a544-5dc637a3a7da\") " pod="openstack/ssh-known-hosts-openstack-rkcjd" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.832265 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-inventory-0\") pod \"ssh-known-hosts-openstack-rkcjd\" (UID: \"75d9260e-736f-4062-a544-5dc637a3a7da\") " pod="openstack/ssh-known-hosts-openstack-rkcjd" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.833168 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-ceph\") pod \"ssh-known-hosts-openstack-rkcjd\" (UID: \"75d9260e-736f-4062-a544-5dc637a3a7da\") " pod="openstack/ssh-known-hosts-openstack-rkcjd" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.837508 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-rkcjd\" (UID: \"75d9260e-736f-4062-a544-5dc637a3a7da\") " pod="openstack/ssh-known-hosts-openstack-rkcjd" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.849303 4945 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snmmx\" (UniqueName: \"kubernetes.io/projected/75d9260e-736f-4062-a544-5dc637a3a7da-kube-api-access-snmmx\") pod \"ssh-known-hosts-openstack-rkcjd\" (UID: \"75d9260e-736f-4062-a544-5dc637a3a7da\") " pod="openstack/ssh-known-hosts-openstack-rkcjd" Jan 09 01:27:15 crc kubenswrapper[4945]: I0109 01:27:15.880835 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-rkcjd" Jan 09 01:27:16 crc kubenswrapper[4945]: I0109 01:27:16.302325 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-rkcjd"] Jan 09 01:27:16 crc kubenswrapper[4945]: I0109 01:27:16.454986 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-rkcjd" event={"ID":"75d9260e-736f-4062-a544-5dc637a3a7da","Type":"ContainerStarted","Data":"d5136ca2ce9a2aea13e95d779159178c3e723ade398344e8e4011059a3333780"} Jan 09 01:27:17 crc kubenswrapper[4945]: I0109 01:27:17.464267 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-rkcjd" event={"ID":"75d9260e-736f-4062-a544-5dc637a3a7da","Type":"ContainerStarted","Data":"61dd4bb9795c9517f7a044048437235a7394a46b0c5da04ad6bcf9c2aca89f3d"} Jan 09 01:27:17 crc kubenswrapper[4945]: I0109 01:27:17.483893 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-rkcjd" podStartSLOduration=2.316794742 podStartE2EDuration="2.483875587s" podCreationTimestamp="2026-01-09 01:27:15 +0000 UTC" firstStartedPulling="2026-01-09 01:27:16.308773057 +0000 UTC m=+7906.619931993" lastFinishedPulling="2026-01-09 01:27:16.475853882 +0000 UTC m=+7906.787012838" observedRunningTime="2026-01-09 01:27:17.479516759 +0000 UTC m=+7907.790675705" watchObservedRunningTime="2026-01-09 01:27:17.483875587 +0000 UTC m=+7907.795034533" Jan 09 01:27:21 crc kubenswrapper[4945]: I0109 01:27:21.004707 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:27:21 crc kubenswrapper[4945]: E0109 01:27:21.005573 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:27:25 crc kubenswrapper[4945]: I0109 01:27:25.546129 4945 generic.go:334] "Generic (PLEG): container finished" podID="75d9260e-736f-4062-a544-5dc637a3a7da" containerID="61dd4bb9795c9517f7a044048437235a7394a46b0c5da04ad6bcf9c2aca89f3d" exitCode=0 Jan 09 01:27:25 crc kubenswrapper[4945]: I0109 01:27:25.546277 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-rkcjd" event={"ID":"75d9260e-736f-4062-a544-5dc637a3a7da","Type":"ContainerDied","Data":"61dd4bb9795c9517f7a044048437235a7394a46b0c5da04ad6bcf9c2aca89f3d"} Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.480206 4945 util.go:48] "No ready sandbox for pod can be found. 
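
The pod_startup_latency_tracker entry for ssh-known-hosts-openstack-rkcjd above is internally consistent arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (01:27:17.483875587 - 01:27:15 = 2.483875587s), and podStartSLOduration subtracts the image-pull window measured on the monotonic clock (m=+7906.787012838 - m=+7906.619931993 = 0.167080845s), giving 2.316794742s, exactly the logged value. A quick recomputation:

    package main

    import "fmt"

    // Recompute the startup figures for ssh-known-hosts-openstack-rkcjd
    // from the monotonic (m=+...) readings in the log entry above.
    func main() {
        e2e := 2.483875587                      // watchObservedRunningTime - podCreationTimestamp
        pull := 7906.787012838 - 7906.619931993 // lastFinishedPulling - firstStartedPulling
        // Prints slo=2.316794742s, matching the log (up to float64 rounding).
        fmt.Printf("pull=%.9fs slo=%.9fs\n", pull, e2e-pull)
    }
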
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-rkcjd" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.566125 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-rkcjd" event={"ID":"75d9260e-736f-4062-a544-5dc637a3a7da","Type":"ContainerDied","Data":"d5136ca2ce9a2aea13e95d779159178c3e723ade398344e8e4011059a3333780"} Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.566508 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5136ca2ce9a2aea13e95d779159178c3e723ade398344e8e4011059a3333780" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.566188 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-rkcjd" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.601943 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-ssh-key-openstack-cell1\") pod \"75d9260e-736f-4062-a544-5dc637a3a7da\" (UID: \"75d9260e-736f-4062-a544-5dc637a3a7da\") " Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.602014 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-ceph\") pod \"75d9260e-736f-4062-a544-5dc637a3a7da\" (UID: \"75d9260e-736f-4062-a544-5dc637a3a7da\") " Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.602193 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-inventory-0\") pod \"75d9260e-736f-4062-a544-5dc637a3a7da\" (UID: \"75d9260e-736f-4062-a544-5dc637a3a7da\") " Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.602234 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snmmx\" (UniqueName: \"kubernetes.io/projected/75d9260e-736f-4062-a544-5dc637a3a7da-kube-api-access-snmmx\") pod \"75d9260e-736f-4062-a544-5dc637a3a7da\" (UID: \"75d9260e-736f-4062-a544-5dc637a3a7da\") " Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.607844 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-ceph" (OuterVolumeSpecName: "ceph") pod "75d9260e-736f-4062-a544-5dc637a3a7da" (UID: "75d9260e-736f-4062-a544-5dc637a3a7da"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.610813 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d9260e-736f-4062-a544-5dc637a3a7da-kube-api-access-snmmx" (OuterVolumeSpecName: "kube-api-access-snmmx") pod "75d9260e-736f-4062-a544-5dc637a3a7da" (UID: "75d9260e-736f-4062-a544-5dc637a3a7da"). InnerVolumeSpecName "kube-api-access-snmmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.634361 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-qfkwk"] Jan 09 01:27:27 crc kubenswrapper[4945]: E0109 01:27:27.634916 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d9260e-736f-4062-a544-5dc637a3a7da" containerName="ssh-known-hosts-openstack" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.634935 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d9260e-736f-4062-a544-5dc637a3a7da" containerName="ssh-known-hosts-openstack" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.635209 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d9260e-736f-4062-a544-5dc637a3a7da" containerName="ssh-known-hosts-openstack" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.640930 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-qfkwk" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.655879 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-qfkwk"] Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.661124 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "75d9260e-736f-4062-a544-5dc637a3a7da" (UID: "75d9260e-736f-4062-a544-5dc637a3a7da"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.677084 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "75d9260e-736f-4062-a544-5dc637a3a7da" (UID: "75d9260e-736f-4062-a544-5dc637a3a7da"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.704553 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-qfkwk\" (UID: \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\") " pod="openstack/run-os-openstack-openstack-cell1-qfkwk" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.704649 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6bnl\" (UniqueName: \"kubernetes.io/projected/de4ab205-3743-46f1-8922-cfd9c5e6f54d-kube-api-access-b6bnl\") pod \"run-os-openstack-openstack-cell1-qfkwk\" (UID: \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\") " pod="openstack/run-os-openstack-openstack-cell1-qfkwk" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.704731 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-inventory\") pod \"run-os-openstack-openstack-cell1-qfkwk\" (UID: \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\") " pod="openstack/run-os-openstack-openstack-cell1-qfkwk" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.704791 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-ceph\") pod \"run-os-openstack-openstack-cell1-qfkwk\" (UID: \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\") " pod="openstack/run-os-openstack-openstack-cell1-qfkwk" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.704907 4945 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.704921 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snmmx\" (UniqueName: \"kubernetes.io/projected/75d9260e-736f-4062-a544-5dc637a3a7da-kube-api-access-snmmx\") on node \"crc\" DevicePath \"\"" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.704930 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.704939 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/75d9260e-736f-4062-a544-5dc637a3a7da-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.806293 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-inventory\") pod \"run-os-openstack-openstack-cell1-qfkwk\" (UID: \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\") " pod="openstack/run-os-openstack-openstack-cell1-qfkwk" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.806388 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-ceph\") pod \"run-os-openstack-openstack-cell1-qfkwk\" (UID: \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\") " 
pod="openstack/run-os-openstack-openstack-cell1-qfkwk" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.806470 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-qfkwk\" (UID: \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\") " pod="openstack/run-os-openstack-openstack-cell1-qfkwk" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.806529 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6bnl\" (UniqueName: \"kubernetes.io/projected/de4ab205-3743-46f1-8922-cfd9c5e6f54d-kube-api-access-b6bnl\") pod \"run-os-openstack-openstack-cell1-qfkwk\" (UID: \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\") " pod="openstack/run-os-openstack-openstack-cell1-qfkwk" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.809831 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-ceph\") pod \"run-os-openstack-openstack-cell1-qfkwk\" (UID: \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\") " pod="openstack/run-os-openstack-openstack-cell1-qfkwk" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.810305 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-inventory\") pod \"run-os-openstack-openstack-cell1-qfkwk\" (UID: \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\") " pod="openstack/run-os-openstack-openstack-cell1-qfkwk" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.810650 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-ssh-key-openstack-cell1\") pod \"run-os-openstack-openstack-cell1-qfkwk\" (UID: \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\") " pod="openstack/run-os-openstack-openstack-cell1-qfkwk" Jan 09 01:27:27 crc kubenswrapper[4945]: I0109 01:27:27.822080 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6bnl\" (UniqueName: \"kubernetes.io/projected/de4ab205-3743-46f1-8922-cfd9c5e6f54d-kube-api-access-b6bnl\") pod \"run-os-openstack-openstack-cell1-qfkwk\" (UID: \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\") " pod="openstack/run-os-openstack-openstack-cell1-qfkwk" Jan 09 01:27:28 crc kubenswrapper[4945]: I0109 01:27:28.061485 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-qfkwk" Jan 09 01:27:28 crc kubenswrapper[4945]: I0109 01:27:28.652322 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-qfkwk"] Jan 09 01:27:29 crc kubenswrapper[4945]: I0109 01:27:29.593022 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-qfkwk" event={"ID":"de4ab205-3743-46f1-8922-cfd9c5e6f54d","Type":"ContainerStarted","Data":"709f5d759a7deffd2f34ebce4112d09753d9f3577eda413cd05efd3457a999dc"} Jan 09 01:27:29 crc kubenswrapper[4945]: I0109 01:27:29.593563 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-qfkwk" event={"ID":"de4ab205-3743-46f1-8922-cfd9c5e6f54d","Type":"ContainerStarted","Data":"f32bb54518bca0a6a18d9da8e253dd77a22e4468cc3c40c3aab4b634752772de"} Jan 09 01:27:29 crc kubenswrapper[4945]: I0109 01:27:29.616838 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-qfkwk" podStartSLOduration=2.441148938 podStartE2EDuration="2.616813485s" podCreationTimestamp="2026-01-09 01:27:27 +0000 UTC" firstStartedPulling="2026-01-09 01:27:28.65929349 +0000 UTC m=+7918.970452436" lastFinishedPulling="2026-01-09 01:27:28.834958037 +0000 UTC m=+7919.146116983" observedRunningTime="2026-01-09 01:27:29.609214507 +0000 UTC m=+7919.920373463" watchObservedRunningTime="2026-01-09 01:27:29.616813485 +0000 UTC m=+7919.927972431" Jan 09 01:27:35 crc kubenswrapper[4945]: I0109 01:27:35.000255 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:27:35 crc kubenswrapper[4945]: E0109 01:27:35.001096 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:27:37 crc kubenswrapper[4945]: I0109 01:27:37.705508 4945 generic.go:334] "Generic (PLEG): container finished" podID="de4ab205-3743-46f1-8922-cfd9c5e6f54d" containerID="709f5d759a7deffd2f34ebce4112d09753d9f3577eda413cd05efd3457a999dc" exitCode=0 Jan 09 01:27:37 crc kubenswrapper[4945]: I0109 01:27:37.705569 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-qfkwk" event={"ID":"de4ab205-3743-46f1-8922-cfd9c5e6f54d","Type":"ContainerDied","Data":"709f5d759a7deffd2f34ebce4112d09753d9f3577eda413cd05efd3457a999dc"} Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.140045 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-qfkwk" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.260659 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-inventory\") pod \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\" (UID: \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\") " Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.261052 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6bnl\" (UniqueName: \"kubernetes.io/projected/de4ab205-3743-46f1-8922-cfd9c5e6f54d-kube-api-access-b6bnl\") pod \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\" (UID: \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\") " Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.261132 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-ceph\") pod \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\" (UID: \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\") " Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.261182 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-ssh-key-openstack-cell1\") pod \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\" (UID: \"de4ab205-3743-46f1-8922-cfd9c5e6f54d\") " Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.266617 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de4ab205-3743-46f1-8922-cfd9c5e6f54d-kube-api-access-b6bnl" (OuterVolumeSpecName: "kube-api-access-b6bnl") pod "de4ab205-3743-46f1-8922-cfd9c5e6f54d" (UID: "de4ab205-3743-46f1-8922-cfd9c5e6f54d"). InnerVolumeSpecName "kube-api-access-b6bnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.275116 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-ceph" (OuterVolumeSpecName: "ceph") pod "de4ab205-3743-46f1-8922-cfd9c5e6f54d" (UID: "de4ab205-3743-46f1-8922-cfd9c5e6f54d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.289248 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-inventory" (OuterVolumeSpecName: "inventory") pod "de4ab205-3743-46f1-8922-cfd9c5e6f54d" (UID: "de4ab205-3743-46f1-8922-cfd9c5e6f54d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.310560 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "de4ab205-3743-46f1-8922-cfd9c5e6f54d" (UID: "de4ab205-3743-46f1-8922-cfd9c5e6f54d"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.364172 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6bnl\" (UniqueName: \"kubernetes.io/projected/de4ab205-3743-46f1-8922-cfd9c5e6f54d-kube-api-access-b6bnl\") on node \"crc\" DevicePath \"\"" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.364209 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.364219 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.364228 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de4ab205-3743-46f1-8922-cfd9c5e6f54d-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.727052 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-qfkwk" event={"ID":"de4ab205-3743-46f1-8922-cfd9c5e6f54d","Type":"ContainerDied","Data":"f32bb54518bca0a6a18d9da8e253dd77a22e4468cc3c40c3aab4b634752772de"} Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.727417 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f32bb54518bca0a6a18d9da8e253dd77a22e4468cc3c40c3aab4b634752772de" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.727197 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-qfkwk" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.921179 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-szs94"] Jan 09 01:27:39 crc kubenswrapper[4945]: E0109 01:27:39.921707 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de4ab205-3743-46f1-8922-cfd9c5e6f54d" containerName="run-os-openstack-openstack-cell1" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.921733 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4ab205-3743-46f1-8922-cfd9c5e6f54d" containerName="run-os-openstack-openstack-cell1" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.922016 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="de4ab205-3743-46f1-8922-cfd9c5e6f54d" containerName="run-os-openstack-openstack-cell1" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.922877 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-szs94" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.927367 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.927494 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.927545 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.927511 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9" Jan 09 01:27:39 crc kubenswrapper[4945]: I0109 01:27:39.939907 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-szs94"] Jan 09 01:27:40 crc kubenswrapper[4945]: I0109 01:27:40.081238 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-szs94\" (UID: \"f686f621-ce60-4be5-9671-2a9d2a6c6990\") " pod="openstack/reboot-os-openstack-openstack-cell1-szs94" Jan 09 01:27:40 crc kubenswrapper[4945]: I0109 01:27:40.081351 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-inventory\") pod \"reboot-os-openstack-openstack-cell1-szs94\" (UID: \"f686f621-ce60-4be5-9671-2a9d2a6c6990\") " pod="openstack/reboot-os-openstack-openstack-cell1-szs94" Jan 09 01:27:40 crc kubenswrapper[4945]: I0109 01:27:40.081373 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcwdd\" (UniqueName: \"kubernetes.io/projected/f686f621-ce60-4be5-9671-2a9d2a6c6990-kube-api-access-rcwdd\") pod \"reboot-os-openstack-openstack-cell1-szs94\" (UID: \"f686f621-ce60-4be5-9671-2a9d2a6c6990\") " pod="openstack/reboot-os-openstack-openstack-cell1-szs94" Jan 09 01:27:40 crc kubenswrapper[4945]: I0109 01:27:40.081429 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-ceph\") pod \"reboot-os-openstack-openstack-cell1-szs94\" (UID: \"f686f621-ce60-4be5-9671-2a9d2a6c6990\") " pod="openstack/reboot-os-openstack-openstack-cell1-szs94" Jan 09 01:27:40 crc kubenswrapper[4945]: I0109 01:27:40.183539 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-inventory\") pod \"reboot-os-openstack-openstack-cell1-szs94\" (UID: \"f686f621-ce60-4be5-9671-2a9d2a6c6990\") " pod="openstack/reboot-os-openstack-openstack-cell1-szs94" Jan 09 01:27:40 crc kubenswrapper[4945]: I0109 01:27:40.184812 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcwdd\" (UniqueName: \"kubernetes.io/projected/f686f621-ce60-4be5-9671-2a9d2a6c6990-kube-api-access-rcwdd\") pod \"reboot-os-openstack-openstack-cell1-szs94\" (UID: \"f686f621-ce60-4be5-9671-2a9d2a6c6990\") " pod="openstack/reboot-os-openstack-openstack-cell1-szs94" Jan 09 01:27:40 crc 
kubenswrapper[4945]: I0109 01:27:40.184949 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-ceph\") pod \"reboot-os-openstack-openstack-cell1-szs94\" (UID: \"f686f621-ce60-4be5-9671-2a9d2a6c6990\") " pod="openstack/reboot-os-openstack-openstack-cell1-szs94" Jan 09 01:27:40 crc kubenswrapper[4945]: I0109 01:27:40.185211 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-szs94\" (UID: \"f686f621-ce60-4be5-9671-2a9d2a6c6990\") " pod="openstack/reboot-os-openstack-openstack-cell1-szs94" Jan 09 01:27:40 crc kubenswrapper[4945]: I0109 01:27:40.193454 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-inventory\") pod \"reboot-os-openstack-openstack-cell1-szs94\" (UID: \"f686f621-ce60-4be5-9671-2a9d2a6c6990\") " pod="openstack/reboot-os-openstack-openstack-cell1-szs94" Jan 09 01:27:40 crc kubenswrapper[4945]: I0109 01:27:40.194837 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-ssh-key-openstack-cell1\") pod \"reboot-os-openstack-openstack-cell1-szs94\" (UID: \"f686f621-ce60-4be5-9671-2a9d2a6c6990\") " pod="openstack/reboot-os-openstack-openstack-cell1-szs94" Jan 09 01:27:40 crc kubenswrapper[4945]: I0109 01:27:40.206261 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-ceph\") pod \"reboot-os-openstack-openstack-cell1-szs94\" (UID: \"f686f621-ce60-4be5-9671-2a9d2a6c6990\") " pod="openstack/reboot-os-openstack-openstack-cell1-szs94" Jan 09 01:27:40 crc kubenswrapper[4945]: I0109 01:27:40.212441 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcwdd\" (UniqueName: \"kubernetes.io/projected/f686f621-ce60-4be5-9671-2a9d2a6c6990-kube-api-access-rcwdd\") pod \"reboot-os-openstack-openstack-cell1-szs94\" (UID: \"f686f621-ce60-4be5-9671-2a9d2a6c6990\") " pod="openstack/reboot-os-openstack-openstack-cell1-szs94" Jan 09 01:27:40 crc kubenswrapper[4945]: I0109 01:27:40.245865 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-szs94" Jan 09 01:27:40 crc kubenswrapper[4945]: I0109 01:27:40.832225 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-szs94"] Jan 09 01:27:41 crc kubenswrapper[4945]: I0109 01:27:41.750125 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-szs94" event={"ID":"f686f621-ce60-4be5-9671-2a9d2a6c6990","Type":"ContainerStarted","Data":"3131c061a240d9959e25167c7db6ca5e298bc0ce4c7bfe214fa8eba951b1de53"} Jan 09 01:27:41 crc kubenswrapper[4945]: I0109 01:27:41.750390 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-szs94" event={"ID":"f686f621-ce60-4be5-9671-2a9d2a6c6990","Type":"ContainerStarted","Data":"dc9e2bcbf8b36408baa602b002941835bb7fb155f4c76fd9d53f1f1a0df9e361"} Jan 09 01:27:41 crc kubenswrapper[4945]: I0109 01:27:41.775357 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-szs94" podStartSLOduration=2.59881573 podStartE2EDuration="2.775336288s" podCreationTimestamp="2026-01-09 01:27:39 +0000 UTC" firstStartedPulling="2026-01-09 01:27:40.836530896 +0000 UTC m=+7931.147689842" lastFinishedPulling="2026-01-09 01:27:41.013051464 +0000 UTC m=+7931.324210400" observedRunningTime="2026-01-09 01:27:41.770577831 +0000 UTC m=+7932.081736767" watchObservedRunningTime="2026-01-09 01:27:41.775336288 +0000 UTC m=+7932.086495244" Jan 09 01:27:49 crc kubenswrapper[4945]: I0109 01:27:49.000162 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:27:49 crc kubenswrapper[4945]: E0109 01:27:49.000880 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:27:57 crc kubenswrapper[4945]: I0109 01:27:57.918475 4945 generic.go:334] "Generic (PLEG): container finished" podID="f686f621-ce60-4be5-9671-2a9d2a6c6990" containerID="3131c061a240d9959e25167c7db6ca5e298bc0ce4c7bfe214fa8eba951b1de53" exitCode=0 Jan 09 01:27:57 crc kubenswrapper[4945]: I0109 01:27:57.918584 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-szs94" event={"ID":"f686f621-ce60-4be5-9671-2a9d2a6c6990","Type":"ContainerDied","Data":"3131c061a240d9959e25167c7db6ca5e298bc0ce4c7bfe214fa8eba951b1de53"} Jan 09 01:27:59 crc kubenswrapper[4945]: I0109 01:27:59.510038 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-szs94" Jan 09 01:27:59 crc kubenswrapper[4945]: I0109 01:27:59.560321 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcwdd\" (UniqueName: \"kubernetes.io/projected/f686f621-ce60-4be5-9671-2a9d2a6c6990-kube-api-access-rcwdd\") pod \"f686f621-ce60-4be5-9671-2a9d2a6c6990\" (UID: \"f686f621-ce60-4be5-9671-2a9d2a6c6990\") " Jan 09 01:27:59 crc kubenswrapper[4945]: I0109 01:27:59.560513 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-ssh-key-openstack-cell1\") pod \"f686f621-ce60-4be5-9671-2a9d2a6c6990\" (UID: \"f686f621-ce60-4be5-9671-2a9d2a6c6990\") " Jan 09 01:27:59 crc kubenswrapper[4945]: I0109 01:27:59.560589 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-ceph\") pod \"f686f621-ce60-4be5-9671-2a9d2a6c6990\" (UID: \"f686f621-ce60-4be5-9671-2a9d2a6c6990\") " Jan 09 01:27:59 crc kubenswrapper[4945]: I0109 01:27:59.560821 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-inventory\") pod \"f686f621-ce60-4be5-9671-2a9d2a6c6990\" (UID: \"f686f621-ce60-4be5-9671-2a9d2a6c6990\") " Jan 09 01:27:59 crc kubenswrapper[4945]: I0109 01:27:59.566350 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f686f621-ce60-4be5-9671-2a9d2a6c6990-kube-api-access-rcwdd" (OuterVolumeSpecName: "kube-api-access-rcwdd") pod "f686f621-ce60-4be5-9671-2a9d2a6c6990" (UID: "f686f621-ce60-4be5-9671-2a9d2a6c6990"). InnerVolumeSpecName "kube-api-access-rcwdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:27:59 crc kubenswrapper[4945]: I0109 01:27:59.581465 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-ceph" (OuterVolumeSpecName: "ceph") pod "f686f621-ce60-4be5-9671-2a9d2a6c6990" (UID: "f686f621-ce60-4be5-9671-2a9d2a6c6990"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:27:59 crc kubenswrapper[4945]: I0109 01:27:59.588945 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "f686f621-ce60-4be5-9671-2a9d2a6c6990" (UID: "f686f621-ce60-4be5-9671-2a9d2a6c6990"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:27:59 crc kubenswrapper[4945]: I0109 01:27:59.595856 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-inventory" (OuterVolumeSpecName: "inventory") pod "f686f621-ce60-4be5-9671-2a9d2a6c6990" (UID: "f686f621-ce60-4be5-9671-2a9d2a6c6990"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:27:59 crc kubenswrapper[4945]: I0109 01:27:59.663615 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 01:27:59 crc kubenswrapper[4945]: I0109 01:27:59.663688 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcwdd\" (UniqueName: \"kubernetes.io/projected/f686f621-ce60-4be5-9671-2a9d2a6c6990-kube-api-access-rcwdd\") on node \"crc\" DevicePath \"\"" Jan 09 01:27:59 crc kubenswrapper[4945]: I0109 01:27:59.663703 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 09 01:27:59 crc kubenswrapper[4945]: I0109 01:27:59.663714 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f686f621-ce60-4be5-9671-2a9d2a6c6990-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.026919 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-szs94" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.102335 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-t2rj8"] Jan 09 01:28:00 crc kubenswrapper[4945]: E0109 01:28:00.102710 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f686f621-ce60-4be5-9671-2a9d2a6c6990" containerName="reboot-os-openstack-openstack-cell1" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.102728 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="f686f621-ce60-4be5-9671-2a9d2a6c6990" containerName="reboot-os-openstack-openstack-cell1" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.102928 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="f686f621-ce60-4be5-9671-2a9d2a6c6990" containerName="reboot-os-openstack-openstack-cell1" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.103636 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-szs94" event={"ID":"f686f621-ce60-4be5-9671-2a9d2a6c6990","Type":"ContainerDied","Data":"dc9e2bcbf8b36408baa602b002941835bb7fb155f4c76fd9d53f1f1a0df9e361"} Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.103669 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc9e2bcbf8b36408baa602b002941835bb7fb155f4c76fd9d53f1f1a0df9e361" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.103743 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.108871 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.109502 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.110154 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.110338 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.110801 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-t2rj8"] Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.194291 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-inventory\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.194376 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.194410 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.194426 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smvg4\" (UniqueName: \"kubernetes.io/projected/2f738e96-1625-4837-a484-f513c96dc31c-kube-api-access-smvg4\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.194487 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ceph\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.194541 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: 
\"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.194558 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.194584 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.194616 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.194653 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.194693 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.194736 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.297752 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.297827 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.297897 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.297960 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.298031 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.298094 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-inventory\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.298135 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.298159 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smvg4\" (UniqueName: \"kubernetes.io/projected/2f738e96-1625-4837-a484-f513c96dc31c-kube-api-access-smvg4\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.298180 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.298273 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ceph\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.298321 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.298344 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.304634 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.306289 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.310364 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.311077 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.311515 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ceph\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.312171 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.312338 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.312699 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.319586 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ssh-key-openstack-cell1\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.323013 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-inventory\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.325484 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smvg4\" (UniqueName: \"kubernetes.io/projected/2f738e96-1625-4837-a484-f513c96dc31c-kube-api-access-smvg4\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.325747 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-t2rj8\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.426507 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:00 crc kubenswrapper[4945]: I0109 01:28:00.931587 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-t2rj8"] Jan 09 01:28:01 crc kubenswrapper[4945]: I0109 01:28:01.038240 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" event={"ID":"2f738e96-1625-4837-a484-f513c96dc31c","Type":"ContainerStarted","Data":"5fad3f74ac5632f30b31eee024538947dfb8a25e44d41ab52e60b8c19a0579e7"} Jan 09 01:28:02 crc kubenswrapper[4945]: I0109 01:28:02.051786 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" event={"ID":"2f738e96-1625-4837-a484-f513c96dc31c","Type":"ContainerStarted","Data":"6c049d0bc5fef4b35becea7dee4a67b3cdf24d2cd5bb3736524dda3de2f7dcc0"} Jan 09 01:28:04 crc kubenswrapper[4945]: I0109 01:28:04.000924 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:28:04 crc kubenswrapper[4945]: E0109 01:28:04.001979 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:28:19 crc kubenswrapper[4945]: I0109 01:28:19.000554 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:28:19 crc kubenswrapper[4945]: E0109 01:28:19.001852 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:28:21 crc kubenswrapper[4945]: I0109 01:28:21.258481 4945 generic.go:334] "Generic (PLEG): container finished" podID="2f738e96-1625-4837-a484-f513c96dc31c" containerID="6c049d0bc5fef4b35becea7dee4a67b3cdf24d2cd5bb3736524dda3de2f7dcc0" exitCode=0 Jan 09 01:28:21 crc kubenswrapper[4945]: I0109 01:28:21.258609 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" event={"ID":"2f738e96-1625-4837-a484-f513c96dc31c","Type":"ContainerDied","Data":"6c049d0bc5fef4b35becea7dee4a67b3cdf24d2cd5bb3736524dda3de2f7dcc0"} Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.760447 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.889026 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-sriov-combined-ca-bundle\") pod \"2f738e96-1625-4837-a484-f513c96dc31c\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.889092 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-nova-combined-ca-bundle\") pod \"2f738e96-1625-4837-a484-f513c96dc31c\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.889169 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-dhcp-combined-ca-bundle\") pod \"2f738e96-1625-4837-a484-f513c96dc31c\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.889210 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ssh-key-openstack-cell1\") pod \"2f738e96-1625-4837-a484-f513c96dc31c\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.889261 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smvg4\" (UniqueName: \"kubernetes.io/projected/2f738e96-1625-4837-a484-f513c96dc31c-kube-api-access-smvg4\") pod \"2f738e96-1625-4837-a484-f513c96dc31c\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.889301 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-bootstrap-combined-ca-bundle\") pod \"2f738e96-1625-4837-a484-f513c96dc31c\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.889363 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-libvirt-combined-ca-bundle\") pod \"2f738e96-1625-4837-a484-f513c96dc31c\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.889487 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ceph\") pod \"2f738e96-1625-4837-a484-f513c96dc31c\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.889515 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-telemetry-combined-ca-bundle\") pod \"2f738e96-1625-4837-a484-f513c96dc31c\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.889540 4945 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-inventory\") pod \"2f738e96-1625-4837-a484-f513c96dc31c\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.889579 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-metadata-combined-ca-bundle\") pod \"2f738e96-1625-4837-a484-f513c96dc31c\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.889699 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ovn-combined-ca-bundle\") pod \"2f738e96-1625-4837-a484-f513c96dc31c\" (UID: \"2f738e96-1625-4837-a484-f513c96dc31c\") " Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.895133 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "2f738e96-1625-4837-a484-f513c96dc31c" (UID: "2f738e96-1625-4837-a484-f513c96dc31c"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.896058 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2f738e96-1625-4837-a484-f513c96dc31c" (UID: "2f738e96-1625-4837-a484-f513c96dc31c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.896468 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "2f738e96-1625-4837-a484-f513c96dc31c" (UID: "2f738e96-1625-4837-a484-f513c96dc31c"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.896797 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2f738e96-1625-4837-a484-f513c96dc31c" (UID: "2f738e96-1625-4837-a484-f513c96dc31c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.897174 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2f738e96-1625-4837-a484-f513c96dc31c" (UID: "2f738e96-1625-4837-a484-f513c96dc31c"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.897822 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2f738e96-1625-4837-a484-f513c96dc31c" (UID: "2f738e96-1625-4837-a484-f513c96dc31c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.897975 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ceph" (OuterVolumeSpecName: "ceph") pod "2f738e96-1625-4837-a484-f513c96dc31c" (UID: "2f738e96-1625-4837-a484-f513c96dc31c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.899278 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f738e96-1625-4837-a484-f513c96dc31c-kube-api-access-smvg4" (OuterVolumeSpecName: "kube-api-access-smvg4") pod "2f738e96-1625-4837-a484-f513c96dc31c" (UID: "2f738e96-1625-4837-a484-f513c96dc31c"). InnerVolumeSpecName "kube-api-access-smvg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.902180 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2f738e96-1625-4837-a484-f513c96dc31c" (UID: "2f738e96-1625-4837-a484-f513c96dc31c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.903926 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2f738e96-1625-4837-a484-f513c96dc31c" (UID: "2f738e96-1625-4837-a484-f513c96dc31c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.929755 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "2f738e96-1625-4837-a484-f513c96dc31c" (UID: "2f738e96-1625-4837-a484-f513c96dc31c"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.936235 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-inventory" (OuterVolumeSpecName: "inventory") pod "2f738e96-1625-4837-a484-f513c96dc31c" (UID: "2f738e96-1625-4837-a484-f513c96dc31c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.992024 4945 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.992059 4945 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.992069 4945 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.992078 4945 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.992087 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.992096 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smvg4\" (UniqueName: \"kubernetes.io/projected/2f738e96-1625-4837-a484-f513c96dc31c-kube-api-access-smvg4\") on node \"crc\" DevicePath \"\"" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.992106 4945 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.992114 4945 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.992123 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.992131 4945 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.992139 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 01:28:22 crc kubenswrapper[4945]: I0109 01:28:22.992147 4945 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f738e96-1625-4837-a484-f513c96dc31c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.280099 4945 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" event={"ID":"2f738e96-1625-4837-a484-f513c96dc31c","Type":"ContainerDied","Data":"5fad3f74ac5632f30b31eee024538947dfb8a25e44d41ab52e60b8c19a0579e7"} Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.280406 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fad3f74ac5632f30b31eee024538947dfb8a25e44d41ab52e60b8c19a0579e7" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.280172 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-t2rj8" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.387303 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-n7l5w"] Jan 09 01:28:23 crc kubenswrapper[4945]: E0109 01:28:23.387814 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f738e96-1625-4837-a484-f513c96dc31c" containerName="install-certs-openstack-openstack-cell1" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.387830 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f738e96-1625-4837-a484-f513c96dc31c" containerName="install-certs-openstack-openstack-cell1" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.388039 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f738e96-1625-4837-a484-f513c96dc31c" containerName="install-certs-openstack-openstack-cell1" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.388838 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.398602 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-n7l5w"] Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.399550 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.399763 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.399892 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.400017 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.503608 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frqlz\" (UniqueName: \"kubernetes.io/projected/a1d8b928-7aa9-475e-9da7-152c2c9590a9-kube-api-access-frqlz\") pod \"ceph-client-openstack-openstack-cell1-n7l5w\" (UID: \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\") " pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.503729 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-ssh-key-openstack-cell1\") pod \"ceph-client-openstack-openstack-cell1-n7l5w\" (UID: \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\") " pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 
01:28:23.503782 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-inventory\") pod \"ceph-client-openstack-openstack-cell1-n7l5w\" (UID: \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\") " pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.503881 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-ceph\") pod \"ceph-client-openstack-openstack-cell1-n7l5w\" (UID: \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\") " pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.605447 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-ssh-key-openstack-cell1\") pod \"ceph-client-openstack-openstack-cell1-n7l5w\" (UID: \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\") " pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.605530 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-inventory\") pod \"ceph-client-openstack-openstack-cell1-n7l5w\" (UID: \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\") " pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.605610 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-ceph\") pod \"ceph-client-openstack-openstack-cell1-n7l5w\" (UID: \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\") " pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.605649 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frqlz\" (UniqueName: \"kubernetes.io/projected/a1d8b928-7aa9-475e-9da7-152c2c9590a9-kube-api-access-frqlz\") pod \"ceph-client-openstack-openstack-cell1-n7l5w\" (UID: \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\") " pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.613682 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-ceph\") pod \"ceph-client-openstack-openstack-cell1-n7l5w\" (UID: \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\") " pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.614259 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-ssh-key-openstack-cell1\") pod \"ceph-client-openstack-openstack-cell1-n7l5w\" (UID: \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\") " pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.622809 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-inventory\") pod 
\"ceph-client-openstack-openstack-cell1-n7l5w\" (UID: \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\") " pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.632589 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frqlz\" (UniqueName: \"kubernetes.io/projected/a1d8b928-7aa9-475e-9da7-152c2c9590a9-kube-api-access-frqlz\") pod \"ceph-client-openstack-openstack-cell1-n7l5w\" (UID: \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\") " pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" Jan 09 01:28:23 crc kubenswrapper[4945]: I0109 01:28:23.716657 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" Jan 09 01:28:24 crc kubenswrapper[4945]: I0109 01:28:24.335448 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-n7l5w"] Jan 09 01:28:25 crc kubenswrapper[4945]: I0109 01:28:25.298193 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" event={"ID":"a1d8b928-7aa9-475e-9da7-152c2c9590a9","Type":"ContainerStarted","Data":"00e284eacbf2f952dd89ca532507794c018918d01545af676e9db16d6b64852a"} Jan 09 01:28:25 crc kubenswrapper[4945]: I0109 01:28:25.298700 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" event={"ID":"a1d8b928-7aa9-475e-9da7-152c2c9590a9","Type":"ContainerStarted","Data":"032dbb3653403ec2fe7ba2f435b6bb398823d5624d5bb9f878f616bc2f32b42a"} Jan 09 01:28:25 crc kubenswrapper[4945]: I0109 01:28:25.320490 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" podStartSLOduration=2.16792407 podStartE2EDuration="2.320470965s" podCreationTimestamp="2026-01-09 01:28:23 +0000 UTC" firstStartedPulling="2026-01-09 01:28:24.342332679 +0000 UTC m=+7974.653491635" lastFinishedPulling="2026-01-09 01:28:24.494879584 +0000 UTC m=+7974.806038530" observedRunningTime="2026-01-09 01:28:25.313937314 +0000 UTC m=+7975.625096260" watchObservedRunningTime="2026-01-09 01:28:25.320470965 +0000 UTC m=+7975.631629911" Jan 09 01:28:30 crc kubenswrapper[4945]: I0109 01:28:30.348277 4945 generic.go:334] "Generic (PLEG): container finished" podID="a1d8b928-7aa9-475e-9da7-152c2c9590a9" containerID="00e284eacbf2f952dd89ca532507794c018918d01545af676e9db16d6b64852a" exitCode=0 Jan 09 01:28:30 crc kubenswrapper[4945]: I0109 01:28:30.348370 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" event={"ID":"a1d8b928-7aa9-475e-9da7-152c2c9590a9","Type":"ContainerDied","Data":"00e284eacbf2f952dd89ca532507794c018918d01545af676e9db16d6b64852a"} Jan 09 01:28:31 crc kubenswrapper[4945]: I0109 01:28:31.778963 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" Jan 09 01:28:31 crc kubenswrapper[4945]: I0109 01:28:31.884786 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-inventory\") pod \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\" (UID: \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\") " Jan 09 01:28:31 crc kubenswrapper[4945]: I0109 01:28:31.884833 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-ceph\") pod \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\" (UID: \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\") " Jan 09 01:28:31 crc kubenswrapper[4945]: I0109 01:28:31.884887 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-ssh-key-openstack-cell1\") pod \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\" (UID: \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\") " Jan 09 01:28:31 crc kubenswrapper[4945]: I0109 01:28:31.884936 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frqlz\" (UniqueName: \"kubernetes.io/projected/a1d8b928-7aa9-475e-9da7-152c2c9590a9-kube-api-access-frqlz\") pod \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\" (UID: \"a1d8b928-7aa9-475e-9da7-152c2c9590a9\") " Jan 09 01:28:31 crc kubenswrapper[4945]: I0109 01:28:31.890748 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1d8b928-7aa9-475e-9da7-152c2c9590a9-kube-api-access-frqlz" (OuterVolumeSpecName: "kube-api-access-frqlz") pod "a1d8b928-7aa9-475e-9da7-152c2c9590a9" (UID: "a1d8b928-7aa9-475e-9da7-152c2c9590a9"). InnerVolumeSpecName "kube-api-access-frqlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:28:31 crc kubenswrapper[4945]: I0109 01:28:31.891045 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-ceph" (OuterVolumeSpecName: "ceph") pod "a1d8b928-7aa9-475e-9da7-152c2c9590a9" (UID: "a1d8b928-7aa9-475e-9da7-152c2c9590a9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:28:31 crc kubenswrapper[4945]: I0109 01:28:31.918599 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-inventory" (OuterVolumeSpecName: "inventory") pod "a1d8b928-7aa9-475e-9da7-152c2c9590a9" (UID: "a1d8b928-7aa9-475e-9da7-152c2c9590a9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:28:31 crc kubenswrapper[4945]: I0109 01:28:31.920561 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "a1d8b928-7aa9-475e-9da7-152c2c9590a9" (UID: "a1d8b928-7aa9-475e-9da7-152c2c9590a9"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:28:31 crc kubenswrapper[4945]: I0109 01:28:31.987761 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 01:28:31 crc kubenswrapper[4945]: I0109 01:28:31.987799 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 01:28:31 crc kubenswrapper[4945]: I0109 01:28:31.987816 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/a1d8b928-7aa9-475e-9da7-152c2c9590a9-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 09 01:28:31 crc kubenswrapper[4945]: I0109 01:28:31.987831 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frqlz\" (UniqueName: \"kubernetes.io/projected/a1d8b928-7aa9-475e-9da7-152c2c9590a9-kube-api-access-frqlz\") on node \"crc\" DevicePath \"\"" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.383180 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" event={"ID":"a1d8b928-7aa9-475e-9da7-152c2c9590a9","Type":"ContainerDied","Data":"032dbb3653403ec2fe7ba2f435b6bb398823d5624d5bb9f878f616bc2f32b42a"} Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.383226 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="032dbb3653403ec2fe7ba2f435b6bb398823d5624d5bb9f878f616bc2f32b42a" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.383299 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-n7l5w" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.465346 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-d5mgp"] Jan 09 01:28:32 crc kubenswrapper[4945]: E0109 01:28:32.465972 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d8b928-7aa9-475e-9da7-152c2c9590a9" containerName="ceph-client-openstack-openstack-cell1" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.466015 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d8b928-7aa9-475e-9da7-152c2c9590a9" containerName="ceph-client-openstack-openstack-cell1" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.466316 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1d8b928-7aa9-475e-9da7-152c2c9590a9" containerName="ceph-client-openstack-openstack-cell1" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.467716 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.470901 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.471613 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.471727 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.471975 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.481571 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.484784 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-d5mgp"] Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.601724 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ceph\") pod \"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.601778 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbfgl\" (UniqueName: \"kubernetes.io/projected/56c39c1b-9a6d-43c7-8d83-3d6191abf210-kube-api-access-hbfgl\") pod \"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.601812 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.601947 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-inventory\") pod \"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.602091 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.602134 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ovn-combined-ca-bundle\") pod 
\"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.704788 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.704876 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.705012 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ceph\") pod \"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.705065 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbfgl\" (UniqueName: \"kubernetes.io/projected/56c39c1b-9a6d-43c7-8d83-3d6191abf210-kube-api-access-hbfgl\") pod \"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.705095 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.705153 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-inventory\") pod \"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.706053 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.709723 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.710843 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" 
(UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ceph\") pod \"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.717482 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-inventory\") pod \"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.718183 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ssh-key-openstack-cell1\") pod \"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.725347 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbfgl\" (UniqueName: \"kubernetes.io/projected/56c39c1b-9a6d-43c7-8d83-3d6191abf210-kube-api-access-hbfgl\") pod \"ovn-openstack-openstack-cell1-d5mgp\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") " pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:32 crc kubenswrapper[4945]: I0109 01:28:32.790277 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-d5mgp" Jan 09 01:28:33 crc kubenswrapper[4945]: I0109 01:28:33.303601 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-d5mgp"] Jan 09 01:28:33 crc kubenswrapper[4945]: I0109 01:28:33.394789 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-d5mgp" event={"ID":"56c39c1b-9a6d-43c7-8d83-3d6191abf210","Type":"ContainerStarted","Data":"48e2916b806c45bdfbf671d33b51ecf68a1efa1685347a1b2271575424fec19d"} Jan 09 01:28:34 crc kubenswrapper[4945]: I0109 01:28:34.000733 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:28:34 crc kubenswrapper[4945]: E0109 01:28:34.001486 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:28:34 crc kubenswrapper[4945]: I0109 01:28:34.425121 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-d5mgp" event={"ID":"56c39c1b-9a6d-43c7-8d83-3d6191abf210","Type":"ContainerStarted","Data":"0e0446aaa3d7ac9dc4f4077d9263ab03f348e6b2711db867f82c988fa9672115"} Jan 09 01:28:34 crc kubenswrapper[4945]: I0109 01:28:34.460950 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-d5mgp" podStartSLOduration=2.305003811 podStartE2EDuration="2.46092147s" podCreationTimestamp="2026-01-09 01:28:32 +0000 UTC" firstStartedPulling="2026-01-09 01:28:33.295442459 +0000 UTC m=+7983.606601415" lastFinishedPulling="2026-01-09 01:28:33.451360128 +0000 UTC 
m=+7983.762519074" observedRunningTime="2026-01-09 01:28:34.453513897 +0000 UTC m=+7984.764672853" watchObservedRunningTime="2026-01-09 01:28:34.46092147 +0000 UTC m=+7984.772080456" Jan 09 01:28:47 crc kubenswrapper[4945]: I0109 01:28:47.001490 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:28:47 crc kubenswrapper[4945]: E0109 01:28:47.002503 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:28:58 crc kubenswrapper[4945]: I0109 01:28:58.000686 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:28:58 crc kubenswrapper[4945]: E0109 01:28:58.001450 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:29:12 crc kubenswrapper[4945]: I0109 01:29:12.004094 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:29:12 crc kubenswrapper[4945]: E0109 01:29:12.004904 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:29:25 crc kubenswrapper[4945]: I0109 01:29:25.000814 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:29:25 crc kubenswrapper[4945]: E0109 01:29:25.001543 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:29:36 crc kubenswrapper[4945]: I0109 01:29:36.001837 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:29:36 crc kubenswrapper[4945]: E0109 01:29:36.002673 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 
Jan 09 01:29:38 crc kubenswrapper[4945]: I0109 01:29:38.127557 4945 generic.go:334] "Generic (PLEG): container finished" podID="56c39c1b-9a6d-43c7-8d83-3d6191abf210" containerID="0e0446aaa3d7ac9dc4f4077d9263ab03f348e6b2711db867f82c988fa9672115" exitCode=0
Jan 09 01:29:38 crc kubenswrapper[4945]: I0109 01:29:38.127640 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-d5mgp" event={"ID":"56c39c1b-9a6d-43c7-8d83-3d6191abf210","Type":"ContainerDied","Data":"0e0446aaa3d7ac9dc4f4077d9263ab03f348e6b2711db867f82c988fa9672115"}
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.637902 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-d5mgp"
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.809511 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ovncontroller-config-0\") pod \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") "
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.809622 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ssh-key-openstack-cell1\") pod \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") "
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.809722 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-inventory\") pod \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") "
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.809809 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ceph\") pod \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") "
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.809855 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbfgl\" (UniqueName: \"kubernetes.io/projected/56c39c1b-9a6d-43c7-8d83-3d6191abf210-kube-api-access-hbfgl\") pod \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") "
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.809905 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ovn-combined-ca-bundle\") pod \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\" (UID: \"56c39c1b-9a6d-43c7-8d83-3d6191abf210\") "
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.815600 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c39c1b-9a6d-43c7-8d83-3d6191abf210-kube-api-access-hbfgl" (OuterVolumeSpecName: "kube-api-access-hbfgl") pod "56c39c1b-9a6d-43c7-8d83-3d6191abf210" (UID: "56c39c1b-9a6d-43c7-8d83-3d6191abf210"). InnerVolumeSpecName "kube-api-access-hbfgl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.835131 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "56c39c1b-9a6d-43c7-8d83-3d6191abf210" (UID: "56c39c1b-9a6d-43c7-8d83-3d6191abf210"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.835202 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ceph" (OuterVolumeSpecName: "ceph") pod "56c39c1b-9a6d-43c7-8d83-3d6191abf210" (UID: "56c39c1b-9a6d-43c7-8d83-3d6191abf210"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.841165 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "56c39c1b-9a6d-43c7-8d83-3d6191abf210" (UID: "56c39c1b-9a6d-43c7-8d83-3d6191abf210"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.843353 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-inventory" (OuterVolumeSpecName: "inventory") pod "56c39c1b-9a6d-43c7-8d83-3d6191abf210" (UID: "56c39c1b-9a6d-43c7-8d83-3d6191abf210"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.866066 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "56c39c1b-9a6d-43c7-8d83-3d6191abf210" (UID: "56c39c1b-9a6d-43c7-8d83-3d6191abf210"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.912386 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.912412 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-inventory\") on node \"crc\" DevicePath \"\""
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.912421 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ceph\") on node \"crc\" DevicePath \"\""
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.912432 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbfgl\" (UniqueName: \"kubernetes.io/projected/56c39c1b-9a6d-43c7-8d83-3d6191abf210-kube-api-access-hbfgl\") on node \"crc\" DevicePath \"\""
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.912440 4945 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 01:29:39 crc kubenswrapper[4945]: I0109 01:29:39.912450 4945 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/56c39c1b-9a6d-43c7-8d83-3d6191abf210-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.147327 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-d5mgp" event={"ID":"56c39c1b-9a6d-43c7-8d83-3d6191abf210","Type":"ContainerDied","Data":"48e2916b806c45bdfbf671d33b51ecf68a1efa1685347a1b2271575424fec19d"}
Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.147385 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48e2916b806c45bdfbf671d33b51ecf68a1efa1685347a1b2271575424fec19d"
Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.147471 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-d5mgp"
Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.270657 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-f9588"]
Jan 09 01:29:40 crc kubenswrapper[4945]: E0109 01:29:40.271203 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c39c1b-9a6d-43c7-8d83-3d6191abf210" containerName="ovn-openstack-openstack-cell1"
Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.271220 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c39c1b-9a6d-43c7-8d83-3d6191abf210" containerName="ovn-openstack-openstack-cell1"
Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.271482 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c39c1b-9a6d-43c7-8d83-3d6191abf210" containerName="ovn-openstack-openstack-cell1"
Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.288313 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-f9588"]
Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.288467 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588"
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.325290 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.325457 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.325528 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.325722 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.325737 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.325882 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.434203 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.434302 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.434381 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.434417 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.434491 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.434542 4945 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.434589 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7bbj\" (UniqueName: \"kubernetes.io/projected/386d3c0c-3552-4efb-8581-1a39c6f992dd-kube-api-access-z7bbj\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.536608 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.536667 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.536726 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.536768 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.536811 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7bbj\" (UniqueName: \"kubernetes.io/projected/386d3c0c-3552-4efb-8581-1a39c6f992dd-kube-api-access-z7bbj\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.536919 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 
01:29:40.536958 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.540661 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.540822 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.541952 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.542734 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-ssh-key-openstack-cell1\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.543348 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.555072 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7bbj\" (UniqueName: \"kubernetes.io/projected/386d3c0c-3552-4efb-8581-1a39c6f992dd-kube-api-access-z7bbj\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.556889 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-f9588\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:40 crc kubenswrapper[4945]: I0109 01:29:40.642312 4945 
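Each "MountVolume.SetUp succeeded" for a secret volume above boils down to materializing the Secret's keys as files in the pod's volume directory. A sketch of that idea under stated assumptions (illustrative temp path and data; the real kubelet writes atomically to a tmpfs under /var/lib/kubelet/pods/<UID>/volumes, which this does not reproduce):

// secretmount.go - sketch of secret-volume SetUp as per-key file writes.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func setUpSecretVolume(dir string, data map[string][]byte) error {
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	for key, val := range data {
		// 0600: secret material should not be world-readable.
		if err := os.WriteFile(filepath.Join(dir, key), val, 0o600); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	dir, _ := os.MkdirTemp("", "inventory-volume")
	defer os.RemoveAll(dir)
	err := setUpSecretVolume(dir, map[string][]byte{"inventory": []byte("[all]\nedpm-compute-0\n")})
	fmt.Println("MountVolume.SetUp succeeded:", err == nil, "->", dir)
}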
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:29:41 crc kubenswrapper[4945]: I0109 01:29:41.230951 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-f9588"] Jan 09 01:29:42 crc kubenswrapper[4945]: I0109 01:29:42.176958 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" event={"ID":"386d3c0c-3552-4efb-8581-1a39c6f992dd","Type":"ContainerStarted","Data":"38741bcf3802da5cd47722126f056b0ad7ef368589256f930e300584f597360f"} Jan 09 01:29:42 crc kubenswrapper[4945]: I0109 01:29:42.177293 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" event={"ID":"386d3c0c-3552-4efb-8581-1a39c6f992dd","Type":"ContainerStarted","Data":"9a96da226be8b8eb660ce443a4ac11d2c17bc0669333c062ccc839fdd69f953b"} Jan 09 01:29:50 crc kubenswrapper[4945]: I0109 01:29:50.007379 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:29:50 crc kubenswrapper[4945]: E0109 01:29:50.008566 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:30:00 crc kubenswrapper[4945]: I0109 01:30:00.156372 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" podStartSLOduration=19.959260296 podStartE2EDuration="20.156341883s" podCreationTimestamp="2026-01-09 01:29:40 +0000 UTC" firstStartedPulling="2026-01-09 01:29:41.236677275 +0000 UTC m=+8051.547836221" lastFinishedPulling="2026-01-09 01:29:41.433758862 +0000 UTC m=+8051.744917808" observedRunningTime="2026-01-09 01:29:42.206722329 +0000 UTC m=+8052.517881275" watchObservedRunningTime="2026-01-09 01:30:00.156341883 +0000 UTC m=+8070.467500849" Jan 09 01:30:00 crc kubenswrapper[4945]: I0109 01:30:00.159597 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs"] Jan 09 01:30:00 crc kubenswrapper[4945]: I0109 01:30:00.161481 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" Jan 09 01:30:00 crc kubenswrapper[4945]: I0109 01:30:00.163914 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 01:30:00 crc kubenswrapper[4945]: I0109 01:30:00.164238 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 01:30:00 crc kubenswrapper[4945]: I0109 01:30:00.169912 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs"] Jan 09 01:30:00 crc kubenswrapper[4945]: I0109 01:30:00.268276 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ae36acc-1683-4dbf-b765-b3582d7ac626-config-volume\") pod \"collect-profiles-29465370-xw9cs\" (UID: \"6ae36acc-1683-4dbf-b765-b3582d7ac626\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" Jan 09 01:30:00 crc kubenswrapper[4945]: I0109 01:30:00.268703 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d759m\" (UniqueName: \"kubernetes.io/projected/6ae36acc-1683-4dbf-b765-b3582d7ac626-kube-api-access-d759m\") pod \"collect-profiles-29465370-xw9cs\" (UID: \"6ae36acc-1683-4dbf-b765-b3582d7ac626\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" Jan 09 01:30:00 crc kubenswrapper[4945]: I0109 01:30:00.268786 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ae36acc-1683-4dbf-b765-b3582d7ac626-secret-volume\") pod \"collect-profiles-29465370-xw9cs\" (UID: \"6ae36acc-1683-4dbf-b765-b3582d7ac626\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" Jan 09 01:30:00 crc kubenswrapper[4945]: I0109 01:30:00.371037 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ae36acc-1683-4dbf-b765-b3582d7ac626-config-volume\") pod \"collect-profiles-29465370-xw9cs\" (UID: \"6ae36acc-1683-4dbf-b765-b3582d7ac626\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" Jan 09 01:30:00 crc kubenswrapper[4945]: I0109 01:30:00.371104 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d759m\" (UniqueName: \"kubernetes.io/projected/6ae36acc-1683-4dbf-b765-b3582d7ac626-kube-api-access-d759m\") pod \"collect-profiles-29465370-xw9cs\" (UID: \"6ae36acc-1683-4dbf-b765-b3582d7ac626\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" Jan 09 01:30:00 crc kubenswrapper[4945]: I0109 01:30:00.371149 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ae36acc-1683-4dbf-b765-b3582d7ac626-secret-volume\") pod \"collect-profiles-29465370-xw9cs\" (UID: \"6ae36acc-1683-4dbf-b765-b3582d7ac626\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" Jan 09 01:30:00 crc kubenswrapper[4945]: I0109 01:30:00.373327 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ae36acc-1683-4dbf-b765-b3582d7ac626-config-volume\") pod 
\"collect-profiles-29465370-xw9cs\" (UID: \"6ae36acc-1683-4dbf-b765-b3582d7ac626\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" Jan 09 01:30:00 crc kubenswrapper[4945]: I0109 01:30:00.385692 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ae36acc-1683-4dbf-b765-b3582d7ac626-secret-volume\") pod \"collect-profiles-29465370-xw9cs\" (UID: \"6ae36acc-1683-4dbf-b765-b3582d7ac626\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" Jan 09 01:30:00 crc kubenswrapper[4945]: I0109 01:30:00.388595 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d759m\" (UniqueName: \"kubernetes.io/projected/6ae36acc-1683-4dbf-b765-b3582d7ac626-kube-api-access-d759m\") pod \"collect-profiles-29465370-xw9cs\" (UID: \"6ae36acc-1683-4dbf-b765-b3582d7ac626\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" Jan 09 01:30:00 crc kubenswrapper[4945]: I0109 01:30:00.531305 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" Jan 09 01:30:01 crc kubenswrapper[4945]: I0109 01:30:01.016381 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs"] Jan 09 01:30:01 crc kubenswrapper[4945]: I0109 01:30:01.383168 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" event={"ID":"6ae36acc-1683-4dbf-b765-b3582d7ac626","Type":"ContainerStarted","Data":"2ca85450a05abf61b038e4f059e931e7eb64cec70f7878b0df839e9f1814448c"} Jan 09 01:30:01 crc kubenswrapper[4945]: I0109 01:30:01.384383 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" event={"ID":"6ae36acc-1683-4dbf-b765-b3582d7ac626","Type":"ContainerStarted","Data":"b90d3a14eb867840e999ee47d850dacc9af3424fca2016ac282f5acd4dd79fbd"} Jan 09 01:30:01 crc kubenswrapper[4945]: I0109 01:30:01.404387 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" podStartSLOduration=1.404367997 podStartE2EDuration="1.404367997s" podCreationTimestamp="2026-01-09 01:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 01:30:01.398365959 +0000 UTC m=+8071.709524915" watchObservedRunningTime="2026-01-09 01:30:01.404367997 +0000 UTC m=+8071.715526943" Jan 09 01:30:02 crc kubenswrapper[4945]: I0109 01:30:02.396765 4945 generic.go:334] "Generic (PLEG): container finished" podID="6ae36acc-1683-4dbf-b765-b3582d7ac626" containerID="2ca85450a05abf61b038e4f059e931e7eb64cec70f7878b0df839e9f1814448c" exitCode=0 Jan 09 01:30:02 crc kubenswrapper[4945]: I0109 01:30:02.396856 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" event={"ID":"6ae36acc-1683-4dbf-b765-b3582d7ac626","Type":"ContainerDied","Data":"2ca85450a05abf61b038e4f059e931e7eb64cec70f7878b0df839e9f1814448c"} Jan 09 01:30:03 crc kubenswrapper[4945]: I0109 01:30:03.759912 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" Jan 09 01:30:03 crc kubenswrapper[4945]: I0109 01:30:03.838441 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ae36acc-1683-4dbf-b765-b3582d7ac626-secret-volume\") pod \"6ae36acc-1683-4dbf-b765-b3582d7ac626\" (UID: \"6ae36acc-1683-4dbf-b765-b3582d7ac626\") " Jan 09 01:30:03 crc kubenswrapper[4945]: I0109 01:30:03.838604 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d759m\" (UniqueName: \"kubernetes.io/projected/6ae36acc-1683-4dbf-b765-b3582d7ac626-kube-api-access-d759m\") pod \"6ae36acc-1683-4dbf-b765-b3582d7ac626\" (UID: \"6ae36acc-1683-4dbf-b765-b3582d7ac626\") " Jan 09 01:30:03 crc kubenswrapper[4945]: I0109 01:30:03.838661 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ae36acc-1683-4dbf-b765-b3582d7ac626-config-volume\") pod \"6ae36acc-1683-4dbf-b765-b3582d7ac626\" (UID: \"6ae36acc-1683-4dbf-b765-b3582d7ac626\") " Jan 09 01:30:03 crc kubenswrapper[4945]: I0109 01:30:03.839417 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ae36acc-1683-4dbf-b765-b3582d7ac626-config-volume" (OuterVolumeSpecName: "config-volume") pod "6ae36acc-1683-4dbf-b765-b3582d7ac626" (UID: "6ae36acc-1683-4dbf-b765-b3582d7ac626"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:30:03 crc kubenswrapper[4945]: I0109 01:30:03.844659 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ae36acc-1683-4dbf-b765-b3582d7ac626-kube-api-access-d759m" (OuterVolumeSpecName: "kube-api-access-d759m") pod "6ae36acc-1683-4dbf-b765-b3582d7ac626" (UID: "6ae36acc-1683-4dbf-b765-b3582d7ac626"). InnerVolumeSpecName "kube-api-access-d759m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:30:03 crc kubenswrapper[4945]: I0109 01:30:03.848143 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae36acc-1683-4dbf-b765-b3582d7ac626-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6ae36acc-1683-4dbf-b765-b3582d7ac626" (UID: "6ae36acc-1683-4dbf-b765-b3582d7ac626"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:30:03 crc kubenswrapper[4945]: I0109 01:30:03.941206 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d759m\" (UniqueName: \"kubernetes.io/projected/6ae36acc-1683-4dbf-b765-b3582d7ac626-kube-api-access-d759m\") on node \"crc\" DevicePath \"\"" Jan 09 01:30:03 crc kubenswrapper[4945]: I0109 01:30:03.941244 4945 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ae36acc-1683-4dbf-b765-b3582d7ac626-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 01:30:03 crc kubenswrapper[4945]: I0109 01:30:03.941253 4945 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6ae36acc-1683-4dbf-b765-b3582d7ac626-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 01:30:04 crc kubenswrapper[4945]: I0109 01:30:04.424231 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" event={"ID":"6ae36acc-1683-4dbf-b765-b3582d7ac626","Type":"ContainerDied","Data":"b90d3a14eb867840e999ee47d850dacc9af3424fca2016ac282f5acd4dd79fbd"} Jan 09 01:30:04 crc kubenswrapper[4945]: I0109 01:30:04.424532 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b90d3a14eb867840e999ee47d850dacc9af3424fca2016ac282f5acd4dd79fbd" Jan 09 01:30:04 crc kubenswrapper[4945]: I0109 01:30:04.424341 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465370-xw9cs" Jan 09 01:30:04 crc kubenswrapper[4945]: I0109 01:30:04.484166 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb"] Jan 09 01:30:04 crc kubenswrapper[4945]: I0109 01:30:04.498109 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465325-vdhrb"] Jan 09 01:30:05 crc kubenswrapper[4945]: I0109 01:30:05.000454 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:30:05 crc kubenswrapper[4945]: E0109 01:30:05.000908 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:30:06 crc kubenswrapper[4945]: I0109 01:30:06.017086 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cf06166-4b8d-4bd5-b20a-f77619a52a56" path="/var/lib/kubelet/pods/7cf06166-4b8d-4bd5-b20a-f77619a52a56/volumes" Jan 09 01:30:16 crc kubenswrapper[4945]: I0109 01:30:16.002108 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:30:16 crc kubenswrapper[4945]: E0109 01:30:16.003548 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:30:31 crc kubenswrapper[4945]: I0109 01:30:31.000481 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:30:31 crc kubenswrapper[4945]: E0109 01:30:31.001207 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:30:33 crc kubenswrapper[4945]: I0109 01:30:33.731545 4945 generic.go:334] "Generic (PLEG): container finished" podID="386d3c0c-3552-4efb-8581-1a39c6f992dd" containerID="38741bcf3802da5cd47722126f056b0ad7ef368589256f930e300584f597360f" exitCode=0 Jan 09 01:30:33 crc kubenswrapper[4945]: I0109 01:30:33.731769 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" event={"ID":"386d3c0c-3552-4efb-8581-1a39c6f992dd","Type":"ContainerDied","Data":"38741bcf3802da5cd47722126f056b0ad7ef368589256f930e300584f597360f"} Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.203820 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.277023 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-ssh-key-openstack-cell1\") pod \"386d3c0c-3552-4efb-8581-1a39c6f992dd\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.277087 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7bbj\" (UniqueName: \"kubernetes.io/projected/386d3c0c-3552-4efb-8581-1a39c6f992dd-kube-api-access-z7bbj\") pod \"386d3c0c-3552-4efb-8581-1a39c6f992dd\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.277148 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-ceph\") pod \"386d3c0c-3552-4efb-8581-1a39c6f992dd\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.277216 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"386d3c0c-3552-4efb-8581-1a39c6f992dd\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.277318 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-inventory\") pod \"386d3c0c-3552-4efb-8581-1a39c6f992dd\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.277355 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-neutron-metadata-combined-ca-bundle\") pod \"386d3c0c-3552-4efb-8581-1a39c6f992dd\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.277446 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-nova-metadata-neutron-config-0\") pod \"386d3c0c-3552-4efb-8581-1a39c6f992dd\" (UID: \"386d3c0c-3552-4efb-8581-1a39c6f992dd\") " Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.282490 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "386d3c0c-3552-4efb-8581-1a39c6f992dd" (UID: "386d3c0c-3552-4efb-8581-1a39c6f992dd"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.283740 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/386d3c0c-3552-4efb-8581-1a39c6f992dd-kube-api-access-z7bbj" (OuterVolumeSpecName: "kube-api-access-z7bbj") pod "386d3c0c-3552-4efb-8581-1a39c6f992dd" (UID: "386d3c0c-3552-4efb-8581-1a39c6f992dd"). InnerVolumeSpecName "kube-api-access-z7bbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.294782 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-ceph" (OuterVolumeSpecName: "ceph") pod "386d3c0c-3552-4efb-8581-1a39c6f992dd" (UID: "386d3c0c-3552-4efb-8581-1a39c6f992dd"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.306047 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-inventory" (OuterVolumeSpecName: "inventory") pod "386d3c0c-3552-4efb-8581-1a39c6f992dd" (UID: "386d3c0c-3552-4efb-8581-1a39c6f992dd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.311810 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "386d3c0c-3552-4efb-8581-1a39c6f992dd" (UID: "386d3c0c-3552-4efb-8581-1a39c6f992dd"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.312244 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "386d3c0c-3552-4efb-8581-1a39c6f992dd" (UID: "386d3c0c-3552-4efb-8581-1a39c6f992dd"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.324189 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "386d3c0c-3552-4efb-8581-1a39c6f992dd" (UID: "386d3c0c-3552-4efb-8581-1a39c6f992dd"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.379615 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.379741 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7bbj\" (UniqueName: \"kubernetes.io/projected/386d3c0c-3552-4efb-8581-1a39c6f992dd-kube-api-access-z7bbj\") on node \"crc\" DevicePath \"\"" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.379799 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.379851 4945 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.379929 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.380161 4945 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.380233 4945 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/386d3c0c-3552-4efb-8581-1a39c6f992dd-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.757129 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" event={"ID":"386d3c0c-3552-4efb-8581-1a39c6f992dd","Type":"ContainerDied","Data":"9a96da226be8b8eb660ce443a4ac11d2c17bc0669333c062ccc839fdd69f953b"} Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.757529 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a96da226be8b8eb660ce443a4ac11d2c17bc0669333c062ccc839fdd69f953b" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.757227 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-f9588" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.846912 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-w8drn"] Jan 09 01:30:35 crc kubenswrapper[4945]: E0109 01:30:35.847470 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae36acc-1683-4dbf-b765-b3582d7ac626" containerName="collect-profiles" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.847489 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae36acc-1683-4dbf-b765-b3582d7ac626" containerName="collect-profiles" Jan 09 01:30:35 crc kubenswrapper[4945]: E0109 01:30:35.847532 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="386d3c0c-3552-4efb-8581-1a39c6f992dd" containerName="neutron-metadata-openstack-openstack-cell1" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.847540 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="386d3c0c-3552-4efb-8581-1a39c6f992dd" containerName="neutron-metadata-openstack-openstack-cell1" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.847764 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ae36acc-1683-4dbf-b765-b3582d7ac626" containerName="collect-profiles" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.847781 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="386d3c0c-3552-4efb-8581-1a39c6f992dd" containerName="neutron-metadata-openstack-openstack-cell1" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.848608 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.851926 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.852234 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.852402 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.852529 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.852763 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.860141 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-w8drn"] Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.900842 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.900904 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-ceph\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " 
pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.901016 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-inventory\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.901169 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.901210 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:35 crc kubenswrapper[4945]: I0109 01:30:35.901348 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vxz9\" (UniqueName: \"kubernetes.io/projected/1352adf5-5c12-4450-90dd-803a8503da11-kube-api-access-2vxz9\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:36 crc kubenswrapper[4945]: I0109 01:30:36.002525 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-inventory\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:36 crc kubenswrapper[4945]: I0109 01:30:36.002636 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:36 crc kubenswrapper[4945]: I0109 01:30:36.002664 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:36 crc kubenswrapper[4945]: I0109 01:30:36.003220 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vxz9\" (UniqueName: \"kubernetes.io/projected/1352adf5-5c12-4450-90dd-803a8503da11-kube-api-access-2vxz9\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:36 crc kubenswrapper[4945]: I0109 
01:30:36.003260 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:36 crc kubenswrapper[4945]: I0109 01:30:36.003285 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-ceph\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:36 crc kubenswrapper[4945]: I0109 01:30:36.008258 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:36 crc kubenswrapper[4945]: I0109 01:30:36.008366 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:36 crc kubenswrapper[4945]: I0109 01:30:36.009747 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-ceph\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:36 crc kubenswrapper[4945]: I0109 01:30:36.012227 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-inventory\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:36 crc kubenswrapper[4945]: I0109 01:30:36.016059 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-ssh-key-openstack-cell1\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:36 crc kubenswrapper[4945]: I0109 01:30:36.020573 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vxz9\" (UniqueName: \"kubernetes.io/projected/1352adf5-5c12-4450-90dd-803a8503da11-kube-api-access-2vxz9\") pod \"libvirt-openstack-openstack-cell1-w8drn\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:36 crc kubenswrapper[4945]: I0109 01:30:36.168555 4945 util.go:30] "No sandbox for pod can be found. 
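The cpu_manager/memory_manager "RemoveStaleState" lines above show the resource managers pruning assignments recorded for containers whose pods no longer exist before the new libvirt pod is admitted. A sketch of that pruning step, with a simplified stand-in for the managers' state:

// stalestate.go - sketch of RemoveStaleState: drop assignments for gone pods.
package main

import "fmt"

type assignment struct{ podUID, container string }

// removeStaleState deletes every recorded assignment whose pod is not active.
// Deleting map entries during range is safe in Go.
func removeStaleState(state map[assignment]string, active map[string]bool) {
	for a := range state {
		if !active[a.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", a.podUID, a.container)
			delete(state, a)
		}
	}
}

func main() {
	state := map[assignment]string{
		{"386d3c0c-3552-4efb-8581-1a39c6f992dd", "neutron-metadata-openstack-openstack-cell1"}: "cpuset=0-3",
		{"6ae36acc-1683-4dbf-b765-b3582d7ac626", "collect-profiles"}:                           "cpuset=0-3",
	}
	removeStaleState(state, map[string]bool{"1352adf5-5c12-4450-90dd-803a8503da11": true})
	fmt.Println("remaining assignments:", len(state)) // 0
}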
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:30:36 crc kubenswrapper[4945]: I0109 01:30:36.721386 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-w8drn"] Jan 09 01:30:36 crc kubenswrapper[4945]: I0109 01:30:36.768278 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-w8drn" event={"ID":"1352adf5-5c12-4450-90dd-803a8503da11","Type":"ContainerStarted","Data":"4b3f9219490fee8c7b9af24b48971b16c164d64f84221750cb7a8f164ac83064"} Jan 09 01:30:37 crc kubenswrapper[4945]: I0109 01:30:37.777853 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-w8drn" event={"ID":"1352adf5-5c12-4450-90dd-803a8503da11","Type":"ContainerStarted","Data":"54aac991a910759d814edb82fb5f72e2627bd9848b7ca023cc97ecb1fd87a616"} Jan 09 01:30:37 crc kubenswrapper[4945]: I0109 01:30:37.799353 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-w8drn" podStartSLOduration=2.624719079 podStartE2EDuration="2.799334957s" podCreationTimestamp="2026-01-09 01:30:35 +0000 UTC" firstStartedPulling="2026-01-09 01:30:36.724965179 +0000 UTC m=+8107.036124125" lastFinishedPulling="2026-01-09 01:30:36.899581057 +0000 UTC m=+8107.210740003" observedRunningTime="2026-01-09 01:30:37.798086866 +0000 UTC m=+8108.109245812" watchObservedRunningTime="2026-01-09 01:30:37.799334957 +0000 UTC m=+8108.110493903" Jan 09 01:30:42 crc kubenswrapper[4945]: I0109 01:30:42.000775 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:30:42 crc kubenswrapper[4945]: E0109 01:30:42.001657 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:30:45 crc kubenswrapper[4945]: I0109 01:30:45.617378 4945 scope.go:117] "RemoveContainer" containerID="5e5b0d80a1527f5de36ebd0fe40ef783870f7719fae66e0bad7004f86fa0a085" Jan 09 01:30:56 crc kubenswrapper[4945]: I0109 01:30:56.001785 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:30:56 crc kubenswrapper[4945]: I0109 01:30:56.975624 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"7fe3b7320ae35bd5d345f464dc00e0ed721cd2e8db3afcf5b5adff71c7920a53"} Jan 09 01:33:13 crc kubenswrapper[4945]: I0109 01:33:13.578368 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:33:13 crc kubenswrapper[4945]: I0109 01:33:13.579137 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:33:40 crc kubenswrapper[4945]: I0109 01:33:40.845271 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cwn72"] Jan 09 01:33:40 crc kubenswrapper[4945]: I0109 01:33:40.854240 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwn72" Jan 09 01:33:40 crc kubenswrapper[4945]: I0109 01:33:40.879312 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwn72"] Jan 09 01:33:40 crc kubenswrapper[4945]: I0109 01:33:40.939278 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0daf63fd-ece1-4302-9d31-956dab306102-utilities\") pod \"redhat-operators-cwn72\" (UID: \"0daf63fd-ece1-4302-9d31-956dab306102\") " pod="openshift-marketplace/redhat-operators-cwn72" Jan 09 01:33:40 crc kubenswrapper[4945]: I0109 01:33:40.939348 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrdl8\" (UniqueName: \"kubernetes.io/projected/0daf63fd-ece1-4302-9d31-956dab306102-kube-api-access-rrdl8\") pod \"redhat-operators-cwn72\" (UID: \"0daf63fd-ece1-4302-9d31-956dab306102\") " pod="openshift-marketplace/redhat-operators-cwn72" Jan 09 01:33:40 crc kubenswrapper[4945]: I0109 01:33:40.939445 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0daf63fd-ece1-4302-9d31-956dab306102-catalog-content\") pod \"redhat-operators-cwn72\" (UID: \"0daf63fd-ece1-4302-9d31-956dab306102\") " pod="openshift-marketplace/redhat-operators-cwn72" Jan 09 01:33:41 crc kubenswrapper[4945]: I0109 01:33:41.041139 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0daf63fd-ece1-4302-9d31-956dab306102-utilities\") pod \"redhat-operators-cwn72\" (UID: \"0daf63fd-ece1-4302-9d31-956dab306102\") " pod="openshift-marketplace/redhat-operators-cwn72" Jan 09 01:33:41 crc kubenswrapper[4945]: I0109 01:33:41.041230 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrdl8\" (UniqueName: \"kubernetes.io/projected/0daf63fd-ece1-4302-9d31-956dab306102-kube-api-access-rrdl8\") pod \"redhat-operators-cwn72\" (UID: \"0daf63fd-ece1-4302-9d31-956dab306102\") " pod="openshift-marketplace/redhat-operators-cwn72" Jan 09 01:33:41 crc kubenswrapper[4945]: I0109 01:33:41.041320 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0daf63fd-ece1-4302-9d31-956dab306102-catalog-content\") pod \"redhat-operators-cwn72\" (UID: \"0daf63fd-ece1-4302-9d31-956dab306102\") " pod="openshift-marketplace/redhat-operators-cwn72" Jan 09 01:33:41 crc kubenswrapper[4945]: I0109 01:33:41.041725 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0daf63fd-ece1-4302-9d31-956dab306102-utilities\") pod \"redhat-operators-cwn72\" (UID: \"0daf63fd-ece1-4302-9d31-956dab306102\") " pod="openshift-marketplace/redhat-operators-cwn72" Jan 09 01:33:41 crc kubenswrapper[4945]: I0109 01:33:41.041943 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0daf63fd-ece1-4302-9d31-956dab306102-catalog-content\") pod \"redhat-operators-cwn72\" (UID: \"0daf63fd-ece1-4302-9d31-956dab306102\") " pod="openshift-marketplace/redhat-operators-cwn72" Jan 09 01:33:41 crc kubenswrapper[4945]: I0109 01:33:41.064782 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrdl8\" (UniqueName: \"kubernetes.io/projected/0daf63fd-ece1-4302-9d31-956dab306102-kube-api-access-rrdl8\") pod \"redhat-operators-cwn72\" (UID: \"0daf63fd-ece1-4302-9d31-956dab306102\") " pod="openshift-marketplace/redhat-operators-cwn72" Jan 09 01:33:41 crc kubenswrapper[4945]: I0109 01:33:41.181999 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwn72" Jan 09 01:33:41 crc kubenswrapper[4945]: I0109 01:33:41.681456 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwn72"] Jan 09 01:33:41 crc kubenswrapper[4945]: I0109 01:33:41.788135 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwn72" event={"ID":"0daf63fd-ece1-4302-9d31-956dab306102","Type":"ContainerStarted","Data":"2e6ccfcca80c745e8a38719ed331f4aa34151bf548346b570b012a07502f97bf"} Jan 09 01:33:42 crc kubenswrapper[4945]: I0109 01:33:42.804386 4945 generic.go:334] "Generic (PLEG): container finished" podID="0daf63fd-ece1-4302-9d31-956dab306102" containerID="d235861059e9bab22f70c367ae7c5c8a0d4def330ee70656cc4ceb07df67ba3b" exitCode=0 Jan 09 01:33:42 crc kubenswrapper[4945]: I0109 01:33:42.804455 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwn72" event={"ID":"0daf63fd-ece1-4302-9d31-956dab306102","Type":"ContainerDied","Data":"d235861059e9bab22f70c367ae7c5c8a0d4def330ee70656cc4ceb07df67ba3b"} Jan 09 01:33:42 crc kubenswrapper[4945]: I0109 01:33:42.809402 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 01:33:43 crc kubenswrapper[4945]: I0109 01:33:43.577946 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:33:43 crc kubenswrapper[4945]: I0109 01:33:43.578078 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:33:44 crc kubenswrapper[4945]: I0109 01:33:44.831854 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwn72" event={"ID":"0daf63fd-ece1-4302-9d31-956dab306102","Type":"ContainerStarted","Data":"59d61cc43f7addf150feaede47b4d1374eeff37ae2dc14442d810e33bc56ab49"} Jan 09 01:33:46 crc kubenswrapper[4945]: I0109 01:33:46.851583 4945 generic.go:334] "Generic (PLEG): container finished" podID="0daf63fd-ece1-4302-9d31-956dab306102" containerID="59d61cc43f7addf150feaede47b4d1374eeff37ae2dc14442d810e33bc56ab49" exitCode=0 Jan 09 01:33:46 crc kubenswrapper[4945]: I0109 01:33:46.851648 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-cwn72" event={"ID":"0daf63fd-ece1-4302-9d31-956dab306102","Type":"ContainerDied","Data":"59d61cc43f7addf150feaede47b4d1374eeff37ae2dc14442d810e33bc56ab49"} Jan 09 01:33:47 crc kubenswrapper[4945]: I0109 01:33:47.865307 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwn72" event={"ID":"0daf63fd-ece1-4302-9d31-956dab306102","Type":"ContainerStarted","Data":"6fca7131762171f04fc075f77728cfd54b322a149965dbabc211a2b07e57c898"} Jan 09 01:33:47 crc kubenswrapper[4945]: I0109 01:33:47.904434 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cwn72" podStartSLOduration=3.39954259 podStartE2EDuration="7.904413277s" podCreationTimestamp="2026-01-09 01:33:40 +0000 UTC" firstStartedPulling="2026-01-09 01:33:42.808849935 +0000 UTC m=+8293.120008921" lastFinishedPulling="2026-01-09 01:33:47.313720662 +0000 UTC m=+8297.624879608" observedRunningTime="2026-01-09 01:33:47.88752844 +0000 UTC m=+8298.198687426" watchObservedRunningTime="2026-01-09 01:33:47.904413277 +0000 UTC m=+8298.215572223" Jan 09 01:33:51 crc kubenswrapper[4945]: I0109 01:33:51.183037 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cwn72" Jan 09 01:33:51 crc kubenswrapper[4945]: I0109 01:33:51.183685 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cwn72" Jan 09 01:33:52 crc kubenswrapper[4945]: I0109 01:33:52.228809 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cwn72" podUID="0daf63fd-ece1-4302-9d31-956dab306102" containerName="registry-server" probeResult="failure" output=< Jan 09 01:33:52 crc kubenswrapper[4945]: timeout: failed to connect service ":50051" within 1s Jan 09 01:33:52 crc kubenswrapper[4945]: > Jan 09 01:34:01 crc kubenswrapper[4945]: I0109 01:34:01.230452 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cwn72" Jan 09 01:34:01 crc kubenswrapper[4945]: I0109 01:34:01.286769 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cwn72" Jan 09 01:34:01 crc kubenswrapper[4945]: I0109 01:34:01.467685 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cwn72"] Jan 09 01:34:03 crc kubenswrapper[4945]: I0109 01:34:03.045357 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cwn72" podUID="0daf63fd-ece1-4302-9d31-956dab306102" containerName="registry-server" containerID="cri-o://6fca7131762171f04fc075f77728cfd54b322a149965dbabc211a2b07e57c898" gracePeriod=2 Jan 09 01:34:03 crc kubenswrapper[4945]: I0109 01:34:03.571288 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cwn72" Jan 09 01:34:03 crc kubenswrapper[4945]: I0109 01:34:03.683371 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0daf63fd-ece1-4302-9d31-956dab306102-catalog-content\") pod \"0daf63fd-ece1-4302-9d31-956dab306102\" (UID: \"0daf63fd-ece1-4302-9d31-956dab306102\") " Jan 09 01:34:03 crc kubenswrapper[4945]: I0109 01:34:03.683437 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0daf63fd-ece1-4302-9d31-956dab306102-utilities\") pod \"0daf63fd-ece1-4302-9d31-956dab306102\" (UID: \"0daf63fd-ece1-4302-9d31-956dab306102\") " Jan 09 01:34:03 crc kubenswrapper[4945]: I0109 01:34:03.684421 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0daf63fd-ece1-4302-9d31-956dab306102-utilities" (OuterVolumeSpecName: "utilities") pod "0daf63fd-ece1-4302-9d31-956dab306102" (UID: "0daf63fd-ece1-4302-9d31-956dab306102"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:34:03 crc kubenswrapper[4945]: I0109 01:34:03.684497 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrdl8\" (UniqueName: \"kubernetes.io/projected/0daf63fd-ece1-4302-9d31-956dab306102-kube-api-access-rrdl8\") pod \"0daf63fd-ece1-4302-9d31-956dab306102\" (UID: \"0daf63fd-ece1-4302-9d31-956dab306102\") " Jan 09 01:34:03 crc kubenswrapper[4945]: I0109 01:34:03.686118 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0daf63fd-ece1-4302-9d31-956dab306102-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 01:34:03 crc kubenswrapper[4945]: I0109 01:34:03.694724 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0daf63fd-ece1-4302-9d31-956dab306102-kube-api-access-rrdl8" (OuterVolumeSpecName: "kube-api-access-rrdl8") pod "0daf63fd-ece1-4302-9d31-956dab306102" (UID: "0daf63fd-ece1-4302-9d31-956dab306102"). InnerVolumeSpecName "kube-api-access-rrdl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:34:03 crc kubenswrapper[4945]: I0109 01:34:03.788538 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrdl8\" (UniqueName: \"kubernetes.io/projected/0daf63fd-ece1-4302-9d31-956dab306102-kube-api-access-rrdl8\") on node \"crc\" DevicePath \"\"" Jan 09 01:34:03 crc kubenswrapper[4945]: I0109 01:34:03.810966 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0daf63fd-ece1-4302-9d31-956dab306102-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0daf63fd-ece1-4302-9d31-956dab306102" (UID: "0daf63fd-ece1-4302-9d31-956dab306102"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:34:03 crc kubenswrapper[4945]: I0109 01:34:03.891271 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0daf63fd-ece1-4302-9d31-956dab306102-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 01:34:04 crc kubenswrapper[4945]: I0109 01:34:04.059277 4945 generic.go:334] "Generic (PLEG): container finished" podID="0daf63fd-ece1-4302-9d31-956dab306102" containerID="6fca7131762171f04fc075f77728cfd54b322a149965dbabc211a2b07e57c898" exitCode=0 Jan 09 01:34:04 crc kubenswrapper[4945]: I0109 01:34:04.059328 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwn72" event={"ID":"0daf63fd-ece1-4302-9d31-956dab306102","Type":"ContainerDied","Data":"6fca7131762171f04fc075f77728cfd54b322a149965dbabc211a2b07e57c898"} Jan 09 01:34:04 crc kubenswrapper[4945]: I0109 01:34:04.059360 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwn72" event={"ID":"0daf63fd-ece1-4302-9d31-956dab306102","Type":"ContainerDied","Data":"2e6ccfcca80c745e8a38719ed331f4aa34151bf548346b570b012a07502f97bf"} Jan 09 01:34:04 crc kubenswrapper[4945]: I0109 01:34:04.059379 4945 scope.go:117] "RemoveContainer" containerID="6fca7131762171f04fc075f77728cfd54b322a149965dbabc211a2b07e57c898" Jan 09 01:34:04 crc kubenswrapper[4945]: I0109 01:34:04.059523 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cwn72" Jan 09 01:34:04 crc kubenswrapper[4945]: I0109 01:34:04.090362 4945 scope.go:117] "RemoveContainer" containerID="59d61cc43f7addf150feaede47b4d1374eeff37ae2dc14442d810e33bc56ab49" Jan 09 01:34:04 crc kubenswrapper[4945]: I0109 01:34:04.100121 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cwn72"] Jan 09 01:34:04 crc kubenswrapper[4945]: I0109 01:34:04.111647 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cwn72"] Jan 09 01:34:04 crc kubenswrapper[4945]: I0109 01:34:04.137099 4945 scope.go:117] "RemoveContainer" containerID="d235861059e9bab22f70c367ae7c5c8a0d4def330ee70656cc4ceb07df67ba3b" Jan 09 01:34:04 crc kubenswrapper[4945]: I0109 01:34:04.184979 4945 scope.go:117] "RemoveContainer" containerID="6fca7131762171f04fc075f77728cfd54b322a149965dbabc211a2b07e57c898" Jan 09 01:34:04 crc kubenswrapper[4945]: E0109 01:34:04.185714 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fca7131762171f04fc075f77728cfd54b322a149965dbabc211a2b07e57c898\": container with ID starting with 6fca7131762171f04fc075f77728cfd54b322a149965dbabc211a2b07e57c898 not found: ID does not exist" containerID="6fca7131762171f04fc075f77728cfd54b322a149965dbabc211a2b07e57c898" Jan 09 01:34:04 crc kubenswrapper[4945]: I0109 01:34:04.185784 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fca7131762171f04fc075f77728cfd54b322a149965dbabc211a2b07e57c898"} err="failed to get container status \"6fca7131762171f04fc075f77728cfd54b322a149965dbabc211a2b07e57c898\": rpc error: code = NotFound desc = could not find container \"6fca7131762171f04fc075f77728cfd54b322a149965dbabc211a2b07e57c898\": container with ID starting with 6fca7131762171f04fc075f77728cfd54b322a149965dbabc211a2b07e57c898 not found: ID does not exist" Jan 09 01:34:04 crc 
kubenswrapper[4945]: I0109 01:34:04.185831 4945 scope.go:117] "RemoveContainer" containerID="59d61cc43f7addf150feaede47b4d1374eeff37ae2dc14442d810e33bc56ab49" Jan 09 01:34:04 crc kubenswrapper[4945]: E0109 01:34:04.186195 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d61cc43f7addf150feaede47b4d1374eeff37ae2dc14442d810e33bc56ab49\": container with ID starting with 59d61cc43f7addf150feaede47b4d1374eeff37ae2dc14442d810e33bc56ab49 not found: ID does not exist" containerID="59d61cc43f7addf150feaede47b4d1374eeff37ae2dc14442d810e33bc56ab49" Jan 09 01:34:04 crc kubenswrapper[4945]: I0109 01:34:04.186237 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d61cc43f7addf150feaede47b4d1374eeff37ae2dc14442d810e33bc56ab49"} err="failed to get container status \"59d61cc43f7addf150feaede47b4d1374eeff37ae2dc14442d810e33bc56ab49\": rpc error: code = NotFound desc = could not find container \"59d61cc43f7addf150feaede47b4d1374eeff37ae2dc14442d810e33bc56ab49\": container with ID starting with 59d61cc43f7addf150feaede47b4d1374eeff37ae2dc14442d810e33bc56ab49 not found: ID does not exist" Jan 09 01:34:04 crc kubenswrapper[4945]: I0109 01:34:04.186266 4945 scope.go:117] "RemoveContainer" containerID="d235861059e9bab22f70c367ae7c5c8a0d4def330ee70656cc4ceb07df67ba3b" Jan 09 01:34:04 crc kubenswrapper[4945]: E0109 01:34:04.186571 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d235861059e9bab22f70c367ae7c5c8a0d4def330ee70656cc4ceb07df67ba3b\": container with ID starting with d235861059e9bab22f70c367ae7c5c8a0d4def330ee70656cc4ceb07df67ba3b not found: ID does not exist" containerID="d235861059e9bab22f70c367ae7c5c8a0d4def330ee70656cc4ceb07df67ba3b" Jan 09 01:34:04 crc kubenswrapper[4945]: I0109 01:34:04.186609 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d235861059e9bab22f70c367ae7c5c8a0d4def330ee70656cc4ceb07df67ba3b"} err="failed to get container status \"d235861059e9bab22f70c367ae7c5c8a0d4def330ee70656cc4ceb07df67ba3b\": rpc error: code = NotFound desc = could not find container \"d235861059e9bab22f70c367ae7c5c8a0d4def330ee70656cc4ceb07df67ba3b\": container with ID starting with d235861059e9bab22f70c367ae7c5c8a0d4def330ee70656cc4ceb07df67ba3b not found: ID does not exist" Jan 09 01:34:06 crc kubenswrapper[4945]: I0109 01:34:06.025363 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0daf63fd-ece1-4302-9d31-956dab306102" path="/var/lib/kubelet/pods/0daf63fd-ece1-4302-9d31-956dab306102/volumes" Jan 09 01:34:13 crc kubenswrapper[4945]: I0109 01:34:13.578567 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:34:13 crc kubenswrapper[4945]: I0109 01:34:13.580462 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:34:13 crc kubenswrapper[4945]: I0109 01:34:13.580590 4945 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 09 01:34:13 crc kubenswrapper[4945]: I0109 01:34:13.581540 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fe3b7320ae35bd5d345f464dc00e0ed721cd2e8db3afcf5b5adff71c7920a53"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 01:34:13 crc kubenswrapper[4945]: I0109 01:34:13.581736 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://7fe3b7320ae35bd5d345f464dc00e0ed721cd2e8db3afcf5b5adff71c7920a53" gracePeriod=600 Jan 09 01:34:14 crc kubenswrapper[4945]: I0109 01:34:14.250653 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="7fe3b7320ae35bd5d345f464dc00e0ed721cd2e8db3afcf5b5adff71c7920a53" exitCode=0 Jan 09 01:34:14 crc kubenswrapper[4945]: I0109 01:34:14.251387 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"7fe3b7320ae35bd5d345f464dc00e0ed721cd2e8db3afcf5b5adff71c7920a53"} Jan 09 01:34:14 crc kubenswrapper[4945]: I0109 01:34:14.251474 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494"} Jan 09 01:34:14 crc kubenswrapper[4945]: I0109 01:34:14.251546 4945 scope.go:117] "RemoveContainer" containerID="7166750c17d8014ba1fe01f8e43d8719cc9221ccaa4e85d998bf65ebd5a58400" Jan 09 01:35:04 crc kubenswrapper[4945]: I0109 01:35:04.825596 4945 generic.go:334] "Generic (PLEG): container finished" podID="1352adf5-5c12-4450-90dd-803a8503da11" containerID="54aac991a910759d814edb82fb5f72e2627bd9848b7ca023cc97ecb1fd87a616" exitCode=0 Jan 09 01:35:04 crc kubenswrapper[4945]: I0109 01:35:04.825662 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-w8drn" event={"ID":"1352adf5-5c12-4450-90dd-803a8503da11","Type":"ContainerDied","Data":"54aac991a910759d814edb82fb5f72e2627bd9848b7ca023cc97ecb1fd87a616"} Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.274039 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.403795 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vxz9\" (UniqueName: \"kubernetes.io/projected/1352adf5-5c12-4450-90dd-803a8503da11-kube-api-access-2vxz9\") pod \"1352adf5-5c12-4450-90dd-803a8503da11\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.403962 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-ceph\") pod \"1352adf5-5c12-4450-90dd-803a8503da11\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.404044 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-ssh-key-openstack-cell1\") pod \"1352adf5-5c12-4450-90dd-803a8503da11\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.404086 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-libvirt-secret-0\") pod \"1352adf5-5c12-4450-90dd-803a8503da11\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.404117 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-inventory\") pod \"1352adf5-5c12-4450-90dd-803a8503da11\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.404249 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-libvirt-combined-ca-bundle\") pod \"1352adf5-5c12-4450-90dd-803a8503da11\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.409042 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-ceph" (OuterVolumeSpecName: "ceph") pod "1352adf5-5c12-4450-90dd-803a8503da11" (UID: "1352adf5-5c12-4450-90dd-803a8503da11"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.411671 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1352adf5-5c12-4450-90dd-803a8503da11" (UID: "1352adf5-5c12-4450-90dd-803a8503da11"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.419702 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1352adf5-5c12-4450-90dd-803a8503da11-kube-api-access-2vxz9" (OuterVolumeSpecName: "kube-api-access-2vxz9") pod "1352adf5-5c12-4450-90dd-803a8503da11" (UID: "1352adf5-5c12-4450-90dd-803a8503da11"). InnerVolumeSpecName "kube-api-access-2vxz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.438291 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "1352adf5-5c12-4450-90dd-803a8503da11" (UID: "1352adf5-5c12-4450-90dd-803a8503da11"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:35:06 crc kubenswrapper[4945]: E0109 01:35:06.445407 4945 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-libvirt-secret-0 podName:1352adf5-5c12-4450-90dd-803a8503da11 nodeName:}" failed. No retries permitted until 2026-01-09 01:35:06.945361149 +0000 UTC m=+8377.256520105 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "libvirt-secret-0" (UniqueName: "kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-libvirt-secret-0") pod "1352adf5-5c12-4450-90dd-803a8503da11" (UID: "1352adf5-5c12-4450-90dd-803a8503da11") : error deleting /var/lib/kubelet/pods/1352adf5-5c12-4450-90dd-803a8503da11/volume-subpaths: remove /var/lib/kubelet/pods/1352adf5-5c12-4450-90dd-803a8503da11/volume-subpaths: no such file or directory Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.447321 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-inventory" (OuterVolumeSpecName: "inventory") pod "1352adf5-5c12-4450-90dd-803a8503da11" (UID: "1352adf5-5c12-4450-90dd-803a8503da11"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.506406 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.506435 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.506447 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.506457 4945 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.506466 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vxz9\" (UniqueName: \"kubernetes.io/projected/1352adf5-5c12-4450-90dd-803a8503da11-kube-api-access-2vxz9\") on node \"crc\" DevicePath \"\"" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.844690 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-w8drn" event={"ID":"1352adf5-5c12-4450-90dd-803a8503da11","Type":"ContainerDied","Data":"4b3f9219490fee8c7b9af24b48971b16c164d64f84221750cb7a8f164ac83064"} Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.845103 4945 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="4b3f9219490fee8c7b9af24b48971b16c164d64f84221750cb7a8f164ac83064" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.844758 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-w8drn" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.961497 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-dcncv"] Jan 09 01:35:06 crc kubenswrapper[4945]: E0109 01:35:06.961984 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1352adf5-5c12-4450-90dd-803a8503da11" containerName="libvirt-openstack-openstack-cell1" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.962016 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="1352adf5-5c12-4450-90dd-803a8503da11" containerName="libvirt-openstack-openstack-cell1" Jan 09 01:35:06 crc kubenswrapper[4945]: E0109 01:35:06.962031 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0daf63fd-ece1-4302-9d31-956dab306102" containerName="extract-utilities" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.962038 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="0daf63fd-ece1-4302-9d31-956dab306102" containerName="extract-utilities" Jan 09 01:35:06 crc kubenswrapper[4945]: E0109 01:35:06.962057 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0daf63fd-ece1-4302-9d31-956dab306102" containerName="registry-server" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.962065 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="0daf63fd-ece1-4302-9d31-956dab306102" containerName="registry-server" Jan 09 01:35:06 crc kubenswrapper[4945]: E0109 01:35:06.962092 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0daf63fd-ece1-4302-9d31-956dab306102" containerName="extract-content" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.962098 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="0daf63fd-ece1-4302-9d31-956dab306102" containerName="extract-content" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.962297 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="0daf63fd-ece1-4302-9d31-956dab306102" containerName="registry-server" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.962312 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="1352adf5-5c12-4450-90dd-803a8503da11" containerName="libvirt-openstack-openstack-cell1" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.963168 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.966019 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.966079 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.967683 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 09 01:35:06 crc kubenswrapper[4945]: I0109 01:35:06.980301 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-dcncv"] Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.017421 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-libvirt-secret-0\") pod \"1352adf5-5c12-4450-90dd-803a8503da11\" (UID: \"1352adf5-5c12-4450-90dd-803a8503da11\") " Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.023355 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "1352adf5-5c12-4450-90dd-803a8503da11" (UID: "1352adf5-5c12-4450-90dd-803a8503da11"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.119246 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-ceph\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.119321 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdnz6\" (UniqueName: \"kubernetes.io/projected/5a0cc78a-1390-4815-9335-e9f030e50d32-kube-api-access-jdnz6\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.119403 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.119576 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.119731 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" 
(UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.119786 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.119814 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-inventory\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.119841 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.119908 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.119937 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.120054 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.120163 4945 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1352adf5-5c12-4450-90dd-803a8503da11-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.221635 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cells-global-config-1\") pod 
\"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.221684 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.221736 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.221764 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-ceph\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.221802 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdnz6\" (UniqueName: \"kubernetes.io/projected/5a0cc78a-1390-4815-9335-e9f030e50d32-kube-api-access-jdnz6\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.221873 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.221907 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.221948 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.221971 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " 
pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.222007 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-inventory\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.222025 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.223452 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.224233 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.225795 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.226067 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-inventory\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.226546 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.227665 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-ceph\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.228798 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.228969 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.230047 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.230220 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.242664 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdnz6\" (UniqueName: \"kubernetes.io/projected/5a0cc78a-1390-4815-9335-e9f030e50d32-kube-api-access-jdnz6\") pod \"nova-cell1-openstack-openstack-cell1-dcncv\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.296526 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:35:07 crc kubenswrapper[4945]: I0109 01:35:07.947694 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-dcncv"] Jan 09 01:35:08 crc kubenswrapper[4945]: I0109 01:35:08.905565 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" event={"ID":"5a0cc78a-1390-4815-9335-e9f030e50d32","Type":"ContainerStarted","Data":"bd79166e90bfd5d75605372f2c47c41d860076f15f68a1864e750c4107aa62d1"} Jan 09 01:35:08 crc kubenswrapper[4945]: I0109 01:35:08.905922 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" event={"ID":"5a0cc78a-1390-4815-9335-e9f030e50d32","Type":"ContainerStarted","Data":"b60b0cb970c7db8eb17c1e64d17674c55ad6ff4f8aaafa3695e52d529779d773"} Jan 09 01:35:08 crc kubenswrapper[4945]: I0109 01:35:08.941509 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" podStartSLOduration=2.7351109129999998 podStartE2EDuration="2.941359552s" podCreationTimestamp="2026-01-09 01:35:06 +0000 UTC" firstStartedPulling="2026-01-09 01:35:07.957645101 +0000 UTC m=+8378.268804057" lastFinishedPulling="2026-01-09 01:35:08.16389375 +0000 UTC m=+8378.475052696" observedRunningTime="2026-01-09 01:35:08.92787364 +0000 UTC m=+8379.239032586" watchObservedRunningTime="2026-01-09 01:35:08.941359552 +0000 UTC m=+8379.252518498" Jan 09 01:36:13 crc kubenswrapper[4945]: I0109 01:36:13.577756 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:36:13 crc kubenswrapper[4945]: I0109 01:36:13.578510 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:36:21 crc kubenswrapper[4945]: I0109 01:36:21.611635 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tn6qx"] Jan 09 01:36:21 crc kubenswrapper[4945]: I0109 01:36:21.614401 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tn6qx" Jan 09 01:36:21 crc kubenswrapper[4945]: I0109 01:36:21.624965 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55216b8-bd10-4c99-8756-c948d4f45037-catalog-content\") pod \"community-operators-tn6qx\" (UID: \"d55216b8-bd10-4c99-8756-c948d4f45037\") " pod="openshift-marketplace/community-operators-tn6qx" Jan 09 01:36:21 crc kubenswrapper[4945]: I0109 01:36:21.625054 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn6xg\" (UniqueName: \"kubernetes.io/projected/d55216b8-bd10-4c99-8756-c948d4f45037-kube-api-access-hn6xg\") pod \"community-operators-tn6qx\" (UID: \"d55216b8-bd10-4c99-8756-c948d4f45037\") " pod="openshift-marketplace/community-operators-tn6qx" Jan 09 01:36:21 crc kubenswrapper[4945]: I0109 01:36:21.625149 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55216b8-bd10-4c99-8756-c948d4f45037-utilities\") pod \"community-operators-tn6qx\" (UID: \"d55216b8-bd10-4c99-8756-c948d4f45037\") " pod="openshift-marketplace/community-operators-tn6qx" Jan 09 01:36:21 crc kubenswrapper[4945]: I0109 01:36:21.658743 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tn6qx"] Jan 09 01:36:21 crc kubenswrapper[4945]: I0109 01:36:21.728510 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn6xg\" (UniqueName: \"kubernetes.io/projected/d55216b8-bd10-4c99-8756-c948d4f45037-kube-api-access-hn6xg\") pod \"community-operators-tn6qx\" (UID: \"d55216b8-bd10-4c99-8756-c948d4f45037\") " pod="openshift-marketplace/community-operators-tn6qx" Jan 09 01:36:21 crc kubenswrapper[4945]: I0109 01:36:21.728599 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55216b8-bd10-4c99-8756-c948d4f45037-utilities\") pod \"community-operators-tn6qx\" (UID: \"d55216b8-bd10-4c99-8756-c948d4f45037\") " pod="openshift-marketplace/community-operators-tn6qx" Jan 09 01:36:21 crc kubenswrapper[4945]: I0109 01:36:21.728718 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55216b8-bd10-4c99-8756-c948d4f45037-catalog-content\") pod \"community-operators-tn6qx\" (UID: \"d55216b8-bd10-4c99-8756-c948d4f45037\") " pod="openshift-marketplace/community-operators-tn6qx" Jan 09 01:36:21 crc kubenswrapper[4945]: I0109 01:36:21.729269 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55216b8-bd10-4c99-8756-c948d4f45037-catalog-content\") pod \"community-operators-tn6qx\" (UID: \"d55216b8-bd10-4c99-8756-c948d4f45037\") " pod="openshift-marketplace/community-operators-tn6qx" Jan 09 01:36:21 crc kubenswrapper[4945]: I0109 01:36:21.729485 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55216b8-bd10-4c99-8756-c948d4f45037-utilities\") pod \"community-operators-tn6qx\" (UID: \"d55216b8-bd10-4c99-8756-c948d4f45037\") " pod="openshift-marketplace/community-operators-tn6qx" Jan 09 01:36:21 crc kubenswrapper[4945]: I0109 01:36:21.774937 4945 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hn6xg\" (UniqueName: \"kubernetes.io/projected/d55216b8-bd10-4c99-8756-c948d4f45037-kube-api-access-hn6xg\") pod \"community-operators-tn6qx\" (UID: \"d55216b8-bd10-4c99-8756-c948d4f45037\") " pod="openshift-marketplace/community-operators-tn6qx" Jan 09 01:36:21 crc kubenswrapper[4945]: I0109 01:36:21.965673 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tn6qx" Jan 09 01:36:22 crc kubenswrapper[4945]: I0109 01:36:22.535245 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tn6qx"] Jan 09 01:36:22 crc kubenswrapper[4945]: I0109 01:36:22.738594 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tn6qx" event={"ID":"d55216b8-bd10-4c99-8756-c948d4f45037","Type":"ContainerStarted","Data":"92eb3fdfd52e9dfce7d2d75791a220fa900581bb9c228989888429a25c0cc4f6"} Jan 09 01:36:23 crc kubenswrapper[4945]: I0109 01:36:23.748844 4945 generic.go:334] "Generic (PLEG): container finished" podID="d55216b8-bd10-4c99-8756-c948d4f45037" containerID="c13f2500dd1b1ec79a5edaebd9a02ee68b79bef4e7da5e7fc11a29cb98bdf442" exitCode=0 Jan 09 01:36:23 crc kubenswrapper[4945]: I0109 01:36:23.748946 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tn6qx" event={"ID":"d55216b8-bd10-4c99-8756-c948d4f45037","Type":"ContainerDied","Data":"c13f2500dd1b1ec79a5edaebd9a02ee68b79bef4e7da5e7fc11a29cb98bdf442"} Jan 09 01:36:25 crc kubenswrapper[4945]: I0109 01:36:25.780886 4945 generic.go:334] "Generic (PLEG): container finished" podID="d55216b8-bd10-4c99-8756-c948d4f45037" containerID="efa6d6386cc4990f01da64dc9b19b2c0887585949b59634ac3e1991bfc8c3ddf" exitCode=0 Jan 09 01:36:25 crc kubenswrapper[4945]: I0109 01:36:25.780961 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tn6qx" event={"ID":"d55216b8-bd10-4c99-8756-c948d4f45037","Type":"ContainerDied","Data":"efa6d6386cc4990f01da64dc9b19b2c0887585949b59634ac3e1991bfc8c3ddf"} Jan 09 01:36:26 crc kubenswrapper[4945]: I0109 01:36:26.799618 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tn6qx" event={"ID":"d55216b8-bd10-4c99-8756-c948d4f45037","Type":"ContainerStarted","Data":"583fce52d6dbdc15f10728d9f1778a14615863009fc0722bc00b2dd913f08a91"} Jan 09 01:36:31 crc kubenswrapper[4945]: I0109 01:36:31.966324 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tn6qx" Jan 09 01:36:31 crc kubenswrapper[4945]: I0109 01:36:31.967135 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tn6qx" Jan 09 01:36:32 crc kubenswrapper[4945]: I0109 01:36:32.034827 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tn6qx" Jan 09 01:36:32 crc kubenswrapper[4945]: I0109 01:36:32.061736 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tn6qx" podStartSLOduration=8.362963177 podStartE2EDuration="11.061712342s" podCreationTimestamp="2026-01-09 01:36:21 +0000 UTC" firstStartedPulling="2026-01-09 01:36:23.750949892 +0000 UTC m=+8454.062108828" lastFinishedPulling="2026-01-09 01:36:26.449699037 +0000 UTC m=+8456.760857993" observedRunningTime="2026-01-09 
Jan 09 01:36:32 crc kubenswrapper[4945]: I0109 01:36:32.929311 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tn6qx"
Jan 09 01:36:32 crc kubenswrapper[4945]: I0109 01:36:32.987345 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tn6qx"]
Jan 09 01:36:34 crc kubenswrapper[4945]: I0109 01:36:34.906705 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tn6qx" podUID="d55216b8-bd10-4c99-8756-c948d4f45037" containerName="registry-server" containerID="cri-o://583fce52d6dbdc15f10728d9f1778a14615863009fc0722bc00b2dd913f08a91" gracePeriod=2
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.472462 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tn6qx"
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.603713 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55216b8-bd10-4c99-8756-c948d4f45037-utilities\") pod \"d55216b8-bd10-4c99-8756-c948d4f45037\" (UID: \"d55216b8-bd10-4c99-8756-c948d4f45037\") "
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.603963 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55216b8-bd10-4c99-8756-c948d4f45037-catalog-content\") pod \"d55216b8-bd10-4c99-8756-c948d4f45037\" (UID: \"d55216b8-bd10-4c99-8756-c948d4f45037\") "
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.604021 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn6xg\" (UniqueName: \"kubernetes.io/projected/d55216b8-bd10-4c99-8756-c948d4f45037-kube-api-access-hn6xg\") pod \"d55216b8-bd10-4c99-8756-c948d4f45037\" (UID: \"d55216b8-bd10-4c99-8756-c948d4f45037\") "
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.605877 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d55216b8-bd10-4c99-8756-c948d4f45037-utilities" (OuterVolumeSpecName: "utilities") pod "d55216b8-bd10-4c99-8756-c948d4f45037" (UID: "d55216b8-bd10-4c99-8756-c948d4f45037"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.618875 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d55216b8-bd10-4c99-8756-c948d4f45037-kube-api-access-hn6xg" (OuterVolumeSpecName: "kube-api-access-hn6xg") pod "d55216b8-bd10-4c99-8756-c948d4f45037" (UID: "d55216b8-bd10-4c99-8756-c948d4f45037"). InnerVolumeSpecName "kube-api-access-hn6xg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.660617 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d55216b8-bd10-4c99-8756-c948d4f45037-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d55216b8-bd10-4c99-8756-c948d4f45037" (UID: "d55216b8-bd10-4c99-8756-c948d4f45037"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.706081 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55216b8-bd10-4c99-8756-c948d4f45037-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.706136 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn6xg\" (UniqueName: \"kubernetes.io/projected/d55216b8-bd10-4c99-8756-c948d4f45037-kube-api-access-hn6xg\") on node \"crc\" DevicePath \"\""
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.706150 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55216b8-bd10-4c99-8756-c948d4f45037-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.916766 4945 generic.go:334] "Generic (PLEG): container finished" podID="d55216b8-bd10-4c99-8756-c948d4f45037" containerID="583fce52d6dbdc15f10728d9f1778a14615863009fc0722bc00b2dd913f08a91" exitCode=0
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.916812 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tn6qx" event={"ID":"d55216b8-bd10-4c99-8756-c948d4f45037","Type":"ContainerDied","Data":"583fce52d6dbdc15f10728d9f1778a14615863009fc0722bc00b2dd913f08a91"}
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.916845 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tn6qx" event={"ID":"d55216b8-bd10-4c99-8756-c948d4f45037","Type":"ContainerDied","Data":"92eb3fdfd52e9dfce7d2d75791a220fa900581bb9c228989888429a25c0cc4f6"}
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.916870 4945 scope.go:117] "RemoveContainer" containerID="583fce52d6dbdc15f10728d9f1778a14615863009fc0722bc00b2dd913f08a91"
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.916865 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tn6qx"
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.949972 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tn6qx"]
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.952086 4945 scope.go:117] "RemoveContainer" containerID="efa6d6386cc4990f01da64dc9b19b2c0887585949b59634ac3e1991bfc8c3ddf"
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.958784 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tn6qx"]
Jan 09 01:36:35 crc kubenswrapper[4945]: I0109 01:36:35.972704 4945 scope.go:117] "RemoveContainer" containerID="c13f2500dd1b1ec79a5edaebd9a02ee68b79bef4e7da5e7fc11a29cb98bdf442"
Jan 09 01:36:36 crc kubenswrapper[4945]: I0109 01:36:36.015797 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d55216b8-bd10-4c99-8756-c948d4f45037" path="/var/lib/kubelet/pods/d55216b8-bd10-4c99-8756-c948d4f45037/volumes"
Jan 09 01:36:36 crc kubenswrapper[4945]: I0109 01:36:36.020869 4945 scope.go:117] "RemoveContainer" containerID="583fce52d6dbdc15f10728d9f1778a14615863009fc0722bc00b2dd913f08a91"
Jan 09 01:36:36 crc kubenswrapper[4945]: E0109 01:36:36.021491 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"583fce52d6dbdc15f10728d9f1778a14615863009fc0722bc00b2dd913f08a91\": container with ID starting with 583fce52d6dbdc15f10728d9f1778a14615863009fc0722bc00b2dd913f08a91 not found: ID does not exist" containerID="583fce52d6dbdc15f10728d9f1778a14615863009fc0722bc00b2dd913f08a91"
Jan 09 01:36:36 crc kubenswrapper[4945]: I0109 01:36:36.021567 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"583fce52d6dbdc15f10728d9f1778a14615863009fc0722bc00b2dd913f08a91"} err="failed to get container status \"583fce52d6dbdc15f10728d9f1778a14615863009fc0722bc00b2dd913f08a91\": rpc error: code = NotFound desc = could not find container \"583fce52d6dbdc15f10728d9f1778a14615863009fc0722bc00b2dd913f08a91\": container with ID starting with 583fce52d6dbdc15f10728d9f1778a14615863009fc0722bc00b2dd913f08a91 not found: ID does not exist"
Jan 09 01:36:36 crc kubenswrapper[4945]: I0109 01:36:36.021596 4945 scope.go:117] "RemoveContainer" containerID="efa6d6386cc4990f01da64dc9b19b2c0887585949b59634ac3e1991bfc8c3ddf"
Jan 09 01:36:36 crc kubenswrapper[4945]: E0109 01:36:36.022380 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efa6d6386cc4990f01da64dc9b19b2c0887585949b59634ac3e1991bfc8c3ddf\": container with ID starting with efa6d6386cc4990f01da64dc9b19b2c0887585949b59634ac3e1991bfc8c3ddf not found: ID does not exist" containerID="efa6d6386cc4990f01da64dc9b19b2c0887585949b59634ac3e1991bfc8c3ddf"
Jan 09 01:36:36 crc kubenswrapper[4945]: I0109 01:36:36.022402 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa6d6386cc4990f01da64dc9b19b2c0887585949b59634ac3e1991bfc8c3ddf"} err="failed to get container status \"efa6d6386cc4990f01da64dc9b19b2c0887585949b59634ac3e1991bfc8c3ddf\": rpc error: code = NotFound desc = could not find container \"efa6d6386cc4990f01da64dc9b19b2c0887585949b59634ac3e1991bfc8c3ddf\": container with ID starting with efa6d6386cc4990f01da64dc9b19b2c0887585949b59634ac3e1991bfc8c3ddf not found: ID does not exist"
Jan 09 01:36:36 crc kubenswrapper[4945]: I0109 01:36:36.022415 4945 scope.go:117] "RemoveContainer" containerID="c13f2500dd1b1ec79a5edaebd9a02ee68b79bef4e7da5e7fc11a29cb98bdf442"
Jan 09 01:36:36 crc kubenswrapper[4945]: E0109 01:36:36.022822 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c13f2500dd1b1ec79a5edaebd9a02ee68b79bef4e7da5e7fc11a29cb98bdf442\": container with ID starting with c13f2500dd1b1ec79a5edaebd9a02ee68b79bef4e7da5e7fc11a29cb98bdf442 not found: ID does not exist" containerID="c13f2500dd1b1ec79a5edaebd9a02ee68b79bef4e7da5e7fc11a29cb98bdf442"
Jan 09 01:36:36 crc kubenswrapper[4945]: I0109 01:36:36.022881 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c13f2500dd1b1ec79a5edaebd9a02ee68b79bef4e7da5e7fc11a29cb98bdf442"} err="failed to get container status \"c13f2500dd1b1ec79a5edaebd9a02ee68b79bef4e7da5e7fc11a29cb98bdf442\": rpc error: code = NotFound desc = could not find container \"c13f2500dd1b1ec79a5edaebd9a02ee68b79bef4e7da5e7fc11a29cb98bdf442\": container with ID starting with c13f2500dd1b1ec79a5edaebd9a02ee68b79bef4e7da5e7fc11a29cb98bdf442 not found: ID does not exist"
Jan 09 01:36:43 crc kubenswrapper[4945]: I0109 01:36:43.578306 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 01:36:43 crc kubenswrapper[4945]: I0109 01:36:43.578860 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 01:36:47 crc kubenswrapper[4945]: I0109 01:36:47.227110 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l67r8"]
Jan 09 01:36:47 crc kubenswrapper[4945]: E0109 01:36:47.228232 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55216b8-bd10-4c99-8756-c948d4f45037" containerName="registry-server"
Jan 09 01:36:47 crc kubenswrapper[4945]: I0109 01:36:47.228252 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55216b8-bd10-4c99-8756-c948d4f45037" containerName="registry-server"
Jan 09 01:36:47 crc kubenswrapper[4945]: E0109 01:36:47.228296 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55216b8-bd10-4c99-8756-c948d4f45037" containerName="extract-utilities"
Jan 09 01:36:47 crc kubenswrapper[4945]: I0109 01:36:47.228305 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55216b8-bd10-4c99-8756-c948d4f45037" containerName="extract-utilities"
Jan 09 01:36:47 crc kubenswrapper[4945]: E0109 01:36:47.228320 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55216b8-bd10-4c99-8756-c948d4f45037" containerName="extract-content"
Jan 09 01:36:47 crc kubenswrapper[4945]: I0109 01:36:47.228329 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55216b8-bd10-4c99-8756-c948d4f45037" containerName="extract-content"
Jan 09 01:36:47 crc kubenswrapper[4945]: I0109 01:36:47.228754 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="d55216b8-bd10-4c99-8756-c948d4f45037" containerName="registry-server"
Jan 09 01:36:47 crc kubenswrapper[4945]: I0109 01:36:47.237902 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l67r8"
Jan 09 01:36:47 crc kubenswrapper[4945]: I0109 01:36:47.277870 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l67r8"]
Jan 09 01:36:47 crc kubenswrapper[4945]: I0109 01:36:47.295910 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0720de4-fccc-46ec-aa2f-453a68fb5ade-utilities\") pod \"redhat-marketplace-l67r8\" (UID: \"e0720de4-fccc-46ec-aa2f-453a68fb5ade\") " pod="openshift-marketplace/redhat-marketplace-l67r8"
Jan 09 01:36:47 crc kubenswrapper[4945]: I0109 01:36:47.296084 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48hl9\" (UniqueName: \"kubernetes.io/projected/e0720de4-fccc-46ec-aa2f-453a68fb5ade-kube-api-access-48hl9\") pod \"redhat-marketplace-l67r8\" (UID: \"e0720de4-fccc-46ec-aa2f-453a68fb5ade\") " pod="openshift-marketplace/redhat-marketplace-l67r8"
Jan 09 01:36:47 crc kubenswrapper[4945]: I0109 01:36:47.296358 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0720de4-fccc-46ec-aa2f-453a68fb5ade-catalog-content\") pod \"redhat-marketplace-l67r8\" (UID: \"e0720de4-fccc-46ec-aa2f-453a68fb5ade\") " pod="openshift-marketplace/redhat-marketplace-l67r8"
Jan 09 01:36:47 crc kubenswrapper[4945]: I0109 01:36:47.398774 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48hl9\" (UniqueName: \"kubernetes.io/projected/e0720de4-fccc-46ec-aa2f-453a68fb5ade-kube-api-access-48hl9\") pod \"redhat-marketplace-l67r8\" (UID: \"e0720de4-fccc-46ec-aa2f-453a68fb5ade\") " pod="openshift-marketplace/redhat-marketplace-l67r8"
Jan 09 01:36:47 crc kubenswrapper[4945]: I0109 01:36:47.398891 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0720de4-fccc-46ec-aa2f-453a68fb5ade-catalog-content\") pod \"redhat-marketplace-l67r8\" (UID: \"e0720de4-fccc-46ec-aa2f-453a68fb5ade\") " pod="openshift-marketplace/redhat-marketplace-l67r8"
Jan 09 01:36:47 crc kubenswrapper[4945]: I0109 01:36:47.399065 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0720de4-fccc-46ec-aa2f-453a68fb5ade-utilities\") pod \"redhat-marketplace-l67r8\" (UID: \"e0720de4-fccc-46ec-aa2f-453a68fb5ade\") " pod="openshift-marketplace/redhat-marketplace-l67r8"
Jan 09 01:36:47 crc kubenswrapper[4945]: I0109 01:36:47.399443 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0720de4-fccc-46ec-aa2f-453a68fb5ade-catalog-content\") pod \"redhat-marketplace-l67r8\" (UID: \"e0720de4-fccc-46ec-aa2f-453a68fb5ade\") " pod="openshift-marketplace/redhat-marketplace-l67r8"
Jan 09 01:36:47 crc kubenswrapper[4945]: I0109 01:36:47.399542 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0720de4-fccc-46ec-aa2f-453a68fb5ade-utilities\") pod \"redhat-marketplace-l67r8\" (UID: \"e0720de4-fccc-46ec-aa2f-453a68fb5ade\") " pod="openshift-marketplace/redhat-marketplace-l67r8"
Jan 09 01:36:47 crc kubenswrapper[4945]: I0109 01:36:47.432254 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48hl9\" (UniqueName: \"kubernetes.io/projected/e0720de4-fccc-46ec-aa2f-453a68fb5ade-kube-api-access-48hl9\") pod \"redhat-marketplace-l67r8\" (UID: \"e0720de4-fccc-46ec-aa2f-453a68fb5ade\") " pod="openshift-marketplace/redhat-marketplace-l67r8"
Jan 09 01:36:47 crc kubenswrapper[4945]: I0109 01:36:47.585095 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l67r8"
Jan 09 01:36:48 crc kubenswrapper[4945]: I0109 01:36:48.146847 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l67r8"]
Jan 09 01:36:48 crc kubenswrapper[4945]: W0109 01:36:48.166302 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0720de4_fccc_46ec_aa2f_453a68fb5ade.slice/crio-a9d7177c0903b00c25b954a037849dee99e578c9dd98636d605393ce8d51dda6 WatchSource:0}: Error finding container a9d7177c0903b00c25b954a037849dee99e578c9dd98636d605393ce8d51dda6: Status 404 returned error can't find the container with id a9d7177c0903b00c25b954a037849dee99e578c9dd98636d605393ce8d51dda6
Jan 09 01:36:49 crc kubenswrapper[4945]: I0109 01:36:49.079634 4945 generic.go:334] "Generic (PLEG): container finished" podID="e0720de4-fccc-46ec-aa2f-453a68fb5ade" containerID="134ee91d62ffec1ec1b7f1b649f36de1a0f81d86ec9d79759a132c58cb5da48b" exitCode=0
Jan 09 01:36:49 crc kubenswrapper[4945]: I0109 01:36:49.079738 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l67r8" event={"ID":"e0720de4-fccc-46ec-aa2f-453a68fb5ade","Type":"ContainerDied","Data":"134ee91d62ffec1ec1b7f1b649f36de1a0f81d86ec9d79759a132c58cb5da48b"}
Jan 09 01:36:49 crc kubenswrapper[4945]: I0109 01:36:49.080049 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l67r8" event={"ID":"e0720de4-fccc-46ec-aa2f-453a68fb5ade","Type":"ContainerStarted","Data":"a9d7177c0903b00c25b954a037849dee99e578c9dd98636d605393ce8d51dda6"}
Jan 09 01:36:51 crc kubenswrapper[4945]: I0109 01:36:51.100189 4945 generic.go:334] "Generic (PLEG): container finished" podID="e0720de4-fccc-46ec-aa2f-453a68fb5ade" containerID="cee2031e4fe6b2c5c21fe4a5b99666058df3699c0557bce03be2038b9e54f276" exitCode=0
Jan 09 01:36:51 crc kubenswrapper[4945]: I0109 01:36:51.100259 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l67r8" event={"ID":"e0720de4-fccc-46ec-aa2f-453a68fb5ade","Type":"ContainerDied","Data":"cee2031e4fe6b2c5c21fe4a5b99666058df3699c0557bce03be2038b9e54f276"}
Jan 09 01:36:52 crc kubenswrapper[4945]: I0109 01:36:52.123169 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l67r8" event={"ID":"e0720de4-fccc-46ec-aa2f-453a68fb5ade","Type":"ContainerStarted","Data":"bbf2bfb6d18555a519f47e28fefb4c7de78043d3f9dc27be0db912b81613ebbd"}
Jan 09 01:36:52 crc kubenswrapper[4945]: I0109 01:36:52.168513 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l67r8" podStartSLOduration=2.6651622379999997 podStartE2EDuration="5.168462661s" podCreationTimestamp="2026-01-09 01:36:47 +0000 UTC" firstStartedPulling="2026-01-09 01:36:49.081587909 +0000 UTC m=+8479.392746895" lastFinishedPulling="2026-01-09 01:36:51.584888372 +0000 UTC m=+8481.896047318" observedRunningTime="2026-01-09 01:36:52.155675805 +0000 UTC m=+8482.466834761" watchObservedRunningTime="2026-01-09 01:36:52.168462661 +0000 UTC m=+8482.479621607"
Jan 09 01:36:57 crc kubenswrapper[4945]: I0109 01:36:57.585255 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l67r8"
Jan 09 01:36:57 crc kubenswrapper[4945]: I0109 01:36:57.586091 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l67r8"
Jan 09 01:36:57 crc kubenswrapper[4945]: I0109 01:36:57.668740 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l67r8"
Jan 09 01:36:58 crc kubenswrapper[4945]: I0109 01:36:58.272945 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l67r8"
Jan 09 01:36:58 crc kubenswrapper[4945]: I0109 01:36:58.336985 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l67r8"]
Jan 09 01:37:00 crc kubenswrapper[4945]: I0109 01:37:00.210138 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l67r8" podUID="e0720de4-fccc-46ec-aa2f-453a68fb5ade" containerName="registry-server" containerID="cri-o://bbf2bfb6d18555a519f47e28fefb4c7de78043d3f9dc27be0db912b81613ebbd" gracePeriod=2
Jan 09 01:37:00 crc kubenswrapper[4945]: I0109 01:37:00.827134 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l67r8"
Jan 09 01:37:00 crc kubenswrapper[4945]: I0109 01:37:00.948292 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0720de4-fccc-46ec-aa2f-453a68fb5ade-catalog-content\") pod \"e0720de4-fccc-46ec-aa2f-453a68fb5ade\" (UID: \"e0720de4-fccc-46ec-aa2f-453a68fb5ade\") "
Jan 09 01:37:00 crc kubenswrapper[4945]: I0109 01:37:00.948478 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0720de4-fccc-46ec-aa2f-453a68fb5ade-utilities\") pod \"e0720de4-fccc-46ec-aa2f-453a68fb5ade\" (UID: \"e0720de4-fccc-46ec-aa2f-453a68fb5ade\") "
Jan 09 01:37:00 crc kubenswrapper[4945]: I0109 01:37:00.948546 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48hl9\" (UniqueName: \"kubernetes.io/projected/e0720de4-fccc-46ec-aa2f-453a68fb5ade-kube-api-access-48hl9\") pod \"e0720de4-fccc-46ec-aa2f-453a68fb5ade\" (UID: \"e0720de4-fccc-46ec-aa2f-453a68fb5ade\") "
Jan 09 01:37:00 crc kubenswrapper[4945]: I0109 01:37:00.949670 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0720de4-fccc-46ec-aa2f-453a68fb5ade-utilities" (OuterVolumeSpecName: "utilities") pod "e0720de4-fccc-46ec-aa2f-453a68fb5ade" (UID: "e0720de4-fccc-46ec-aa2f-453a68fb5ade"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:37:00 crc kubenswrapper[4945]: I0109 01:37:00.958387 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0720de4-fccc-46ec-aa2f-453a68fb5ade-kube-api-access-48hl9" (OuterVolumeSpecName: "kube-api-access-48hl9") pod "e0720de4-fccc-46ec-aa2f-453a68fb5ade" (UID: "e0720de4-fccc-46ec-aa2f-453a68fb5ade"). InnerVolumeSpecName "kube-api-access-48hl9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:37:00 crc kubenswrapper[4945]: I0109 01:37:00.969949 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0720de4-fccc-46ec-aa2f-453a68fb5ade-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0720de4-fccc-46ec-aa2f-453a68fb5ade" (UID: "e0720de4-fccc-46ec-aa2f-453a68fb5ade"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.052037 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0720de4-fccc-46ec-aa2f-453a68fb5ade-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.052120 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0720de4-fccc-46ec-aa2f-453a68fb5ade-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.052166 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48hl9\" (UniqueName: \"kubernetes.io/projected/e0720de4-fccc-46ec-aa2f-453a68fb5ade-kube-api-access-48hl9\") on node \"crc\" DevicePath \"\""
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.225985 4945 generic.go:334] "Generic (PLEG): container finished" podID="e0720de4-fccc-46ec-aa2f-453a68fb5ade" containerID="bbf2bfb6d18555a519f47e28fefb4c7de78043d3f9dc27be0db912b81613ebbd" exitCode=0
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.226082 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l67r8"
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.226106 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l67r8" event={"ID":"e0720de4-fccc-46ec-aa2f-453a68fb5ade","Type":"ContainerDied","Data":"bbf2bfb6d18555a519f47e28fefb4c7de78043d3f9dc27be0db912b81613ebbd"}
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.226180 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l67r8" event={"ID":"e0720de4-fccc-46ec-aa2f-453a68fb5ade","Type":"ContainerDied","Data":"a9d7177c0903b00c25b954a037849dee99e578c9dd98636d605393ce8d51dda6"}
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.226223 4945 scope.go:117] "RemoveContainer" containerID="bbf2bfb6d18555a519f47e28fefb4c7de78043d3f9dc27be0db912b81613ebbd"
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.279577 4945 scope.go:117] "RemoveContainer" containerID="cee2031e4fe6b2c5c21fe4a5b99666058df3699c0557bce03be2038b9e54f276"
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.287680 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l67r8"]
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.303914 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l67r8"]
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.309919 4945 scope.go:117] "RemoveContainer" containerID="134ee91d62ffec1ec1b7f1b649f36de1a0f81d86ec9d79759a132c58cb5da48b"
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.357962 4945 scope.go:117] "RemoveContainer" containerID="bbf2bfb6d18555a519f47e28fefb4c7de78043d3f9dc27be0db912b81613ebbd"
Jan 09 01:37:01 crc kubenswrapper[4945]: E0109 01:37:01.358654 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbf2bfb6d18555a519f47e28fefb4c7de78043d3f9dc27be0db912b81613ebbd\": container with ID starting with bbf2bfb6d18555a519f47e28fefb4c7de78043d3f9dc27be0db912b81613ebbd not found: ID does not exist" containerID="bbf2bfb6d18555a519f47e28fefb4c7de78043d3f9dc27be0db912b81613ebbd"
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.358715 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf2bfb6d18555a519f47e28fefb4c7de78043d3f9dc27be0db912b81613ebbd"} err="failed to get container status \"bbf2bfb6d18555a519f47e28fefb4c7de78043d3f9dc27be0db912b81613ebbd\": rpc error: code = NotFound desc = could not find container \"bbf2bfb6d18555a519f47e28fefb4c7de78043d3f9dc27be0db912b81613ebbd\": container with ID starting with bbf2bfb6d18555a519f47e28fefb4c7de78043d3f9dc27be0db912b81613ebbd not found: ID does not exist"
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.358743 4945 scope.go:117] "RemoveContainer" containerID="cee2031e4fe6b2c5c21fe4a5b99666058df3699c0557bce03be2038b9e54f276"
Jan 09 01:37:01 crc kubenswrapper[4945]: E0109 01:37:01.367157 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cee2031e4fe6b2c5c21fe4a5b99666058df3699c0557bce03be2038b9e54f276\": container with ID starting with cee2031e4fe6b2c5c21fe4a5b99666058df3699c0557bce03be2038b9e54f276 not found: ID does not exist" containerID="cee2031e4fe6b2c5c21fe4a5b99666058df3699c0557bce03be2038b9e54f276"
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.367196 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cee2031e4fe6b2c5c21fe4a5b99666058df3699c0557bce03be2038b9e54f276"} err="failed to get container status \"cee2031e4fe6b2c5c21fe4a5b99666058df3699c0557bce03be2038b9e54f276\": rpc error: code = NotFound desc = could not find container \"cee2031e4fe6b2c5c21fe4a5b99666058df3699c0557bce03be2038b9e54f276\": container with ID starting with cee2031e4fe6b2c5c21fe4a5b99666058df3699c0557bce03be2038b9e54f276 not found: ID does not exist"
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.367216 4945 scope.go:117] "RemoveContainer" containerID="134ee91d62ffec1ec1b7f1b649f36de1a0f81d86ec9d79759a132c58cb5da48b"
Jan 09 01:37:01 crc kubenswrapper[4945]: E0109 01:37:01.367766 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"134ee91d62ffec1ec1b7f1b649f36de1a0f81d86ec9d79759a132c58cb5da48b\": container with ID starting with 134ee91d62ffec1ec1b7f1b649f36de1a0f81d86ec9d79759a132c58cb5da48b not found: ID does not exist" containerID="134ee91d62ffec1ec1b7f1b649f36de1a0f81d86ec9d79759a132c58cb5da48b"
Jan 09 01:37:01 crc kubenswrapper[4945]: I0109 01:37:01.367790 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134ee91d62ffec1ec1b7f1b649f36de1a0f81d86ec9d79759a132c58cb5da48b"} err="failed to get container status \"134ee91d62ffec1ec1b7f1b649f36de1a0f81d86ec9d79759a132c58cb5da48b\": rpc error: code = NotFound desc = could not find container \"134ee91d62ffec1ec1b7f1b649f36de1a0f81d86ec9d79759a132c58cb5da48b\": container with ID starting with 134ee91d62ffec1ec1b7f1b649f36de1a0f81d86ec9d79759a132c58cb5da48b not found: ID does not exist"
Jan 09 01:37:02 crc kubenswrapper[4945]: I0109 01:37:02.014368 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0720de4-fccc-46ec-aa2f-453a68fb5ade" path="/var/lib/kubelet/pods/e0720de4-fccc-46ec-aa2f-453a68fb5ade/volumes"
Jan 09 01:37:13 crc kubenswrapper[4945]: I0109 01:37:13.578255 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 01:37:13 crc kubenswrapper[4945]: I0109 01:37:13.578888 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 01:37:13 crc kubenswrapper[4945]: I0109 01:37:13.578952 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95"
Jan 09 01:37:13 crc kubenswrapper[4945]: I0109 01:37:13.580185 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 01:37:13 crc kubenswrapper[4945]: I0109 01:37:13.580288 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" gracePeriod=600
Jan 09 01:37:13 crc kubenswrapper[4945]: E0109 01:37:13.700553 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:37:14 crc kubenswrapper[4945]: I0109 01:37:14.369112 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" exitCode=0
Jan 09 01:37:14 crc kubenswrapper[4945]: I0109 01:37:14.369230 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494"}
Jan 09 01:37:14 crc kubenswrapper[4945]: I0109 01:37:14.369834 4945 scope.go:117] "RemoveContainer" containerID="7fe3b7320ae35bd5d345f464dc00e0ed721cd2e8db3afcf5b5adff71c7920a53"
Jan 09 01:37:14 crc kubenswrapper[4945]: I0109 01:37:14.370859 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494"
Jan 09 01:37:14 crc kubenswrapper[4945]: E0109 01:37:14.371510 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:37:23 crc kubenswrapper[4945]: I0109 01:37:23.463518 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bzbww"]
Jan 09 01:37:23 crc kubenswrapper[4945]: E0109 01:37:23.464450 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0720de4-fccc-46ec-aa2f-453a68fb5ade" containerName="extract-content"
Jan 09 01:37:23 crc kubenswrapper[4945]: I0109 01:37:23.464464 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0720de4-fccc-46ec-aa2f-453a68fb5ade" containerName="extract-content"
Jan 09 01:37:23 crc kubenswrapper[4945]: E0109 01:37:23.464491 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0720de4-fccc-46ec-aa2f-453a68fb5ade" containerName="extract-utilities"
Jan 09 01:37:23 crc kubenswrapper[4945]: I0109 01:37:23.464497 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0720de4-fccc-46ec-aa2f-453a68fb5ade" containerName="extract-utilities"
Jan 09 01:37:23 crc kubenswrapper[4945]: E0109 01:37:23.464521 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0720de4-fccc-46ec-aa2f-453a68fb5ade" containerName="registry-server"
Jan 09 01:37:23 crc kubenswrapper[4945]: I0109 01:37:23.464527 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0720de4-fccc-46ec-aa2f-453a68fb5ade" containerName="registry-server"
Jan 09 01:37:23 crc kubenswrapper[4945]: I0109 01:37:23.464722 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0720de4-fccc-46ec-aa2f-453a68fb5ade" containerName="registry-server"
Jan 09 01:37:23 crc kubenswrapper[4945]: I0109 01:37:23.466185 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bzbww"
Jan 09 01:37:23 crc kubenswrapper[4945]: I0109 01:37:23.522731 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bzbww"]
Jan 09 01:37:23 crc kubenswrapper[4945]: I0109 01:37:23.558156 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/def4b1da-2187-4d51-be0d-ae39bd2a7b30-utilities\") pod \"certified-operators-bzbww\" (UID: \"def4b1da-2187-4d51-be0d-ae39bd2a7b30\") " pod="openshift-marketplace/certified-operators-bzbww"
Jan 09 01:37:23 crc kubenswrapper[4945]: I0109 01:37:23.558236 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9skr\" (UniqueName: \"kubernetes.io/projected/def4b1da-2187-4d51-be0d-ae39bd2a7b30-kube-api-access-w9skr\") pod \"certified-operators-bzbww\" (UID: \"def4b1da-2187-4d51-be0d-ae39bd2a7b30\") " pod="openshift-marketplace/certified-operators-bzbww"
Jan 09 01:37:23 crc kubenswrapper[4945]: I0109 01:37:23.558315 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/def4b1da-2187-4d51-be0d-ae39bd2a7b30-catalog-content\") pod \"certified-operators-bzbww\" (UID: \"def4b1da-2187-4d51-be0d-ae39bd2a7b30\") " pod="openshift-marketplace/certified-operators-bzbww"
Jan 09 01:37:23 crc kubenswrapper[4945]: I0109 01:37:23.660948 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/def4b1da-2187-4d51-be0d-ae39bd2a7b30-utilities\") pod \"certified-operators-bzbww\" (UID: \"def4b1da-2187-4d51-be0d-ae39bd2a7b30\") " pod="openshift-marketplace/certified-operators-bzbww"
Jan 09 01:37:23 crc kubenswrapper[4945]: I0109 01:37:23.661285 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9skr\" (UniqueName: \"kubernetes.io/projected/def4b1da-2187-4d51-be0d-ae39bd2a7b30-kube-api-access-w9skr\") pod \"certified-operators-bzbww\" (UID: \"def4b1da-2187-4d51-be0d-ae39bd2a7b30\") " pod="openshift-marketplace/certified-operators-bzbww"
Jan 09 01:37:23 crc kubenswrapper[4945]: I0109 01:37:23.661329 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/def4b1da-2187-4d51-be0d-ae39bd2a7b30-catalog-content\") pod \"certified-operators-bzbww\" (UID: \"def4b1da-2187-4d51-be0d-ae39bd2a7b30\") " pod="openshift-marketplace/certified-operators-bzbww"
Jan 09 01:37:23 crc kubenswrapper[4945]: I0109 01:37:23.661348 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/def4b1da-2187-4d51-be0d-ae39bd2a7b30-utilities\") pod \"certified-operators-bzbww\" (UID: \"def4b1da-2187-4d51-be0d-ae39bd2a7b30\") " pod="openshift-marketplace/certified-operators-bzbww"
Jan 09 01:37:23 crc kubenswrapper[4945]: I0109 01:37:23.661593 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/def4b1da-2187-4d51-be0d-ae39bd2a7b30-catalog-content\") pod \"certified-operators-bzbww\" (UID: \"def4b1da-2187-4d51-be0d-ae39bd2a7b30\") " pod="openshift-marketplace/certified-operators-bzbww"
Jan 09 01:37:23 crc kubenswrapper[4945]: I0109 01:37:23.680919 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9skr\" (UniqueName: \"kubernetes.io/projected/def4b1da-2187-4d51-be0d-ae39bd2a7b30-kube-api-access-w9skr\") pod \"certified-operators-bzbww\" (UID: \"def4b1da-2187-4d51-be0d-ae39bd2a7b30\") " pod="openshift-marketplace/certified-operators-bzbww"
Jan 09 01:37:23 crc kubenswrapper[4945]: I0109 01:37:23.835533 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bzbww"
Jan 09 01:37:24 crc kubenswrapper[4945]: I0109 01:37:24.395302 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bzbww"]
Jan 09 01:37:24 crc kubenswrapper[4945]: I0109 01:37:24.490442 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzbww" event={"ID":"def4b1da-2187-4d51-be0d-ae39bd2a7b30","Type":"ContainerStarted","Data":"567236c56585bb21f86b539780a662ebef751a8dcb48f4a8daad8257f2019be4"}
Jan 09 01:37:25 crc kubenswrapper[4945]: I0109 01:37:25.000038 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494"
Jan 09 01:37:25 crc kubenswrapper[4945]: E0109 01:37:25.000399 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:37:25 crc kubenswrapper[4945]: I0109 01:37:25.505611 4945 generic.go:334] "Generic (PLEG): container finished" podID="def4b1da-2187-4d51-be0d-ae39bd2a7b30" containerID="3687de4e293aa165e691c45550b968ef4a8401796499121b0fba90a09dc759d0" exitCode=0
Jan 09 01:37:25 crc kubenswrapper[4945]: I0109 01:37:25.506082 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzbww" event={"ID":"def4b1da-2187-4d51-be0d-ae39bd2a7b30","Type":"ContainerDied","Data":"3687de4e293aa165e691c45550b968ef4a8401796499121b0fba90a09dc759d0"}
Jan 09 01:37:30 crc kubenswrapper[4945]: I0109 01:37:30.599241 4945 generic.go:334] "Generic (PLEG): container finished" podID="def4b1da-2187-4d51-be0d-ae39bd2a7b30" containerID="be130bd505dfa425b907fb48d70e96a56492195ece8a0d798e3e0fb91f25a6cd" exitCode=0
Jan 09 01:37:30 crc kubenswrapper[4945]: I0109 01:37:30.599348 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzbww" event={"ID":"def4b1da-2187-4d51-be0d-ae39bd2a7b30","Type":"ContainerDied","Data":"be130bd505dfa425b907fb48d70e96a56492195ece8a0d798e3e0fb91f25a6cd"}
Jan 09 01:37:31 crc kubenswrapper[4945]: I0109 01:37:31.616862 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzbww" event={"ID":"def4b1da-2187-4d51-be0d-ae39bd2a7b30","Type":"ContainerStarted","Data":"bdb569913e326b88a04adcf626c4cf5303b7861e88db6ea3ab2ae48c4a1d9eca"}
Jan 09 01:37:31 crc kubenswrapper[4945]: I0109 01:37:31.651255 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bzbww" podStartSLOduration=3.120026811 podStartE2EDuration="8.65122678s" podCreationTimestamp="2026-01-09 01:37:23 +0000 UTC" firstStartedPulling="2026-01-09 01:37:25.50800969 +0000 UTC m=+8515.819168636" lastFinishedPulling="2026-01-09 01:37:31.039209659 +0000 UTC m=+8521.350368605" observedRunningTime="2026-01-09 01:37:31.636209189 +0000 UTC m=+8521.947368135" watchObservedRunningTime="2026-01-09 01:37:31.65122678 +0000 UTC m=+8521.962385736"
Jan 09 01:37:33 crc kubenswrapper[4945]: I0109 01:37:33.835746 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bzbww"
Jan 09 01:37:33 crc kubenswrapper[4945]: I0109 01:37:33.837115 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bzbww"
Jan 09 01:37:33 crc kubenswrapper[4945]: I0109 01:37:33.886953 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bzbww"
Jan 09 01:37:40 crc kubenswrapper[4945]: I0109 01:37:40.010871 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494"
Jan 09 01:37:40 crc kubenswrapper[4945]: E0109 01:37:40.011840 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:37:43 crc kubenswrapper[4945]: I0109 01:37:43.927597 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bzbww"
Jan 09 01:37:43 crc kubenswrapper[4945]: I0109 01:37:43.996515 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bzbww"]
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.078206 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tv4b6"]
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.078523 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tv4b6" podUID="1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f" containerName="registry-server" containerID="cri-o://f51c7c39d2c6b6e4372c8857de35f4abe4229e796dfc3a6d9324d7b59017cf6d" gracePeriod=2
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.720813 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tv4b6"
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.744776 4945 generic.go:334] "Generic (PLEG): container finished" podID="1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f" containerID="f51c7c39d2c6b6e4372c8857de35f4abe4229e796dfc3a6d9324d7b59017cf6d" exitCode=0
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.745885 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tv4b6"
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.746411 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv4b6" event={"ID":"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f","Type":"ContainerDied","Data":"f51c7c39d2c6b6e4372c8857de35f4abe4229e796dfc3a6d9324d7b59017cf6d"}
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.746472 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv4b6" event={"ID":"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f","Type":"ContainerDied","Data":"a7ff6884284f2cbbba84cfe22e333869ae93c94cdd8e0b50a0c294c0b8085774"}
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.746490 4945 scope.go:117] "RemoveContainer" containerID="f51c7c39d2c6b6e4372c8857de35f4abe4229e796dfc3a6d9324d7b59017cf6d"
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.814032 4945 scope.go:117] "RemoveContainer" containerID="b0be8d6b7d24156ee0298b713b4515b6ed2bea4cf5ffd924106d5d7cf79301a1"
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.851107 4945 scope.go:117] "RemoveContainer" containerID="6967539e31b7e0b6cb6537a47b42747e576e7127798585eff52c368aeb9fe36f"
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.886086 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-utilities\") pod \"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f\" (UID: \"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f\") "
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.886136 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-catalog-content\") pod \"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f\" (UID: \"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f\") "
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.886185 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gklb\" (UniqueName: \"kubernetes.io/projected/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-kube-api-access-7gklb\") pod \"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f\" (UID: \"1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f\") "
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.888685 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-utilities" (OuterVolumeSpecName: "utilities") pod "1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f" (UID: "1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.898491 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-kube-api-access-7gklb" (OuterVolumeSpecName: "kube-api-access-7gklb") pod "1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f" (UID: "1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f"). InnerVolumeSpecName "kube-api-access-7gklb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.909479 4945 scope.go:117] "RemoveContainer" containerID="f51c7c39d2c6b6e4372c8857de35f4abe4229e796dfc3a6d9324d7b59017cf6d"
Jan 09 01:37:44 crc kubenswrapper[4945]: E0109 01:37:44.910813 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f51c7c39d2c6b6e4372c8857de35f4abe4229e796dfc3a6d9324d7b59017cf6d\": container with ID starting with f51c7c39d2c6b6e4372c8857de35f4abe4229e796dfc3a6d9324d7b59017cf6d not found: ID does not exist" containerID="f51c7c39d2c6b6e4372c8857de35f4abe4229e796dfc3a6d9324d7b59017cf6d"
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.910843 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f51c7c39d2c6b6e4372c8857de35f4abe4229e796dfc3a6d9324d7b59017cf6d"} err="failed to get container status \"f51c7c39d2c6b6e4372c8857de35f4abe4229e796dfc3a6d9324d7b59017cf6d\": rpc error: code = NotFound desc = could not find container \"f51c7c39d2c6b6e4372c8857de35f4abe4229e796dfc3a6d9324d7b59017cf6d\": container with ID starting with f51c7c39d2c6b6e4372c8857de35f4abe4229e796dfc3a6d9324d7b59017cf6d not found: ID does not exist"
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.910866 4945 scope.go:117] "RemoveContainer" containerID="b0be8d6b7d24156ee0298b713b4515b6ed2bea4cf5ffd924106d5d7cf79301a1"
Jan 09 01:37:44 crc kubenswrapper[4945]: E0109 01:37:44.911250 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0be8d6b7d24156ee0298b713b4515b6ed2bea4cf5ffd924106d5d7cf79301a1\": container with ID starting with b0be8d6b7d24156ee0298b713b4515b6ed2bea4cf5ffd924106d5d7cf79301a1 not found: ID does not exist" containerID="b0be8d6b7d24156ee0298b713b4515b6ed2bea4cf5ffd924106d5d7cf79301a1"
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.911297 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0be8d6b7d24156ee0298b713b4515b6ed2bea4cf5ffd924106d5d7cf79301a1"} err="failed to get container status \"b0be8d6b7d24156ee0298b713b4515b6ed2bea4cf5ffd924106d5d7cf79301a1\": rpc error: code = NotFound desc = could not find container \"b0be8d6b7d24156ee0298b713b4515b6ed2bea4cf5ffd924106d5d7cf79301a1\": container with ID starting with b0be8d6b7d24156ee0298b713b4515b6ed2bea4cf5ffd924106d5d7cf79301a1 not found: ID does not exist"
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.911330 4945 scope.go:117] "RemoveContainer" containerID="6967539e31b7e0b6cb6537a47b42747e576e7127798585eff52c368aeb9fe36f"
Jan 09 01:37:44 crc kubenswrapper[4945]: E0109 01:37:44.911705 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6967539e31b7e0b6cb6537a47b42747e576e7127798585eff52c368aeb9fe36f\": container with ID starting with 6967539e31b7e0b6cb6537a47b42747e576e7127798585eff52c368aeb9fe36f not found: ID does not exist" containerID="6967539e31b7e0b6cb6537a47b42747e576e7127798585eff52c368aeb9fe36f"
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.911728 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6967539e31b7e0b6cb6537a47b42747e576e7127798585eff52c368aeb9fe36f"} err="failed to get container status \"6967539e31b7e0b6cb6537a47b42747e576e7127798585eff52c368aeb9fe36f\": rpc error: code = NotFound desc = could not find container \"6967539e31b7e0b6cb6537a47b42747e576e7127798585eff52c368aeb9fe36f\": container with ID starting with 6967539e31b7e0b6cb6537a47b42747e576e7127798585eff52c368aeb9fe36f not found: ID does not exist"
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.958876 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f" (UID: "1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.989021 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.989051 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 01:37:44 crc kubenswrapper[4945]: I0109 01:37:44.989061 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gklb\" (UniqueName: \"kubernetes.io/projected/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f-kube-api-access-7gklb\") on node \"crc\" DevicePath \"\""
Jan 09 01:37:45 crc kubenswrapper[4945]: I0109 01:37:45.100061 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tv4b6"]
Jan 09 01:37:45 crc kubenswrapper[4945]: I0109 01:37:45.113049 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tv4b6"]
Jan 09 01:37:46 crc kubenswrapper[4945]: I0109 01:37:46.013371 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f" path="/var/lib/kubelet/pods/1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f/volumes"
Jan 09 01:37:52 crc kubenswrapper[4945]: I0109 01:37:52.000534 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494"
Jan 09 01:37:52 crc kubenswrapper[4945]: E0109 01:37:52.001410 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:37:58 crc kubenswrapper[4945]: I0109 01:37:58.889984 4945 generic.go:334] "Generic (PLEG): container finished" podID="5a0cc78a-1390-4815-9335-e9f030e50d32" containerID="bd79166e90bfd5d75605372f2c47c41d860076f15f68a1864e750c4107aa62d1" exitCode=0
Jan 09 01:37:58 crc kubenswrapper[4945]: I0109 01:37:58.890155 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" event={"ID":"5a0cc78a-1390-4815-9335-e9f030e50d32","Type":"ContainerDied","Data":"bd79166e90bfd5d75605372f2c47c41d860076f15f68a1864e750c4107aa62d1"}
Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.346281 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv"
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.354632 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cells-global-config-0\") pod \"5a0cc78a-1390-4815-9335-e9f030e50d32\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.354677 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-migration-ssh-key-1\") pod \"5a0cc78a-1390-4815-9335-e9f030e50d32\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.354748 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-inventory\") pod \"5a0cc78a-1390-4815-9335-e9f030e50d32\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.354776 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cells-global-config-1\") pod \"5a0cc78a-1390-4815-9335-e9f030e50d32\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.354834 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-compute-config-0\") pod \"5a0cc78a-1390-4815-9335-e9f030e50d32\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.354856 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdnz6\" (UniqueName: \"kubernetes.io/projected/5a0cc78a-1390-4815-9335-e9f030e50d32-kube-api-access-jdnz6\") pod \"5a0cc78a-1390-4815-9335-e9f030e50d32\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.354923 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-ceph\") pod \"5a0cc78a-1390-4815-9335-e9f030e50d32\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.355087 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-migration-ssh-key-0\") pod \"5a0cc78a-1390-4815-9335-e9f030e50d32\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.355113 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-ssh-key-openstack-cell1\") pod \"5a0cc78a-1390-4815-9335-e9f030e50d32\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.355146 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" 
(UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-compute-config-1\") pod \"5a0cc78a-1390-4815-9335-e9f030e50d32\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.355180 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-combined-ca-bundle\") pod \"5a0cc78a-1390-4815-9335-e9f030e50d32\" (UID: \"5a0cc78a-1390-4815-9335-e9f030e50d32\") " Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.386147 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-ceph" (OuterVolumeSpecName: "ceph") pod "5a0cc78a-1390-4815-9335-e9f030e50d32" (UID: "5a0cc78a-1390-4815-9335-e9f030e50d32"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.389933 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "5a0cc78a-1390-4815-9335-e9f030e50d32" (UID: "5a0cc78a-1390-4815-9335-e9f030e50d32"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.397463 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a0cc78a-1390-4815-9335-e9f030e50d32-kube-api-access-jdnz6" (OuterVolumeSpecName: "kube-api-access-jdnz6") pod "5a0cc78a-1390-4815-9335-e9f030e50d32" (UID: "5a0cc78a-1390-4815-9335-e9f030e50d32"). InnerVolumeSpecName "kube-api-access-jdnz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.401642 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-inventory" (OuterVolumeSpecName: "inventory") pod "5a0cc78a-1390-4815-9335-e9f030e50d32" (UID: "5a0cc78a-1390-4815-9335-e9f030e50d32"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.420749 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "5a0cc78a-1390-4815-9335-e9f030e50d32" (UID: "5a0cc78a-1390-4815-9335-e9f030e50d32"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.427196 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "5a0cc78a-1390-4815-9335-e9f030e50d32" (UID: "5a0cc78a-1390-4815-9335-e9f030e50d32"). InnerVolumeSpecName "nova-cells-global-config-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.430872 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "5a0cc78a-1390-4815-9335-e9f030e50d32" (UID: "5a0cc78a-1390-4815-9335-e9f030e50d32"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.435613 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "5a0cc78a-1390-4815-9335-e9f030e50d32" (UID: "5a0cc78a-1390-4815-9335-e9f030e50d32"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.439700 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "5a0cc78a-1390-4815-9335-e9f030e50d32" (UID: "5a0cc78a-1390-4815-9335-e9f030e50d32"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.440139 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "5a0cc78a-1390-4815-9335-e9f030e50d32" (UID: "5a0cc78a-1390-4815-9335-e9f030e50d32"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.448149 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "5a0cc78a-1390-4815-9335-e9f030e50d32" (UID: "5a0cc78a-1390-4815-9335-e9f030e50d32"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.456584 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.456624 4945 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.456638 4945 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.456649 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdnz6\" (UniqueName: \"kubernetes.io/projected/5a0cc78a-1390-4815-9335-e9f030e50d32-kube-api-access-jdnz6\") on node \"crc\" DevicePath \"\"" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.456661 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.456672 4945 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.456682 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.456692 4945 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.456701 4945 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.456711 4945 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.456724 4945 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a0cc78a-1390-4815-9335-e9f030e50d32-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.909016 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-dcncv" event={"ID":"5a0cc78a-1390-4815-9335-e9f030e50d32","Type":"ContainerDied","Data":"b60b0cb970c7db8eb17c1e64d17674c55ad6ff4f8aaafa3695e52d529779d773"} Jan 09 01:38:00 crc kubenswrapper[4945]: I0109 01:38:00.909055 4945 
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.031601 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-6jhb4"]
Jan 09 01:38:01 crc kubenswrapper[4945]: E0109 01:38:01.032154 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f" containerName="extract-content"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.032175 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f" containerName="extract-content"
Jan 09 01:38:01 crc kubenswrapper[4945]: E0109 01:38:01.032193 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f" containerName="registry-server"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.032202 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f" containerName="registry-server"
Jan 09 01:38:01 crc kubenswrapper[4945]: E0109 01:38:01.032256 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a0cc78a-1390-4815-9335-e9f030e50d32" containerName="nova-cell1-openstack-openstack-cell1"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.032266 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a0cc78a-1390-4815-9335-e9f030e50d32" containerName="nova-cell1-openstack-openstack-cell1"
Jan 09 01:38:01 crc kubenswrapper[4945]: E0109 01:38:01.032284 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f" containerName="extract-utilities"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.032295 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f" containerName="extract-utilities"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.032542 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a0cc78a-1390-4815-9335-e9f030e50d32" containerName="nova-cell1-openstack-openstack-cell1"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.032559 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f973ac9-31e7-4757-9d0d-82d7d8fe6c2f" containerName="registry-server"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.033525 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.036694 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.037085 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.037297 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.037316 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.037668 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.042892 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-6jhb4"]
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.090967 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.091104 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceph\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.091194 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.091333 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvqdk\" (UniqueName: \"kubernetes.io/projected/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-kube-api-access-dvqdk\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.091390 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.091542 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.091718 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-inventory\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.091772 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.193280 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.193819 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvqdk\" (UniqueName: \"kubernetes.io/projected/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-kube-api-access-dvqdk\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.193913 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.194054 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.194169 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-inventory\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.194247 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.194321 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.194425 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceph\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.199041 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceph\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.199149 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.199315 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.199325 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.199711 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ssh-key-openstack-cell1\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.199823 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.200009 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-inventory\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.213485 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvqdk\" (UniqueName: \"kubernetes.io/projected/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-kube-api-access-dvqdk\") pod \"telemetry-openstack-openstack-cell1-6jhb4\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.392787 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-6jhb4"
Jan 09 01:38:01 crc kubenswrapper[4945]: I0109 01:38:01.943115 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-6jhb4"]
Jan 09 01:38:02 crc kubenswrapper[4945]: I0109 01:38:02.929445 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-6jhb4" event={"ID":"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2","Type":"ContainerStarted","Data":"4514ac0abf61cfb3eb5efce16e3cdd8a8801e0a8ed509131e4a6a4a3803342b1"}
Jan 09 01:38:02 crc kubenswrapper[4945]: I0109 01:38:02.929762 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-6jhb4" event={"ID":"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2","Type":"ContainerStarted","Data":"e4d3dc80eb481c524ec53a2759ff1c545502c76a4aa2a24405b8a6e534be534b"}
Jan 09 01:38:02 crc kubenswrapper[4945]: I0109 01:38:02.962508 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-6jhb4" podStartSLOduration=2.740365896 podStartE2EDuration="2.962484446s" podCreationTimestamp="2026-01-09 01:38:00 +0000 UTC" firstStartedPulling="2026-01-09 01:38:01.948417476 +0000 UTC m=+8552.259576422" lastFinishedPulling="2026-01-09 01:38:02.170536026 +0000 UTC m=+8552.481694972" observedRunningTime="2026-01-09 01:38:02.958752954 +0000 UTC m=+8553.269911910" watchObservedRunningTime="2026-01-09 01:38:02.962484446 +0000 UTC m=+8553.273643402"
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:38:30 crc kubenswrapper[4945]: I0109 01:38:30.016049 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" Jan 09 01:38:30 crc kubenswrapper[4945]: E0109 01:38:30.017174 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:38:45 crc kubenswrapper[4945]: I0109 01:38:45.000951 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" Jan 09 01:38:45 crc kubenswrapper[4945]: E0109 01:38:45.001678 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:38:59 crc kubenswrapper[4945]: I0109 01:38:59.001619 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" Jan 09 01:38:59 crc kubenswrapper[4945]: E0109 01:38:59.002578 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:39:12 crc kubenswrapper[4945]: I0109 01:39:12.010744 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" Jan 09 01:39:12 crc kubenswrapper[4945]: E0109 01:39:12.011596 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:39:26 crc kubenswrapper[4945]: I0109 01:39:26.002168 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" Jan 09 01:39:26 crc kubenswrapper[4945]: E0109 01:39:26.003470 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:39:39 crc kubenswrapper[4945]: I0109 01:39:39.000061 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" Jan 09 01:39:39 crc kubenswrapper[4945]: E0109 01:39:39.000943 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:39:55 crc kubenswrapper[4945]: I0109 01:39:55.000508 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" Jan 09 01:39:55 crc kubenswrapper[4945]: E0109 01:39:55.001272 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:40:09 crc kubenswrapper[4945]: I0109 01:40:09.001296 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" Jan 09 01:40:09 crc kubenswrapper[4945]: E0109 01:40:09.002301 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:40:20 crc kubenswrapper[4945]: I0109 01:40:20.018196 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" Jan 09 01:40:20 crc kubenswrapper[4945]: E0109 01:40:20.019424 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:40:32 crc kubenswrapper[4945]: I0109 01:40:32.001171 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" Jan 09 01:40:32 crc kubenswrapper[4945]: E0109 01:40:32.003025 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" 
podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:40:45 crc kubenswrapper[4945]: I0109 01:40:45.000509 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" Jan 09 01:40:45 crc kubenswrapper[4945]: E0109 01:40:45.002226 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:40:59 crc kubenswrapper[4945]: I0109 01:40:59.000472 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" Jan 09 01:40:59 crc kubenswrapper[4945]: E0109 01:40:59.001336 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:41:12 crc kubenswrapper[4945]: I0109 01:41:12.001039 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" Jan 09 01:41:12 crc kubenswrapper[4945]: E0109 01:41:12.003293 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:41:25 crc kubenswrapper[4945]: I0109 01:41:25.000643 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" Jan 09 01:41:25 crc kubenswrapper[4945]: E0109 01:41:25.001540 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:41:37 crc kubenswrapper[4945]: I0109 01:41:37.000345 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" Jan 09 01:41:37 crc kubenswrapper[4945]: E0109 01:41:37.001530 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:41:48 crc kubenswrapper[4945]: I0109 01:41:48.366604 4945 generic.go:334] "Generic (PLEG): container finished" 
podID="cf0982d8-b6fd-4bfc-b8d5-258b61aceee2" containerID="4514ac0abf61cfb3eb5efce16e3cdd8a8801e0a8ed509131e4a6a4a3803342b1" exitCode=0 Jan 09 01:41:48 crc kubenswrapper[4945]: I0109 01:41:48.366745 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-6jhb4" event={"ID":"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2","Type":"ContainerDied","Data":"4514ac0abf61cfb3eb5efce16e3cdd8a8801e0a8ed509131e4a6a4a3803342b1"} Jan 09 01:41:49 crc kubenswrapper[4945]: I0109 01:41:49.892577 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-6jhb4" Jan 09 01:41:49 crc kubenswrapper[4945]: I0109 01:41:49.983447 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-2\") pod \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " Jan 09 01:41:49 crc kubenswrapper[4945]: I0109 01:41:49.983578 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-inventory\") pod \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " Jan 09 01:41:49 crc kubenswrapper[4945]: I0109 01:41:49.983613 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-telemetry-combined-ca-bundle\") pod \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " Jan 09 01:41:49 crc kubenswrapper[4945]: I0109 01:41:49.983645 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ssh-key-openstack-cell1\") pod \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " Jan 09 01:41:49 crc kubenswrapper[4945]: I0109 01:41:49.983683 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-1\") pod \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " Jan 09 01:41:49 crc kubenswrapper[4945]: I0109 01:41:49.983715 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceph\") pod \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " Jan 09 01:41:49 crc kubenswrapper[4945]: I0109 01:41:49.983809 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvqdk\" (UniqueName: \"kubernetes.io/projected/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-kube-api-access-dvqdk\") pod \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " Jan 09 01:41:49 crc kubenswrapper[4945]: I0109 01:41:49.983911 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-0\") pod 
\"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\" (UID: \"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2\") " Jan 09 01:41:49 crc kubenswrapper[4945]: I0109 01:41:49.989895 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-kube-api-access-dvqdk" (OuterVolumeSpecName: "kube-api-access-dvqdk") pod "cf0982d8-b6fd-4bfc-b8d5-258b61aceee2" (UID: "cf0982d8-b6fd-4bfc-b8d5-258b61aceee2"). InnerVolumeSpecName "kube-api-access-dvqdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:41:49 crc kubenswrapper[4945]: I0109 01:41:49.990729 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceph" (OuterVolumeSpecName: "ceph") pod "cf0982d8-b6fd-4bfc-b8d5-258b61aceee2" (UID: "cf0982d8-b6fd-4bfc-b8d5-258b61aceee2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.004145 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "cf0982d8-b6fd-4bfc-b8d5-258b61aceee2" (UID: "cf0982d8-b6fd-4bfc-b8d5-258b61aceee2"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.013964 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "cf0982d8-b6fd-4bfc-b8d5-258b61aceee2" (UID: "cf0982d8-b6fd-4bfc-b8d5-258b61aceee2"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.014509 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "cf0982d8-b6fd-4bfc-b8d5-258b61aceee2" (UID: "cf0982d8-b6fd-4bfc-b8d5-258b61aceee2"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.029169 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "cf0982d8-b6fd-4bfc-b8d5-258b61aceee2" (UID: "cf0982d8-b6fd-4bfc-b8d5-258b61aceee2"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.033556 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-inventory" (OuterVolumeSpecName: "inventory") pod "cf0982d8-b6fd-4bfc-b8d5-258b61aceee2" (UID: "cf0982d8-b6fd-4bfc-b8d5-258b61aceee2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.041986 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "cf0982d8-b6fd-4bfc-b8d5-258b61aceee2" (UID: "cf0982d8-b6fd-4bfc-b8d5-258b61aceee2"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.086126 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvqdk\" (UniqueName: \"kubernetes.io/projected/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-kube-api-access-dvqdk\") on node \"crc\" DevicePath \"\"" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.086161 4945 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.086172 4945 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.086182 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.086193 4945 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.086202 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.086211 4945 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.086222 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf0982d8-b6fd-4bfc-b8d5-258b61aceee2-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.389859 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-6jhb4" event={"ID":"cf0982d8-b6fd-4bfc-b8d5-258b61aceee2","Type":"ContainerDied","Data":"e4d3dc80eb481c524ec53a2759ff1c545502c76a4aa2a24405b8a6e534be534b"} Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.389907 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4d3dc80eb481c524ec53a2759ff1c545502c76a4aa2a24405b8a6e534be534b" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.389923 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-6jhb4" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.525411 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-qwz6z"] Jan 09 01:41:50 crc kubenswrapper[4945]: E0109 01:41:50.526009 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0982d8-b6fd-4bfc-b8d5-258b61aceee2" containerName="telemetry-openstack-openstack-cell1" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.526028 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0982d8-b6fd-4bfc-b8d5-258b61aceee2" containerName="telemetry-openstack-openstack-cell1" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.526357 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0982d8-b6fd-4bfc-b8d5-258b61aceee2" containerName="telemetry-openstack-openstack-cell1" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.527484 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.529912 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.530665 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.530703 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.530778 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.531242 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.537927 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-qwz6z"] Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.600465 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.600538 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r82jd\" (UniqueName: \"kubernetes.io/projected/90195899-5a3d-432e-822f-74f2ca65f0b3-kube-api-access-r82jd\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.600730 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc 
kubenswrapper[4945]: I0109 01:41:50.600803 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.600829 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.600873 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.703184 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.703292 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.703319 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.703365 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.703414 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc 
kubenswrapper[4945]: I0109 01:41:50.703465 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r82jd\" (UniqueName: \"kubernetes.io/projected/90195899-5a3d-432e-822f-74f2ca65f0b3-kube-api-access-r82jd\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.708534 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.708658 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.709457 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.709656 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.710521 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-ssh-key-openstack-cell1\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.727179 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r82jd\" (UniqueName: \"kubernetes.io/projected/90195899-5a3d-432e-822f-74f2ca65f0b3-kube-api-access-r82jd\") pod \"neutron-sriov-openstack-openstack-cell1-qwz6z\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" Jan 09 01:41:50 crc kubenswrapper[4945]: I0109 01:41:50.887453 4945 util.go:30] "No sandbox for pod can be found. 
Jan 09 01:41:51 crc kubenswrapper[4945]: I0109 01:41:51.001961 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494"
Jan 09 01:41:51 crc kubenswrapper[4945]: E0109 01:41:51.004398 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:41:51 crc kubenswrapper[4945]: I0109 01:41:51.441864 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-qwz6z"]
Jan 09 01:41:51 crc kubenswrapper[4945]: I0109 01:41:51.462811 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 09 01:41:52 crc kubenswrapper[4945]: I0109 01:41:52.411114 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" event={"ID":"90195899-5a3d-432e-822f-74f2ca65f0b3","Type":"ContainerStarted","Data":"a27b72bb1ed71edc3ad7d02e805c599b216532f520c1e4e2d87dd097c1843065"}
Jan 09 01:41:52 crc kubenswrapper[4945]: I0109 01:41:52.411484 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" event={"ID":"90195899-5a3d-432e-822f-74f2ca65f0b3","Type":"ContainerStarted","Data":"baa63b975f3d13a8413d591742e5ec9cc401bda039e64762f8dfe5aa8b3db451"}
Jan 09 01:41:52 crc kubenswrapper[4945]: I0109 01:41:52.427052 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" podStartSLOduration=2.2146550879999998 podStartE2EDuration="2.427022608s" podCreationTimestamp="2026-01-09 01:41:50 +0000 UTC" firstStartedPulling="2026-01-09 01:41:51.462508042 +0000 UTC m=+8781.773666998" lastFinishedPulling="2026-01-09 01:41:51.674875562 +0000 UTC m=+8781.986034518" observedRunningTime="2026-01-09 01:41:52.425275355 +0000 UTC m=+8782.736434301" watchObservedRunningTime="2026-01-09 01:41:52.427022608 +0000 UTC m=+8782.738181554"
Jan 09 01:42:05 crc kubenswrapper[4945]: I0109 01:42:05.000409 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494"
Jan 09 01:42:05 crc kubenswrapper[4945]: E0109 01:42:05.001798 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:42:18 crc kubenswrapper[4945]: I0109 01:42:18.003375 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494"
Jan 09 01:42:18 crc kubenswrapper[4945]: I0109 01:42:18.717360 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"0493cddfa1d6ef8361a1f817832fc2129eadccd0c7d1c2e91fc162ad1d98b090"}
Jan 09 01:42:47 crc kubenswrapper[4945]: I0109 01:42:47.035897 4945 generic.go:334] "Generic (PLEG): container finished" podID="90195899-5a3d-432e-822f-74f2ca65f0b3" containerID="a27b72bb1ed71edc3ad7d02e805c599b216532f520c1e4e2d87dd097c1843065" exitCode=0
Jan 09 01:42:47 crc kubenswrapper[4945]: I0109 01:42:47.036632 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" event={"ID":"90195899-5a3d-432e-822f-74f2ca65f0b3","Type":"ContainerDied","Data":"a27b72bb1ed71edc3ad7d02e805c599b216532f520c1e4e2d87dd097c1843065"}
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.572292 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z"
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.729236 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r82jd\" (UniqueName: \"kubernetes.io/projected/90195899-5a3d-432e-822f-74f2ca65f0b3-kube-api-access-r82jd\") pod \"90195899-5a3d-432e-822f-74f2ca65f0b3\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") "
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.729372 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-neutron-sriov-combined-ca-bundle\") pod \"90195899-5a3d-432e-822f-74f2ca65f0b3\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") "
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.729410 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-ssh-key-openstack-cell1\") pod \"90195899-5a3d-432e-822f-74f2ca65f0b3\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") "
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.729473 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-inventory\") pod \"90195899-5a3d-432e-822f-74f2ca65f0b3\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") "
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.729568 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-ceph\") pod \"90195899-5a3d-432e-822f-74f2ca65f0b3\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") "
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.729633 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-neutron-sriov-agent-neutron-config-0\") pod \"90195899-5a3d-432e-822f-74f2ca65f0b3\" (UID: \"90195899-5a3d-432e-822f-74f2ca65f0b3\") "
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.734665 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90195899-5a3d-432e-822f-74f2ca65f0b3-kube-api-access-r82jd" (OuterVolumeSpecName: "kube-api-access-r82jd") pod "90195899-5a3d-432e-822f-74f2ca65f0b3" (UID: "90195899-5a3d-432e-822f-74f2ca65f0b3"). InnerVolumeSpecName "kube-api-access-r82jd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.735089 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-ceph" (OuterVolumeSpecName: "ceph") pod "90195899-5a3d-432e-822f-74f2ca65f0b3" (UID: "90195899-5a3d-432e-822f-74f2ca65f0b3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.737277 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "90195899-5a3d-432e-822f-74f2ca65f0b3" (UID: "90195899-5a3d-432e-822f-74f2ca65f0b3"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.761911 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "90195899-5a3d-432e-822f-74f2ca65f0b3" (UID: "90195899-5a3d-432e-822f-74f2ca65f0b3"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.763061 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "90195899-5a3d-432e-822f-74f2ca65f0b3" (UID: "90195899-5a3d-432e-822f-74f2ca65f0b3"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.770877 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-inventory" (OuterVolumeSpecName: "inventory") pod "90195899-5a3d-432e-822f-74f2ca65f0b3" (UID: "90195899-5a3d-432e-822f-74f2ca65f0b3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.831842 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-ceph\") on node \"crc\" DevicePath \"\""
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.831896 4945 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.831910 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r82jd\" (UniqueName: \"kubernetes.io/projected/90195899-5a3d-432e-822f-74f2ca65f0b3-kube-api-access-r82jd\") on node \"crc\" DevicePath \"\""
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.831922 4945 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.831930 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 09 01:42:48 crc kubenswrapper[4945]: I0109 01:42:48.831940 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90195899-5a3d-432e-822f-74f2ca65f0b3-inventory\") on node \"crc\" DevicePath \"\""
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.055622 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z" event={"ID":"90195899-5a3d-432e-822f-74f2ca65f0b3","Type":"ContainerDied","Data":"baa63b975f3d13a8413d591742e5ec9cc401bda039e64762f8dfe5aa8b3db451"}
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.056097 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baa63b975f3d13a8413d591742e5ec9cc401bda039e64762f8dfe5aa8b3db451"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.055682 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-qwz6z"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.156034 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"]
Jan 09 01:42:49 crc kubenswrapper[4945]: E0109 01:42:49.156480 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90195899-5a3d-432e-822f-74f2ca65f0b3" containerName="neutron-sriov-openstack-openstack-cell1"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.156495 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="90195899-5a3d-432e-822f-74f2ca65f0b3" containerName="neutron-sriov-openstack-openstack-cell1"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.156698 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="90195899-5a3d-432e-822f-74f2ca65f0b3" containerName="neutron-sriov-openstack-openstack-cell1"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.157500 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.160657 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.160721 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.160751 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.160752 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.161688 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.171597 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"]
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.239886 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.239972 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh7hm\" (UniqueName: \"kubernetes.io/projected/8c991408-c46e-4614-a376-2d459d7bb888-kube-api-access-sh7hm\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.240056 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.240138 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.240194 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.241614 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.343612 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.343678 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh7hm\" (UniqueName: \"kubernetes.io/projected/8c991408-c46e-4614-a376-2d459d7bb888-kube-api-access-sh7hm\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.343722 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.343754 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.343784 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.343836 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.349123 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.349173 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-ssh-key-openstack-cell1\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.354738 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.354919 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.355163 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.364629 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh7hm\" (UniqueName: \"kubernetes.io/projected/8c991408-c46e-4614-a376-2d459d7bb888-kube-api-access-sh7hm\") pod \"neutron-dhcp-openstack-openstack-cell1-xh8nb\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:49 crc kubenswrapper[4945]: I0109 01:42:49.492561 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:42:50 crc kubenswrapper[4945]: I0109 01:42:50.046506 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"]
Jan 09 01:42:50 crc kubenswrapper[4945]: I0109 01:42:50.067918 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb" event={"ID":"8c991408-c46e-4614-a376-2d459d7bb888","Type":"ContainerStarted","Data":"2e6a807d802a86c175bdd486f4a9963523c5d2a18cc35c2a7c493b0fdffa8e0d"}
Jan 09 01:42:51 crc kubenswrapper[4945]: I0109 01:42:51.080781 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb" event={"ID":"8c991408-c46e-4614-a376-2d459d7bb888","Type":"ContainerStarted","Data":"c60ced8c962b1ffcad297b103ab0907c997d50ffbb7a350024d84051c1d53f0f"}
Jan 09 01:42:51 crc kubenswrapper[4945]: I0109 01:42:51.115567 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb" podStartSLOduration=1.9350427730000002 podStartE2EDuration="2.115544977s" podCreationTimestamp="2026-01-09 01:42:49 +0000 UTC" firstStartedPulling="2026-01-09 01:42:50.048532691 +0000 UTC m=+8840.359691637" lastFinishedPulling="2026-01-09 01:42:50.229034895 +0000 UTC m=+8840.540193841" observedRunningTime="2026-01-09 01:42:51.106481744 +0000 UTC m=+8841.417640730" watchObservedRunningTime="2026-01-09 01:42:51.115544977 +0000 UTC m=+8841.426703923"
Jan 09 01:44:09 crc kubenswrapper[4945]: I0109 01:44:09.885452 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mr97f"]
Jan 09 01:44:09 crc kubenswrapper[4945]: I0109 01:44:09.890183 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mr97f"
Jan 09 01:44:09 crc kubenswrapper[4945]: I0109 01:44:09.930505 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mr97f"]
Jan 09 01:44:09 crc kubenswrapper[4945]: I0109 01:44:09.935074 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5cjj\" (UniqueName: \"kubernetes.io/projected/08cc7755-d846-45a4-910b-69edba95356d-kube-api-access-q5cjj\") pod \"redhat-operators-mr97f\" (UID: \"08cc7755-d846-45a4-910b-69edba95356d\") " pod="openshift-marketplace/redhat-operators-mr97f"
Jan 09 01:44:09 crc kubenswrapper[4945]: I0109 01:44:09.935171 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cc7755-d846-45a4-910b-69edba95356d-catalog-content\") pod \"redhat-operators-mr97f\" (UID: \"08cc7755-d846-45a4-910b-69edba95356d\") " pod="openshift-marketplace/redhat-operators-mr97f"
Jan 09 01:44:09 crc kubenswrapper[4945]: I0109 01:44:09.936390 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cc7755-d846-45a4-910b-69edba95356d-utilities\") pod \"redhat-operators-mr97f\" (UID: \"08cc7755-d846-45a4-910b-69edba95356d\") " pod="openshift-marketplace/redhat-operators-mr97f"
Jan 09 01:44:10 crc kubenswrapper[4945]: I0109 01:44:10.038855 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5cjj\" (UniqueName: \"kubernetes.io/projected/08cc7755-d846-45a4-910b-69edba95356d-kube-api-access-q5cjj\") pod \"redhat-operators-mr97f\" (UID: \"08cc7755-d846-45a4-910b-69edba95356d\") " pod="openshift-marketplace/redhat-operators-mr97f"
Jan 09 01:44:10 crc kubenswrapper[4945]: I0109 01:44:10.038930 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cc7755-d846-45a4-910b-69edba95356d-catalog-content\") pod \"redhat-operators-mr97f\" (UID: \"08cc7755-d846-45a4-910b-69edba95356d\") " pod="openshift-marketplace/redhat-operators-mr97f"
Jan 09 01:44:10 crc kubenswrapper[4945]: I0109 01:44:10.039630 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cc7755-d846-45a4-910b-69edba95356d-utilities\") pod \"redhat-operators-mr97f\" (UID: \"08cc7755-d846-45a4-910b-69edba95356d\") " pod="openshift-marketplace/redhat-operators-mr97f"
Jan 09 01:44:10 crc kubenswrapper[4945]: I0109 01:44:10.039786 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cc7755-d846-45a4-910b-69edba95356d-catalog-content\") pod \"redhat-operators-mr97f\" (UID: \"08cc7755-d846-45a4-910b-69edba95356d\") " pod="openshift-marketplace/redhat-operators-mr97f"
Jan 09 01:44:10 crc kubenswrapper[4945]: I0109 01:44:10.040382 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cc7755-d846-45a4-910b-69edba95356d-utilities\") pod \"redhat-operators-mr97f\" (UID: \"08cc7755-d846-45a4-910b-69edba95356d\") " pod="openshift-marketplace/redhat-operators-mr97f"
Jan 09 01:44:10 crc kubenswrapper[4945]: I0109 01:44:10.073469 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5cjj\" (UniqueName: \"kubernetes.io/projected/08cc7755-d846-45a4-910b-69edba95356d-kube-api-access-q5cjj\") pod \"redhat-operators-mr97f\" (UID: \"08cc7755-d846-45a4-910b-69edba95356d\") " pod="openshift-marketplace/redhat-operators-mr97f"
Jan 09 01:44:10 crc kubenswrapper[4945]: I0109 01:44:10.222583 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mr97f"
Jan 09 01:44:10 crc kubenswrapper[4945]: I0109 01:44:10.700124 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mr97f"]
Jan 09 01:44:10 crc kubenswrapper[4945]: I0109 01:44:10.962090 4945 generic.go:334] "Generic (PLEG): container finished" podID="08cc7755-d846-45a4-910b-69edba95356d" containerID="2bb22e5dae21b3f953f5187e4e702f8934a7326031dbbcb18888e77a2b78e3f0" exitCode=0
Jan 09 01:44:10 crc kubenswrapper[4945]: I0109 01:44:10.962168 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mr97f" event={"ID":"08cc7755-d846-45a4-910b-69edba95356d","Type":"ContainerDied","Data":"2bb22e5dae21b3f953f5187e4e702f8934a7326031dbbcb18888e77a2b78e3f0"}
Jan 09 01:44:10 crc kubenswrapper[4945]: I0109 01:44:10.964679 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mr97f" event={"ID":"08cc7755-d846-45a4-910b-69edba95356d","Type":"ContainerStarted","Data":"f62344a02a719b94df0a654300e4cf789bd800c478b133f617836944880feafb"}
Jan 09 01:44:12 crc kubenswrapper[4945]: I0109 01:44:12.998389 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mr97f" event={"ID":"08cc7755-d846-45a4-910b-69edba95356d","Type":"ContainerStarted","Data":"b71d3e28dce9032653931bc54244aff192d7251e269ceb08cd323f8b26a2ac70"}
Jan 09 01:44:15 crc kubenswrapper[4945]: I0109 01:44:15.026211 4945 generic.go:334] "Generic (PLEG): container finished" podID="8c991408-c46e-4614-a376-2d459d7bb888" containerID="c60ced8c962b1ffcad297b103ab0907c997d50ffbb7a350024d84051c1d53f0f" exitCode=0
Jan 09 01:44:15 crc kubenswrapper[4945]: I0109 01:44:15.026298 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb" event={"ID":"8c991408-c46e-4614-a376-2d459d7bb888","Type":"ContainerDied","Data":"c60ced8c962b1ffcad297b103ab0907c997d50ffbb7a350024d84051c1d53f0f"}
Jan 09 01:44:15 crc kubenswrapper[4945]: I0109 01:44:15.029027 4945 generic.go:334] "Generic (PLEG): container finished" podID="08cc7755-d846-45a4-910b-69edba95356d" containerID="b71d3e28dce9032653931bc54244aff192d7251e269ceb08cd323f8b26a2ac70" exitCode=0
Jan 09 01:44:15 crc kubenswrapper[4945]: I0109 01:44:15.029059 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mr97f" event={"ID":"08cc7755-d846-45a4-910b-69edba95356d","Type":"ContainerDied","Data":"b71d3e28dce9032653931bc54244aff192d7251e269ceb08cd323f8b26a2ac70"}
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.039520 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mr97f" event={"ID":"08cc7755-d846-45a4-910b-69edba95356d","Type":"ContainerStarted","Data":"a2234ead815a3bb0ba9942e2f71fef574ec0b875c9d3149785cdbc4590dd539a"}
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.082270 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mr97f" podStartSLOduration=2.484567564 podStartE2EDuration="7.082242591s" podCreationTimestamp="2026-01-09 01:44:09 +0000 UTC" firstStartedPulling="2026-01-09 01:44:10.964121603 +0000 UTC m=+8921.275280539" lastFinishedPulling="2026-01-09 01:44:15.56179662 +0000 UTC m=+8925.872955566" observedRunningTime="2026-01-09 01:44:16.056191738 +0000 UTC m=+8926.367350714" watchObservedRunningTime="2026-01-09 01:44:16.082242591 +0000 UTC m=+8926.393401527"
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.458845 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.495651 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-inventory\") pod \"8c991408-c46e-4614-a376-2d459d7bb888\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") "
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.495720 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-ssh-key-openstack-cell1\") pod \"8c991408-c46e-4614-a376-2d459d7bb888\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") "
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.495898 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-neutron-dhcp-agent-neutron-config-0\") pod \"8c991408-c46e-4614-a376-2d459d7bb888\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") "
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.496010 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-neutron-dhcp-combined-ca-bundle\") pod \"8c991408-c46e-4614-a376-2d459d7bb888\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") "
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.496059 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh7hm\" (UniqueName: \"kubernetes.io/projected/8c991408-c46e-4614-a376-2d459d7bb888-kube-api-access-sh7hm\") pod \"8c991408-c46e-4614-a376-2d459d7bb888\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") "
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.496140 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-ceph\") pod \"8c991408-c46e-4614-a376-2d459d7bb888\" (UID: \"8c991408-c46e-4614-a376-2d459d7bb888\") "
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.505207 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-ceph" (OuterVolumeSpecName: "ceph") pod "8c991408-c46e-4614-a376-2d459d7bb888" (UID: "8c991408-c46e-4614-a376-2d459d7bb888"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.510865 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c991408-c46e-4614-a376-2d459d7bb888-kube-api-access-sh7hm" (OuterVolumeSpecName: "kube-api-access-sh7hm") pod "8c991408-c46e-4614-a376-2d459d7bb888" (UID: "8c991408-c46e-4614-a376-2d459d7bb888"). InnerVolumeSpecName "kube-api-access-sh7hm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.515163 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "8c991408-c46e-4614-a376-2d459d7bb888" (UID: "8c991408-c46e-4614-a376-2d459d7bb888"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.543530 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "8c991408-c46e-4614-a376-2d459d7bb888" (UID: "8c991408-c46e-4614-a376-2d459d7bb888"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.544523 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "8c991408-c46e-4614-a376-2d459d7bb888" (UID: "8c991408-c46e-4614-a376-2d459d7bb888"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.550455 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-inventory" (OuterVolumeSpecName: "inventory") pod "8c991408-c46e-4614-a376-2d459d7bb888" (UID: "8c991408-c46e-4614-a376-2d459d7bb888"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.599236 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.599676 4945 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.599745 4945 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.599808 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh7hm\" (UniqueName: \"kubernetes.io/projected/8c991408-c46e-4614-a376-2d459d7bb888-kube-api-access-sh7hm\") on node \"crc\" DevicePath \"\""
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.599865 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-ceph\") on node \"crc\" DevicePath \"\""
Jan 09 01:44:16 crc kubenswrapper[4945]: I0109 01:44:16.599933 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c991408-c46e-4614-a376-2d459d7bb888-inventory\") on node \"crc\" DevicePath \"\""
Jan 09 01:44:17 crc kubenswrapper[4945]: I0109 01:44:17.050709 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb" event={"ID":"8c991408-c46e-4614-a376-2d459d7bb888","Type":"ContainerDied","Data":"2e6a807d802a86c175bdd486f4a9963523c5d2a18cc35c2a7c493b0fdffa8e0d"}
Jan 09 01:44:17 crc kubenswrapper[4945]: I0109 01:44:17.051008 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e6a807d802a86c175bdd486f4a9963523c5d2a18cc35c2a7c493b0fdffa8e0d"
Jan 09 01:44:17 crc kubenswrapper[4945]: I0109 01:44:17.050748 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-xh8nb"
Jan 09 01:44:20 crc kubenswrapper[4945]: I0109 01:44:20.223210 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mr97f"
Jan 09 01:44:20 crc kubenswrapper[4945]: I0109 01:44:20.223758 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mr97f"
Jan 09 01:44:21 crc kubenswrapper[4945]: I0109 01:44:21.286741 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mr97f" podUID="08cc7755-d846-45a4-910b-69edba95356d" containerName="registry-server" probeResult="failure" output=<
Jan 09 01:44:21 crc kubenswrapper[4945]: timeout: failed to connect service ":50051" within 1s
Jan 09 01:44:21 crc kubenswrapper[4945]: >
Jan 09 01:44:30 crc kubenswrapper[4945]: I0109 01:44:30.286495 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mr97f"
Jan 09 01:44:30 crc kubenswrapper[4945]: I0109 01:44:30.360606 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mr97f"
Jan 09 01:44:30 crc kubenswrapper[4945]: I0109 01:44:30.522171 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mr97f"]
Jan 09 01:44:32 crc kubenswrapper[4945]: I0109 01:44:32.215850 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mr97f" podUID="08cc7755-d846-45a4-910b-69edba95356d" containerName="registry-server" containerID="cri-o://a2234ead815a3bb0ba9942e2f71fef574ec0b875c9d3149785cdbc4590dd539a" gracePeriod=2
Jan 09 01:44:32 crc kubenswrapper[4945]: I0109 01:44:32.710296 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mr97f"
Jan 09 01:44:32 crc kubenswrapper[4945]: I0109 01:44:32.882977 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cc7755-d846-45a4-910b-69edba95356d-utilities\") pod \"08cc7755-d846-45a4-910b-69edba95356d\" (UID: \"08cc7755-d846-45a4-910b-69edba95356d\") "
Jan 09 01:44:32 crc kubenswrapper[4945]: I0109 01:44:32.883064 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cc7755-d846-45a4-910b-69edba95356d-catalog-content\") pod \"08cc7755-d846-45a4-910b-69edba95356d\" (UID: \"08cc7755-d846-45a4-910b-69edba95356d\") "
Jan 09 01:44:32 crc kubenswrapper[4945]: I0109 01:44:32.883196 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5cjj\" (UniqueName: \"kubernetes.io/projected/08cc7755-d846-45a4-910b-69edba95356d-kube-api-access-q5cjj\") pod \"08cc7755-d846-45a4-910b-69edba95356d\" (UID: \"08cc7755-d846-45a4-910b-69edba95356d\") "
Jan 09 01:44:32 crc kubenswrapper[4945]: I0109 01:44:32.884030 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08cc7755-d846-45a4-910b-69edba95356d-utilities" (OuterVolumeSpecName: "utilities") pod "08cc7755-d846-45a4-910b-69edba95356d" (UID: "08cc7755-d846-45a4-910b-69edba95356d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:44:32 crc kubenswrapper[4945]: I0109 01:44:32.890095 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08cc7755-d846-45a4-910b-69edba95356d-kube-api-access-q5cjj" (OuterVolumeSpecName: "kube-api-access-q5cjj") pod "08cc7755-d846-45a4-910b-69edba95356d" (UID: "08cc7755-d846-45a4-910b-69edba95356d"). InnerVolumeSpecName "kube-api-access-q5cjj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:44:32 crc kubenswrapper[4945]: I0109 01:44:32.986533 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5cjj\" (UniqueName: \"kubernetes.io/projected/08cc7755-d846-45a4-910b-69edba95356d-kube-api-access-q5cjj\") on node \"crc\" DevicePath \"\""
Jan 09 01:44:32 crc kubenswrapper[4945]: I0109 01:44:32.986582 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cc7755-d846-45a4-910b-69edba95356d-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 01:44:33 crc kubenswrapper[4945]: I0109 01:44:33.001394 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08cc7755-d846-45a4-910b-69edba95356d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08cc7755-d846-45a4-910b-69edba95356d" (UID: "08cc7755-d846-45a4-910b-69edba95356d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:44:33 crc kubenswrapper[4945]: I0109 01:44:33.088749 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cc7755-d846-45a4-910b-69edba95356d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 01:44:33 crc kubenswrapper[4945]: I0109 01:44:33.226981 4945 generic.go:334] "Generic (PLEG): container finished" podID="08cc7755-d846-45a4-910b-69edba95356d" containerID="a2234ead815a3bb0ba9942e2f71fef574ec0b875c9d3149785cdbc4590dd539a" exitCode=0
Jan 09 01:44:33 crc kubenswrapper[4945]: I0109 01:44:33.227050 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mr97f" event={"ID":"08cc7755-d846-45a4-910b-69edba95356d","Type":"ContainerDied","Data":"a2234ead815a3bb0ba9942e2f71fef574ec0b875c9d3149785cdbc4590dd539a"}
Jan 09 01:44:33 crc kubenswrapper[4945]: I0109 01:44:33.227128 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mr97f" event={"ID":"08cc7755-d846-45a4-910b-69edba95356d","Type":"ContainerDied","Data":"f62344a02a719b94df0a654300e4cf789bd800c478b133f617836944880feafb"}
Jan 09 01:44:33 crc kubenswrapper[4945]: I0109 01:44:33.227139 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mr97f"
Jan 09 01:44:33 crc kubenswrapper[4945]: I0109 01:44:33.227159 4945 scope.go:117] "RemoveContainer" containerID="a2234ead815a3bb0ba9942e2f71fef574ec0b875c9d3149785cdbc4590dd539a"
Jan 09 01:44:33 crc kubenswrapper[4945]: I0109 01:44:33.246631 4945 scope.go:117] "RemoveContainer" containerID="b71d3e28dce9032653931bc54244aff192d7251e269ceb08cd323f8b26a2ac70"
Jan 09 01:44:33 crc kubenswrapper[4945]: I0109 01:44:33.275524 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mr97f"]
Jan 09 01:44:33 crc kubenswrapper[4945]: I0109 01:44:33.284118 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mr97f"]
Jan 09 01:44:33 crc kubenswrapper[4945]: I0109 01:44:33.291836 4945 scope.go:117] "RemoveContainer" containerID="2bb22e5dae21b3f953f5187e4e702f8934a7326031dbbcb18888e77a2b78e3f0"
Jan 09 01:44:34 crc kubenswrapper[4945]: I0109 01:44:34.013399 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08cc7755-d846-45a4-910b-69edba95356d" path="/var/lib/kubelet/pods/08cc7755-d846-45a4-910b-69edba95356d/volumes"
Jan 09 01:44:34 crc kubenswrapper[4945]: I0109 01:44:34.119129 4945 scope.go:117] "RemoveContainer" containerID="a2234ead815a3bb0ba9942e2f71fef574ec0b875c9d3149785cdbc4590dd539a"
Jan 09 01:44:34 crc kubenswrapper[4945]: E0109 01:44:34.121452 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2234ead815a3bb0ba9942e2f71fef574ec0b875c9d3149785cdbc4590dd539a\": container with ID starting with a2234ead815a3bb0ba9942e2f71fef574ec0b875c9d3149785cdbc4590dd539a not found: ID does not exist" containerID="a2234ead815a3bb0ba9942e2f71fef574ec0b875c9d3149785cdbc4590dd539a"
Jan 09 01:44:34 crc kubenswrapper[4945]: I0109 01:44:34.121491 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2234ead815a3bb0ba9942e2f71fef574ec0b875c9d3149785cdbc4590dd539a"} err="failed to get container status \"a2234ead815a3bb0ba9942e2f71fef574ec0b875c9d3149785cdbc4590dd539a\": rpc error: code = NotFound desc = could not find container \"a2234ead815a3bb0ba9942e2f71fef574ec0b875c9d3149785cdbc4590dd539a\": container with ID starting with a2234ead815a3bb0ba9942e2f71fef574ec0b875c9d3149785cdbc4590dd539a not found: ID does not exist"
Jan 09 01:44:34 crc kubenswrapper[4945]: I0109 01:44:34.121522 4945 scope.go:117] "RemoveContainer" containerID="b71d3e28dce9032653931bc54244aff192d7251e269ceb08cd323f8b26a2ac70"
Jan 09 01:44:34 crc kubenswrapper[4945]: E0109 01:44:34.121897 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b71d3e28dce9032653931bc54244aff192d7251e269ceb08cd323f8b26a2ac70\": container with ID starting with b71d3e28dce9032653931bc54244aff192d7251e269ceb08cd323f8b26a2ac70 not found: ID does not exist" containerID="b71d3e28dce9032653931bc54244aff192d7251e269ceb08cd323f8b26a2ac70"
Jan 09 01:44:34 crc kubenswrapper[4945]: I0109 01:44:34.121924 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b71d3e28dce9032653931bc54244aff192d7251e269ceb08cd323f8b26a2ac70"} err="failed to get container status \"b71d3e28dce9032653931bc54244aff192d7251e269ceb08cd323f8b26a2ac70\": rpc error: code = NotFound desc = could not find container \"b71d3e28dce9032653931bc54244aff192d7251e269ceb08cd323f8b26a2ac70\": container with ID starting with b71d3e28dce9032653931bc54244aff192d7251e269ceb08cd323f8b26a2ac70 not found: ID does not exist"
Jan 09 01:44:34 crc kubenswrapper[4945]: I0109 01:44:34.121942 4945 scope.go:117] "RemoveContainer" containerID="2bb22e5dae21b3f953f5187e4e702f8934a7326031dbbcb18888e77a2b78e3f0"
Jan 09 01:44:34 crc kubenswrapper[4945]: E0109 01:44:34.122269 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bb22e5dae21b3f953f5187e4e702f8934a7326031dbbcb18888e77a2b78e3f0\": container with ID starting with 2bb22e5dae21b3f953f5187e4e702f8934a7326031dbbcb18888e77a2b78e3f0 not found: ID does not exist" containerID="2bb22e5dae21b3f953f5187e4e702f8934a7326031dbbcb18888e77a2b78e3f0"
Jan 09 01:44:34 crc kubenswrapper[4945]: I0109 01:44:34.122299 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bb22e5dae21b3f953f5187e4e702f8934a7326031dbbcb18888e77a2b78e3f0"} err="failed to get container status \"2bb22e5dae21b3f953f5187e4e702f8934a7326031dbbcb18888e77a2b78e3f0\": rpc error: code = NotFound desc = could not find container \"2bb22e5dae21b3f953f5187e4e702f8934a7326031dbbcb18888e77a2b78e3f0\": container with ID starting with 2bb22e5dae21b3f953f5187e4e702f8934a7326031dbbcb18888e77a2b78e3f0 not found: ID does not exist"
Jan 09 01:44:34 crc kubenswrapper[4945]: I0109 01:44:34.430294 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 09 01:44:34 crc kubenswrapper[4945]: I0109 01:44:34.430537 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="1523f74d-4bdd-4d29-b779-1ff30d782fed" containerName="nova-cell0-conductor-conductor" containerID="cri-o://b2c391687ba34c753d34322b097cbb7d928b3820599b33d5c27a50e567130203" gracePeriod=30
Jan 09 01:44:34 crc kubenswrapper[4945]: I0109 01:44:34.900906 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 09 01:44:34 crc kubenswrapper[4945]: I0109 01:44:34.901195 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="a35509ac-ad70-4941-9024-c4bbe22a7497" containerName="nova-cell1-conductor-conductor" containerID="cri-o://81008a15da69ec3abc212239894674df87309ee8e159beed7582615e3602b411" gracePeriod=30
Jan 09 01:44:35 crc kubenswrapper[4945]: I0109 01:44:35.066596 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 09 01:44:35 crc kubenswrapper[4945]: I0109 01:44:35.066891 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e8041767-cb7e-460e-b5c6-d5de80c5f244" containerName="nova-scheduler-scheduler" containerID="cri-o://b404a7c658e63aed5661cc730e833280aac4914ee8a78764d8031c4332132695" gracePeriod=30
Jan 09 01:44:35 crc kubenswrapper[4945]: I0109 01:44:35.085202 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 09 01:44:35 crc kubenswrapper[4945]: I0109 01:44:35.085497 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2594cf9d-f20a-4554-96c6-54fe285cc3a4" containerName="nova-api-log" containerID="cri-o://a98790eb60522635b3ca0872529a216ea87ce416eccb56d689ace88fcfeb2536" gracePeriod=30
Jan 09 01:44:35 crc kubenswrapper[4945]: I0109 01:44:35.085696 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2594cf9d-f20a-4554-96c6-54fe285cc3a4" containerName="nova-api-api" containerID="cri-o://092598a02dc4722cc8ed8429dfc50031f8ad429f4fc3e3be781a87ca319cdf8d" gracePeriod=30
period" pod="openstack/nova-api-0" podUID="2594cf9d-f20a-4554-96c6-54fe285cc3a4" containerName="nova-api-api" containerID="cri-o://092598a02dc4722cc8ed8429dfc50031f8ad429f4fc3e3be781a87ca319cdf8d" gracePeriod=30 Jan 09 01:44:35 crc kubenswrapper[4945]: I0109 01:44:35.103160 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 01:44:35 crc kubenswrapper[4945]: I0109 01:44:35.109255 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bf0638cb-7d95-4120-9e0f-f14212f84368" containerName="nova-metadata-log" containerID="cri-o://3e926528bfa7e816d0cc08a2d3446da63a6f54aa16beeadc8359e06c5725f502" gracePeriod=30 Jan 09 01:44:35 crc kubenswrapper[4945]: I0109 01:44:35.110125 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bf0638cb-7d95-4120-9e0f-f14212f84368" containerName="nova-metadata-metadata" containerID="cri-o://ca72050d06ae1ce62944579d59e1b921c32937f95c6feb5e13577e7821c0ecc4" gracePeriod=30 Jan 09 01:44:35 crc kubenswrapper[4945]: I0109 01:44:35.264403 4945 generic.go:334] "Generic (PLEG): container finished" podID="2594cf9d-f20a-4554-96c6-54fe285cc3a4" containerID="a98790eb60522635b3ca0872529a216ea87ce416eccb56d689ace88fcfeb2536" exitCode=143 Jan 09 01:44:35 crc kubenswrapper[4945]: I0109 01:44:35.264460 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2594cf9d-f20a-4554-96c6-54fe285cc3a4","Type":"ContainerDied","Data":"a98790eb60522635b3ca0872529a216ea87ce416eccb56d689ace88fcfeb2536"} Jan 09 01:44:35 crc kubenswrapper[4945]: E0109 01:44:35.692655 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2c391687ba34c753d34322b097cbb7d928b3820599b33d5c27a50e567130203" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 09 01:44:35 crc kubenswrapper[4945]: E0109 01:44:35.695371 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2c391687ba34c753d34322b097cbb7d928b3820599b33d5c27a50e567130203" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 09 01:44:35 crc kubenswrapper[4945]: E0109 01:44:35.696955 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2c391687ba34c753d34322b097cbb7d928b3820599b33d5c27a50e567130203" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 09 01:44:35 crc kubenswrapper[4945]: E0109 01:44:35.697016 4945 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="1523f74d-4bdd-4d29-b779-1ff30d782fed" containerName="nova-cell0-conductor-conductor" Jan 09 01:44:36 crc kubenswrapper[4945]: I0109 01:44:36.274584 4945 generic.go:334] "Generic (PLEG): container finished" podID="bf0638cb-7d95-4120-9e0f-f14212f84368" containerID="3e926528bfa7e816d0cc08a2d3446da63a6f54aa16beeadc8359e06c5725f502" exitCode=143 Jan 09 01:44:36 crc kubenswrapper[4945]: I0109 01:44:36.274665 4945 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf0638cb-7d95-4120-9e0f-f14212f84368","Type":"ContainerDied","Data":"3e926528bfa7e816d0cc08a2d3446da63a6f54aa16beeadc8359e06c5725f502"} Jan 09 01:44:36 crc kubenswrapper[4945]: I0109 01:44:36.276433 4945 generic.go:334] "Generic (PLEG): container finished" podID="a35509ac-ad70-4941-9024-c4bbe22a7497" containerID="81008a15da69ec3abc212239894674df87309ee8e159beed7582615e3602b411" exitCode=0 Jan 09 01:44:36 crc kubenswrapper[4945]: I0109 01:44:36.276524 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a35509ac-ad70-4941-9024-c4bbe22a7497","Type":"ContainerDied","Data":"81008a15da69ec3abc212239894674df87309ee8e159beed7582615e3602b411"} Jan 09 01:44:36 crc kubenswrapper[4945]: I0109 01:44:36.276718 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a35509ac-ad70-4941-9024-c4bbe22a7497","Type":"ContainerDied","Data":"b5a8555217da9d6885ea56ad982d193de57ae3037a296a7d1980b86bdc04d792"} Jan 09 01:44:36 crc kubenswrapper[4945]: I0109 01:44:36.276890 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5a8555217da9d6885ea56ad982d193de57ae3037a296a7d1980b86bdc04d792" Jan 09 01:44:36 crc kubenswrapper[4945]: I0109 01:44:36.353737 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 09 01:44:36 crc kubenswrapper[4945]: I0109 01:44:36.371974 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pxtz\" (UniqueName: \"kubernetes.io/projected/a35509ac-ad70-4941-9024-c4bbe22a7497-kube-api-access-7pxtz\") pod \"a35509ac-ad70-4941-9024-c4bbe22a7497\" (UID: \"a35509ac-ad70-4941-9024-c4bbe22a7497\") " Jan 09 01:44:36 crc kubenswrapper[4945]: I0109 01:44:36.372582 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35509ac-ad70-4941-9024-c4bbe22a7497-combined-ca-bundle\") pod \"a35509ac-ad70-4941-9024-c4bbe22a7497\" (UID: \"a35509ac-ad70-4941-9024-c4bbe22a7497\") " Jan 09 01:44:36 crc kubenswrapper[4945]: I0109 01:44:36.373107 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35509ac-ad70-4941-9024-c4bbe22a7497-config-data\") pod \"a35509ac-ad70-4941-9024-c4bbe22a7497\" (UID: \"a35509ac-ad70-4941-9024-c4bbe22a7497\") " Jan 09 01:44:36 crc kubenswrapper[4945]: I0109 01:44:36.466846 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a35509ac-ad70-4941-9024-c4bbe22a7497-kube-api-access-7pxtz" (OuterVolumeSpecName: "kube-api-access-7pxtz") pod "a35509ac-ad70-4941-9024-c4bbe22a7497" (UID: "a35509ac-ad70-4941-9024-c4bbe22a7497"). InnerVolumeSpecName "kube-api-access-7pxtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:44:36 crc kubenswrapper[4945]: I0109 01:44:36.470638 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35509ac-ad70-4941-9024-c4bbe22a7497-config-data" (OuterVolumeSpecName: "config-data") pod "a35509ac-ad70-4941-9024-c4bbe22a7497" (UID: "a35509ac-ad70-4941-9024-c4bbe22a7497"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:44:36 crc kubenswrapper[4945]: I0109 01:44:36.470931 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a35509ac-ad70-4941-9024-c4bbe22a7497-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a35509ac-ad70-4941-9024-c4bbe22a7497" (UID: "a35509ac-ad70-4941-9024-c4bbe22a7497"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:44:36 crc kubenswrapper[4945]: I0109 01:44:36.475873 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a35509ac-ad70-4941-9024-c4bbe22a7497-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:44:36 crc kubenswrapper[4945]: I0109 01:44:36.475903 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pxtz\" (UniqueName: \"kubernetes.io/projected/a35509ac-ad70-4941-9024-c4bbe22a7497-kube-api-access-7pxtz\") on node \"crc\" DevicePath \"\"" Jan 09 01:44:36 crc kubenswrapper[4945]: I0109 01:44:36.475916 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a35509ac-ad70-4941-9024-c4bbe22a7497-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.305269 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.367889 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.378189 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.397300 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 01:44:37 crc kubenswrapper[4945]: E0109 01:44:37.397722 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c991408-c46e-4614-a376-2d459d7bb888" containerName="neutron-dhcp-openstack-openstack-cell1" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.397784 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c991408-c46e-4614-a376-2d459d7bb888" containerName="neutron-dhcp-openstack-openstack-cell1" Jan 09 01:44:37 crc kubenswrapper[4945]: E0109 01:44:37.397805 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cc7755-d846-45a4-910b-69edba95356d" containerName="extract-utilities" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.397811 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cc7755-d846-45a4-910b-69edba95356d" containerName="extract-utilities" Jan 09 01:44:37 crc kubenswrapper[4945]: E0109 01:44:37.397835 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cc7755-d846-45a4-910b-69edba95356d" containerName="extract-content" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.397841 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cc7755-d846-45a4-910b-69edba95356d" containerName="extract-content" Jan 09 01:44:37 crc kubenswrapper[4945]: E0109 01:44:37.397850 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cc7755-d846-45a4-910b-69edba95356d" containerName="registry-server" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.397856 4945 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="08cc7755-d846-45a4-910b-69edba95356d" containerName="registry-server" Jan 09 01:44:37 crc kubenswrapper[4945]: E0109 01:44:37.397889 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a35509ac-ad70-4941-9024-c4bbe22a7497" containerName="nova-cell1-conductor-conductor" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.397896 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="a35509ac-ad70-4941-9024-c4bbe22a7497" containerName="nova-cell1-conductor-conductor" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.398088 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="a35509ac-ad70-4941-9024-c4bbe22a7497" containerName="nova-cell1-conductor-conductor" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.398116 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c991408-c46e-4614-a376-2d459d7bb888" containerName="neutron-dhcp-openstack-openstack-cell1" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.398132 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="08cc7755-d846-45a4-910b-69edba95356d" containerName="registry-server" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.399717 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.406938 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.418083 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.503477 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94dbefda-40cf-4cea-9971-6fabbc79e1f3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"94dbefda-40cf-4cea-9971-6fabbc79e1f3\") " pod="openstack/nova-cell1-conductor-0" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.503853 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94dbefda-40cf-4cea-9971-6fabbc79e1f3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"94dbefda-40cf-4cea-9971-6fabbc79e1f3\") " pod="openstack/nova-cell1-conductor-0" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.504196 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2pt9\" (UniqueName: \"kubernetes.io/projected/94dbefda-40cf-4cea-9971-6fabbc79e1f3-kube-api-access-h2pt9\") pod \"nova-cell1-conductor-0\" (UID: \"94dbefda-40cf-4cea-9971-6fabbc79e1f3\") " pod="openstack/nova-cell1-conductor-0" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.606561 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94dbefda-40cf-4cea-9971-6fabbc79e1f3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"94dbefda-40cf-4cea-9971-6fabbc79e1f3\") " pod="openstack/nova-cell1-conductor-0" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.606659 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94dbefda-40cf-4cea-9971-6fabbc79e1f3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: 
\"94dbefda-40cf-4cea-9971-6fabbc79e1f3\") " pod="openstack/nova-cell1-conductor-0" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.606794 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2pt9\" (UniqueName: \"kubernetes.io/projected/94dbefda-40cf-4cea-9971-6fabbc79e1f3-kube-api-access-h2pt9\") pod \"nova-cell1-conductor-0\" (UID: \"94dbefda-40cf-4cea-9971-6fabbc79e1f3\") " pod="openstack/nova-cell1-conductor-0" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.610782 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94dbefda-40cf-4cea-9971-6fabbc79e1f3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"94dbefda-40cf-4cea-9971-6fabbc79e1f3\") " pod="openstack/nova-cell1-conductor-0" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.612937 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94dbefda-40cf-4cea-9971-6fabbc79e1f3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"94dbefda-40cf-4cea-9971-6fabbc79e1f3\") " pod="openstack/nova-cell1-conductor-0" Jan 09 01:44:37 crc kubenswrapper[4945]: E0109 01:44:37.623309 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda35509ac_ad70_4941_9024_c4bbe22a7497.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda35509ac_ad70_4941_9024_c4bbe22a7497.slice/crio-b5a8555217da9d6885ea56ad982d193de57ae3037a296a7d1980b86bdc04d792\": RecentStats: unable to find data in memory cache]" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.624930 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2pt9\" (UniqueName: \"kubernetes.io/projected/94dbefda-40cf-4cea-9971-6fabbc79e1f3-kube-api-access-h2pt9\") pod \"nova-cell1-conductor-0\" (UID: \"94dbefda-40cf-4cea-9971-6fabbc79e1f3\") " pod="openstack/nova-cell1-conductor-0" Jan 09 01:44:37 crc kubenswrapper[4945]: I0109 01:44:37.750438 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.016388 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a35509ac-ad70-4941-9024-c4bbe22a7497" path="/var/lib/kubelet/pods/a35509ac-ad70-4941-9024-c4bbe22a7497/volumes" Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.257019 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.289572 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="bf0638cb-7d95-4120-9e0f-f14212f84368" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.89:8775/\": read tcp 10.217.0.2:41878->10.217.1.89:8775: read: connection reset by peer" Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.289588 4945 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="bf0638cb-7d95-4120-9e0f-f14212f84368" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.89:8775/\": read tcp 10.217.0.2:41866->10.217.1.89:8775: read: connection reset by peer" Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.321534 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"94dbefda-40cf-4cea-9971-6fabbc79e1f3","Type":"ContainerStarted","Data":"68d696bcdd0e6689b9fd949364e9bd53df43397a38df81f18a062165cc5e3193"} Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.332148 4945 generic.go:334] "Generic (PLEG): container finished" podID="2594cf9d-f20a-4554-96c6-54fe285cc3a4" containerID="092598a02dc4722cc8ed8429dfc50031f8ad429f4fc3e3be781a87ca319cdf8d" exitCode=0 Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.332195 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2594cf9d-f20a-4554-96c6-54fe285cc3a4","Type":"ContainerDied","Data":"092598a02dc4722cc8ed8429dfc50031f8ad429f4fc3e3be781a87ca319cdf8d"} Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.778809 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.843945 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2594cf9d-f20a-4554-96c6-54fe285cc3a4-combined-ca-bundle\") pod \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\" (UID: \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\") " Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.844077 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2594cf9d-f20a-4554-96c6-54fe285cc3a4-config-data\") pod \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\" (UID: \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\") " Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.844841 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2594cf9d-f20a-4554-96c6-54fe285cc3a4-logs\") pod \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\" (UID: \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\") " Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.847638 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7tq8\" (UniqueName: \"kubernetes.io/projected/2594cf9d-f20a-4554-96c6-54fe285cc3a4-kube-api-access-w7tq8\") pod \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\" (UID: \"2594cf9d-f20a-4554-96c6-54fe285cc3a4\") " Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.860257 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2594cf9d-f20a-4554-96c6-54fe285cc3a4-logs" (OuterVolumeSpecName: "logs") pod "2594cf9d-f20a-4554-96c6-54fe285cc3a4" (UID: "2594cf9d-f20a-4554-96c6-54fe285cc3a4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.864871 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2594cf9d-f20a-4554-96c6-54fe285cc3a4-kube-api-access-w7tq8" (OuterVolumeSpecName: "kube-api-access-w7tq8") pod "2594cf9d-f20a-4554-96c6-54fe285cc3a4" (UID: "2594cf9d-f20a-4554-96c6-54fe285cc3a4"). InnerVolumeSpecName "kube-api-access-w7tq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.902035 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2594cf9d-f20a-4554-96c6-54fe285cc3a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2594cf9d-f20a-4554-96c6-54fe285cc3a4" (UID: "2594cf9d-f20a-4554-96c6-54fe285cc3a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.907352 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2594cf9d-f20a-4554-96c6-54fe285cc3a4-config-data" (OuterVolumeSpecName: "config-data") pod "2594cf9d-f20a-4554-96c6-54fe285cc3a4" (UID: "2594cf9d-f20a-4554-96c6-54fe285cc3a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.922096 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.949830 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.961437 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58fbh\" (UniqueName: \"kubernetes.io/projected/e8041767-cb7e-460e-b5c6-d5de80c5f244-kube-api-access-58fbh\") pod \"e8041767-cb7e-460e-b5c6-d5de80c5f244\" (UID: \"e8041767-cb7e-460e-b5c6-d5de80c5f244\") " Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.961627 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8041767-cb7e-460e-b5c6-d5de80c5f244-config-data\") pod \"e8041767-cb7e-460e-b5c6-d5de80c5f244\" (UID: \"e8041767-cb7e-460e-b5c6-d5de80c5f244\") " Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.961784 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8041767-cb7e-460e-b5c6-d5de80c5f244-combined-ca-bundle\") pod \"e8041767-cb7e-460e-b5c6-d5de80c5f244\" (UID: \"e8041767-cb7e-460e-b5c6-d5de80c5f244\") " Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.962285 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7tq8\" (UniqueName: \"kubernetes.io/projected/2594cf9d-f20a-4554-96c6-54fe285cc3a4-kube-api-access-w7tq8\") on node \"crc\" DevicePath \"\"" Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.962304 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2594cf9d-f20a-4554-96c6-54fe285cc3a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.962313 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2594cf9d-f20a-4554-96c6-54fe285cc3a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.962323 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2594cf9d-f20a-4554-96c6-54fe285cc3a4-logs\") on node \"crc\" DevicePath \"\"" Jan 09 01:44:38 crc kubenswrapper[4945]: I0109 01:44:38.967733 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8041767-cb7e-460e-b5c6-d5de80c5f244-kube-api-access-58fbh" (OuterVolumeSpecName: "kube-api-access-58fbh") pod "e8041767-cb7e-460e-b5c6-d5de80c5f244" (UID: "e8041767-cb7e-460e-b5c6-d5de80c5f244"). InnerVolumeSpecName "kube-api-access-58fbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.000724 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8041767-cb7e-460e-b5c6-d5de80c5f244-config-data" (OuterVolumeSpecName: "config-data") pod "e8041767-cb7e-460e-b5c6-d5de80c5f244" (UID: "e8041767-cb7e-460e-b5c6-d5de80c5f244"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.022217 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8041767-cb7e-460e-b5c6-d5de80c5f244-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8041767-cb7e-460e-b5c6-d5de80c5f244" (UID: "e8041767-cb7e-460e-b5c6-d5de80c5f244"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.063511 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0638cb-7d95-4120-9e0f-f14212f84368-combined-ca-bundle\") pod \"bf0638cb-7d95-4120-9e0f-f14212f84368\" (UID: \"bf0638cb-7d95-4120-9e0f-f14212f84368\") " Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.063570 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl5kf\" (UniqueName: \"kubernetes.io/projected/bf0638cb-7d95-4120-9e0f-f14212f84368-kube-api-access-zl5kf\") pod \"bf0638cb-7d95-4120-9e0f-f14212f84368\" (UID: \"bf0638cb-7d95-4120-9e0f-f14212f84368\") " Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.063669 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0638cb-7d95-4120-9e0f-f14212f84368-logs\") pod \"bf0638cb-7d95-4120-9e0f-f14212f84368\" (UID: \"bf0638cb-7d95-4120-9e0f-f14212f84368\") " Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.063781 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0638cb-7d95-4120-9e0f-f14212f84368-config-data\") pod \"bf0638cb-7d95-4120-9e0f-f14212f84368\" (UID: \"bf0638cb-7d95-4120-9e0f-f14212f84368\") " Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.064308 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8041767-cb7e-460e-b5c6-d5de80c5f244-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.064325 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58fbh\" (UniqueName: \"kubernetes.io/projected/e8041767-cb7e-460e-b5c6-d5de80c5f244-kube-api-access-58fbh\") on node \"crc\" DevicePath \"\"" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.064335 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8041767-cb7e-460e-b5c6-d5de80c5f244-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.064365 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf0638cb-7d95-4120-9e0f-f14212f84368-logs" (OuterVolumeSpecName: "logs") pod "bf0638cb-7d95-4120-9e0f-f14212f84368" (UID: "bf0638cb-7d95-4120-9e0f-f14212f84368"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.067395 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf0638cb-7d95-4120-9e0f-f14212f84368-kube-api-access-zl5kf" (OuterVolumeSpecName: "kube-api-access-zl5kf") pod "bf0638cb-7d95-4120-9e0f-f14212f84368" (UID: "bf0638cb-7d95-4120-9e0f-f14212f84368"). InnerVolumeSpecName "kube-api-access-zl5kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.090694 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0638cb-7d95-4120-9e0f-f14212f84368-config-data" (OuterVolumeSpecName: "config-data") pod "bf0638cb-7d95-4120-9e0f-f14212f84368" (UID: "bf0638cb-7d95-4120-9e0f-f14212f84368"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.100945 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0638cb-7d95-4120-9e0f-f14212f84368-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf0638cb-7d95-4120-9e0f-f14212f84368" (UID: "bf0638cb-7d95-4120-9e0f-f14212f84368"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.166563 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0638cb-7d95-4120-9e0f-f14212f84368-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.166589 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl5kf\" (UniqueName: \"kubernetes.io/projected/bf0638cb-7d95-4120-9e0f-f14212f84368-kube-api-access-zl5kf\") on node \"crc\" DevicePath \"\"" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.166599 4945 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0638cb-7d95-4120-9e0f-f14212f84368-logs\") on node \"crc\" DevicePath \"\"" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.166607 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0638cb-7d95-4120-9e0f-f14212f84368-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.385784 4945 generic.go:334] "Generic (PLEG): container finished" podID="bf0638cb-7d95-4120-9e0f-f14212f84368" containerID="ca72050d06ae1ce62944579d59e1b921c32937f95c6feb5e13577e7821c0ecc4" exitCode=0 Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.385849 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf0638cb-7d95-4120-9e0f-f14212f84368","Type":"ContainerDied","Data":"ca72050d06ae1ce62944579d59e1b921c32937f95c6feb5e13577e7821c0ecc4"} Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.385879 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf0638cb-7d95-4120-9e0f-f14212f84368","Type":"ContainerDied","Data":"e9edc4cde644e22ec06e1336f03aa93933a5c799540ac1422cb97675f477b9d6"} Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.385880 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.385896 4945 scope.go:117] "RemoveContainer" containerID="ca72050d06ae1ce62944579d59e1b921c32937f95c6feb5e13577e7821c0ecc4" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.387853 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"94dbefda-40cf-4cea-9971-6fabbc79e1f3","Type":"ContainerStarted","Data":"79d4540abdd93bbfcee654a77de1e77b98e44697794a49ebaad17640c4f9318a"} Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.394420 4945 generic.go:334] "Generic (PLEG): container finished" podID="e8041767-cb7e-460e-b5c6-d5de80c5f244" containerID="b404a7c658e63aed5661cc730e833280aac4914ee8a78764d8031c4332132695" exitCode=0 Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.394473 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.394491 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e8041767-cb7e-460e-b5c6-d5de80c5f244","Type":"ContainerDied","Data":"b404a7c658e63aed5661cc730e833280aac4914ee8a78764d8031c4332132695"} Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.394526 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e8041767-cb7e-460e-b5c6-d5de80c5f244","Type":"ContainerDied","Data":"7a90d40eb2938e9d0c7c2f4adc7f9f2f1e4cf7495e66d1ea21b6629290ee56e9"} Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.397123 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2594cf9d-f20a-4554-96c6-54fe285cc3a4","Type":"ContainerDied","Data":"a74d0b3a1070fcd2f3ca4ac77c841c30d69933821fe3d7a7059cd9fffdc93bca"} Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.397181 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.427257 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.427233987 podStartE2EDuration="2.427233987s" podCreationTimestamp="2026-01-09 01:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 01:44:39.420589583 +0000 UTC m=+8949.731748549" watchObservedRunningTime="2026-01-09 01:44:39.427233987 +0000 UTC m=+8949.738392943" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.427549 4945 scope.go:117] "RemoveContainer" containerID="3e926528bfa7e816d0cc08a2d3446da63a6f54aa16beeadc8359e06c5725f502" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.458285 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.471763 4945 scope.go:117] "RemoveContainer" containerID="ca72050d06ae1ce62944579d59e1b921c32937f95c6feb5e13577e7821c0ecc4" Jan 09 01:44:39 crc kubenswrapper[4945]: E0109 01:44:39.472657 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca72050d06ae1ce62944579d59e1b921c32937f95c6feb5e13577e7821c0ecc4\": container with ID starting with ca72050d06ae1ce62944579d59e1b921c32937f95c6feb5e13577e7821c0ecc4 not found: ID does not exist" containerID="ca72050d06ae1ce62944579d59e1b921c32937f95c6feb5e13577e7821c0ecc4" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.472715 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca72050d06ae1ce62944579d59e1b921c32937f95c6feb5e13577e7821c0ecc4"} err="failed to get container status \"ca72050d06ae1ce62944579d59e1b921c32937f95c6feb5e13577e7821c0ecc4\": rpc error: code = NotFound desc = could not find container \"ca72050d06ae1ce62944579d59e1b921c32937f95c6feb5e13577e7821c0ecc4\": container with ID starting with ca72050d06ae1ce62944579d59e1b921c32937f95c6feb5e13577e7821c0ecc4 not found: ID does not exist" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.472749 4945 scope.go:117] "RemoveContainer" containerID="3e926528bfa7e816d0cc08a2d3446da63a6f54aa16beeadc8359e06c5725f502" Jan 09 01:44:39 crc kubenswrapper[4945]: E0109 01:44:39.474683 4945 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3e926528bfa7e816d0cc08a2d3446da63a6f54aa16beeadc8359e06c5725f502\": container with ID starting with 3e926528bfa7e816d0cc08a2d3446da63a6f54aa16beeadc8359e06c5725f502 not found: ID does not exist" containerID="3e926528bfa7e816d0cc08a2d3446da63a6f54aa16beeadc8359e06c5725f502" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.474729 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e926528bfa7e816d0cc08a2d3446da63a6f54aa16beeadc8359e06c5725f502"} err="failed to get container status \"3e926528bfa7e816d0cc08a2d3446da63a6f54aa16beeadc8359e06c5725f502\": rpc error: code = NotFound desc = could not find container \"3e926528bfa7e816d0cc08a2d3446da63a6f54aa16beeadc8359e06c5725f502\": container with ID starting with 3e926528bfa7e816d0cc08a2d3446da63a6f54aa16beeadc8359e06c5725f502 not found: ID does not exist" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.474760 4945 scope.go:117] "RemoveContainer" containerID="b404a7c658e63aed5661cc730e833280aac4914ee8a78764d8031c4332132695" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.477564 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.488449 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 01:44:39 crc kubenswrapper[4945]: E0109 01:44:39.489053 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8041767-cb7e-460e-b5c6-d5de80c5f244" containerName="nova-scheduler-scheduler" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.489076 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8041767-cb7e-460e-b5c6-d5de80c5f244" containerName="nova-scheduler-scheduler" Jan 09 01:44:39 crc kubenswrapper[4945]: E0109 01:44:39.489097 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2594cf9d-f20a-4554-96c6-54fe285cc3a4" containerName="nova-api-api" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.489105 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2594cf9d-f20a-4554-96c6-54fe285cc3a4" containerName="nova-api-api" Jan 09 01:44:39 crc kubenswrapper[4945]: E0109 01:44:39.489147 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2594cf9d-f20a-4554-96c6-54fe285cc3a4" containerName="nova-api-log" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.489168 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2594cf9d-f20a-4554-96c6-54fe285cc3a4" containerName="nova-api-log" Jan 09 01:44:39 crc kubenswrapper[4945]: E0109 01:44:39.489184 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0638cb-7d95-4120-9e0f-f14212f84368" containerName="nova-metadata-metadata" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.489192 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0638cb-7d95-4120-9e0f-f14212f84368" containerName="nova-metadata-metadata" Jan 09 01:44:39 crc kubenswrapper[4945]: E0109 01:44:39.489214 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0638cb-7d95-4120-9e0f-f14212f84368" containerName="nova-metadata-log" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.489222 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0638cb-7d95-4120-9e0f-f14212f84368" containerName="nova-metadata-log" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.489503 4945 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2594cf9d-f20a-4554-96c6-54fe285cc3a4" containerName="nova-api-log" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.489525 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf0638cb-7d95-4120-9e0f-f14212f84368" containerName="nova-metadata-metadata" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.489544 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8041767-cb7e-460e-b5c6-d5de80c5f244" containerName="nova-scheduler-scheduler" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.489561 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf0638cb-7d95-4120-9e0f-f14212f84368" containerName="nova-metadata-log" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.489577 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="2594cf9d-f20a-4554-96c6-54fe285cc3a4" containerName="nova-api-api" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.490474 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.497779 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.505210 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.526872 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.536443 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.553053 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.554542 4945 scope.go:117] "RemoveContainer" containerID="b404a7c658e63aed5661cc730e833280aac4914ee8a78764d8031c4332132695" Jan 09 01:44:39 crc kubenswrapper[4945]: E0109 01:44:39.555398 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b404a7c658e63aed5661cc730e833280aac4914ee8a78764d8031c4332132695\": container with ID starting with b404a7c658e63aed5661cc730e833280aac4914ee8a78764d8031c4332132695 not found: ID does not exist" containerID="b404a7c658e63aed5661cc730e833280aac4914ee8a78764d8031c4332132695" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.555435 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b404a7c658e63aed5661cc730e833280aac4914ee8a78764d8031c4332132695"} err="failed to get container status \"b404a7c658e63aed5661cc730e833280aac4914ee8a78764d8031c4332132695\": rpc error: code = NotFound desc = could not find container \"b404a7c658e63aed5661cc730e833280aac4914ee8a78764d8031c4332132695\": container with ID starting with b404a7c658e63aed5661cc730e833280aac4914ee8a78764d8031c4332132695 not found: ID does not exist" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.555457 4945 scope.go:117] "RemoveContainer" containerID="092598a02dc4722cc8ed8429dfc50031f8ad429f4fc3e3be781a87ca319cdf8d" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.561852 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.570710 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 
09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.572920 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.575625 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.579941 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.589419 4945 scope.go:117] "RemoveContainer" containerID="a98790eb60522635b3ca0872529a216ea87ce416eccb56d689ace88fcfeb2536" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.591027 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.592811 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.596407 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.612531 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.688963 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8be14dba-24e3-47fe-96ef-6098f1910c8a-config-data\") pod \"nova-scheduler-0\" (UID: \"8be14dba-24e3-47fe-96ef-6098f1910c8a\") " pod="openstack/nova-scheduler-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.689433 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6df8\" (UniqueName: \"kubernetes.io/projected/8be14dba-24e3-47fe-96ef-6098f1910c8a-kube-api-access-l6df8\") pod \"nova-scheduler-0\" (UID: \"8be14dba-24e3-47fe-96ef-6098f1910c8a\") " pod="openstack/nova-scheduler-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.689639 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k7f5\" (UniqueName: \"kubernetes.io/projected/3a3d016b-214a-4ebf-bd39-f0929cd84fe4-kube-api-access-5k7f5\") pod \"nova-api-0\" (UID: \"3a3d016b-214a-4ebf-bd39-f0929cd84fe4\") " pod="openstack/nova-api-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.689945 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3d016b-214a-4ebf-bd39-f0929cd84fe4-config-data\") pod \"nova-api-0\" (UID: \"3a3d016b-214a-4ebf-bd39-f0929cd84fe4\") " pod="openstack/nova-api-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.690117 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8be14dba-24e3-47fe-96ef-6098f1910c8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8be14dba-24e3-47fe-96ef-6098f1910c8a\") " pod="openstack/nova-scheduler-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.690581 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a3d016b-214a-4ebf-bd39-f0929cd84fe4-logs\") pod \"nova-api-0\" (UID: \"3a3d016b-214a-4ebf-bd39-f0929cd84fe4\") " pod="openstack/nova-api-0" 
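
The entries above show a delete-and-recreate cycle rather than an in-place restart: each nova pod name comes back under a fresh UID (nova-api-0 moves from 2594cf9d-f20a-4554-96c6-54fe285cc3a4 to 3a3d016b-214a-4ebf-bd39-f0929cd84fe4, nova-metadata-0 from bf0638cb-7d95-4120-9e0f-f14212f84368 to 1c59bd8a-5b8f-4539-b02c-b73f9664a967, nova-scheduler-0 from e8041767-cb7e-460e-b5c6-d5de80c5f244 to 8be14dba-24e3-47fe-96ef-6098f1910c8a), bracketed by the SyncLoop DELETE/REMOVE/ADD triple and the RemoveStaleState cpu/memory-manager cleanup. A minimal Go sketch for spotting such transitions when reading a log like this one follows; the regular expression is an assumption fitted to the pod="ns/name" ... podUID="uuid" fields visible in the prober and kuberuntime entries above, not a published kubelet log schema:

    // uidwatch.go - a minimal sketch (not part of the kubelet) that reports
    // when a pod name reappears under a new UID in a kubelet log, i.e. a
    // delete-and-recreate cycle like the nova-* pods above. The regex is an
    // assumption based on the log lines shown here.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches entries that carry both fields in this order, e.g.
    //   pod="openstack/nova-api-0" ... podUID="2594cf9d-..."
    var ref = regexp.MustCompile(`pod="([^"]+)".*?podUID="([0-9a-f-]+)"`)

    func main() {
        lastUID := map[string]string{} // pod name -> most recent UID seen
        sc := bufio.NewScanner(os.Stdin)
        // Kubelet log lines can exceed bufio's 64 KiB default token size.
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            m := ref.FindStringSubmatch(sc.Text())
            if m == nil {
                continue
            }
            pod, uid := m[1], m[2]
            if prev, ok := lastUID[pod]; ok && prev != uid {
                fmt.Printf("%s: UID %s -> %s (pod recreated)\n", pod, prev, uid)
            }
            lastUID[pod] = uid
        }
    }

Fed the raw log on stdin (go run uidwatch.go < kubelet.log), it prints one line per UID change it observes for a given pod name over the whole log; whether a particular transition is caught depends on both UIDs appearing in entries of that pod="..." podUID="..." shape, so the output shown by any single excerpt may be partial.
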
Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.691195 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3d016b-214a-4ebf-bd39-f0929cd84fe4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a3d016b-214a-4ebf-bd39-f0929cd84fe4\") " pod="openstack/nova-api-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.802071 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8be14dba-24e3-47fe-96ef-6098f1910c8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8be14dba-24e3-47fe-96ef-6098f1910c8a\") " pod="openstack/nova-scheduler-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.802321 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3d016b-214a-4ebf-bd39-f0929cd84fe4-config-data\") pod \"nova-api-0\" (UID: \"3a3d016b-214a-4ebf-bd39-f0929cd84fe4\") " pod="openstack/nova-api-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.802364 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a3d016b-214a-4ebf-bd39-f0929cd84fe4-logs\") pod \"nova-api-0\" (UID: \"3a3d016b-214a-4ebf-bd39-f0929cd84fe4\") " pod="openstack/nova-api-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.802460 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3d016b-214a-4ebf-bd39-f0929cd84fe4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a3d016b-214a-4ebf-bd39-f0929cd84fe4\") " pod="openstack/nova-api-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.802497 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7qlk\" (UniqueName: \"kubernetes.io/projected/1c59bd8a-5b8f-4539-b02c-b73f9664a967-kube-api-access-q7qlk\") pod \"nova-metadata-0\" (UID: \"1c59bd8a-5b8f-4539-b02c-b73f9664a967\") " pod="openstack/nova-metadata-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.802530 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c59bd8a-5b8f-4539-b02c-b73f9664a967-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1c59bd8a-5b8f-4539-b02c-b73f9664a967\") " pod="openstack/nova-metadata-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.802691 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8be14dba-24e3-47fe-96ef-6098f1910c8a-config-data\") pod \"nova-scheduler-0\" (UID: \"8be14dba-24e3-47fe-96ef-6098f1910c8a\") " pod="openstack/nova-scheduler-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.802728 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c59bd8a-5b8f-4539-b02c-b73f9664a967-config-data\") pod \"nova-metadata-0\" (UID: \"1c59bd8a-5b8f-4539-b02c-b73f9664a967\") " pod="openstack/nova-metadata-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.802778 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6df8\" (UniqueName: 
\"kubernetes.io/projected/8be14dba-24e3-47fe-96ef-6098f1910c8a-kube-api-access-l6df8\") pod \"nova-scheduler-0\" (UID: \"8be14dba-24e3-47fe-96ef-6098f1910c8a\") " pod="openstack/nova-scheduler-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.803068 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k7f5\" (UniqueName: \"kubernetes.io/projected/3a3d016b-214a-4ebf-bd39-f0929cd84fe4-kube-api-access-5k7f5\") pod \"nova-api-0\" (UID: \"3a3d016b-214a-4ebf-bd39-f0929cd84fe4\") " pod="openstack/nova-api-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.803345 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c59bd8a-5b8f-4539-b02c-b73f9664a967-logs\") pod \"nova-metadata-0\" (UID: \"1c59bd8a-5b8f-4539-b02c-b73f9664a967\") " pod="openstack/nova-metadata-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.805450 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a3d016b-214a-4ebf-bd39-f0929cd84fe4-logs\") pod \"nova-api-0\" (UID: \"3a3d016b-214a-4ebf-bd39-f0929cd84fe4\") " pod="openstack/nova-api-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.808902 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a3d016b-214a-4ebf-bd39-f0929cd84fe4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a3d016b-214a-4ebf-bd39-f0929cd84fe4\") " pod="openstack/nova-api-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.809629 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8be14dba-24e3-47fe-96ef-6098f1910c8a-config-data\") pod \"nova-scheduler-0\" (UID: \"8be14dba-24e3-47fe-96ef-6098f1910c8a\") " pod="openstack/nova-scheduler-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.810806 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a3d016b-214a-4ebf-bd39-f0929cd84fe4-config-data\") pod \"nova-api-0\" (UID: \"3a3d016b-214a-4ebf-bd39-f0929cd84fe4\") " pod="openstack/nova-api-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.824338 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6df8\" (UniqueName: \"kubernetes.io/projected/8be14dba-24e3-47fe-96ef-6098f1910c8a-kube-api-access-l6df8\") pod \"nova-scheduler-0\" (UID: \"8be14dba-24e3-47fe-96ef-6098f1910c8a\") " pod="openstack/nova-scheduler-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.832637 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8be14dba-24e3-47fe-96ef-6098f1910c8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8be14dba-24e3-47fe-96ef-6098f1910c8a\") " pod="openstack/nova-scheduler-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.836898 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k7f5\" (UniqueName: \"kubernetes.io/projected/3a3d016b-214a-4ebf-bd39-f0929cd84fe4-kube-api-access-5k7f5\") pod \"nova-api-0\" (UID: \"3a3d016b-214a-4ebf-bd39-f0929cd84fe4\") " pod="openstack/nova-api-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.896474 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.909099 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7qlk\" (UniqueName: \"kubernetes.io/projected/1c59bd8a-5b8f-4539-b02c-b73f9664a967-kube-api-access-q7qlk\") pod \"nova-metadata-0\" (UID: \"1c59bd8a-5b8f-4539-b02c-b73f9664a967\") " pod="openstack/nova-metadata-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.909158 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c59bd8a-5b8f-4539-b02c-b73f9664a967-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1c59bd8a-5b8f-4539-b02c-b73f9664a967\") " pod="openstack/nova-metadata-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.910087 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c59bd8a-5b8f-4539-b02c-b73f9664a967-config-data\") pod \"nova-metadata-0\" (UID: \"1c59bd8a-5b8f-4539-b02c-b73f9664a967\") " pod="openstack/nova-metadata-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.910328 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c59bd8a-5b8f-4539-b02c-b73f9664a967-logs\") pod \"nova-metadata-0\" (UID: \"1c59bd8a-5b8f-4539-b02c-b73f9664a967\") " pod="openstack/nova-metadata-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.910920 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c59bd8a-5b8f-4539-b02c-b73f9664a967-logs\") pod \"nova-metadata-0\" (UID: \"1c59bd8a-5b8f-4539-b02c-b73f9664a967\") " pod="openstack/nova-metadata-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.918930 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c59bd8a-5b8f-4539-b02c-b73f9664a967-config-data\") pod \"nova-metadata-0\" (UID: \"1c59bd8a-5b8f-4539-b02c-b73f9664a967\") " pod="openstack/nova-metadata-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.920194 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c59bd8a-5b8f-4539-b02c-b73f9664a967-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1c59bd8a-5b8f-4539-b02c-b73f9664a967\") " pod="openstack/nova-metadata-0" Jan 09 01:44:39 crc kubenswrapper[4945]: I0109 01:44:39.926335 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7qlk\" (UniqueName: \"kubernetes.io/projected/1c59bd8a-5b8f-4539-b02c-b73f9664a967-kube-api-access-q7qlk\") pod \"nova-metadata-0\" (UID: \"1c59bd8a-5b8f-4539-b02c-b73f9664a967\") " pod="openstack/nova-metadata-0" Jan 09 01:44:40 crc kubenswrapper[4945]: I0109 01:44:40.020738 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 09 01:44:40 crc kubenswrapper[4945]: I0109 01:44:40.066534 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2594cf9d-f20a-4554-96c6-54fe285cc3a4" path="/var/lib/kubelet/pods/2594cf9d-f20a-4554-96c6-54fe285cc3a4/volumes" Jan 09 01:44:40 crc kubenswrapper[4945]: I0109 01:44:40.068870 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf0638cb-7d95-4120-9e0f-f14212f84368" path="/var/lib/kubelet/pods/bf0638cb-7d95-4120-9e0f-f14212f84368/volumes" Jan 09 01:44:40 crc kubenswrapper[4945]: I0109 01:44:40.069752 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8041767-cb7e-460e-b5c6-d5de80c5f244" path="/var/lib/kubelet/pods/e8041767-cb7e-460e-b5c6-d5de80c5f244/volumes" Jan 09 01:44:40 crc kubenswrapper[4945]: I0109 01:44:40.124705 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 09 01:44:40 crc kubenswrapper[4945]: W0109 01:44:40.413159 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a3d016b_214a_4ebf_bd39_f0929cd84fe4.slice/crio-5a69f7aa5f9bc9300e4857a09be26cad904e42fee1b4a9c33d8c33393ebf6766 WatchSource:0}: Error finding container 5a69f7aa5f9bc9300e4857a09be26cad904e42fee1b4a9c33d8c33393ebf6766: Status 404 returned error can't find the container with id 5a69f7aa5f9bc9300e4857a09be26cad904e42fee1b4a9c33d8c33393ebf6766 Jan 09 01:44:40 crc kubenswrapper[4945]: I0109 01:44:40.415186 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 09 01:44:40 crc kubenswrapper[4945]: I0109 01:44:40.421630 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 09 01:44:40 crc kubenswrapper[4945]: I0109 01:44:40.539732 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 09 01:44:40 crc kubenswrapper[4945]: W0109 01:44:40.671685 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8be14dba_24e3_47fe_96ef_6098f1910c8a.slice/crio-f21e7034bb478314f85b09b302a4759c14f5a6e7b088e73f820ca13662f0fe88 WatchSource:0}: Error finding container f21e7034bb478314f85b09b302a4759c14f5a6e7b088e73f820ca13662f0fe88: Status 404 returned error can't find the container with id f21e7034bb478314f85b09b302a4759c14f5a6e7b088e73f820ca13662f0fe88 Jan 09 01:44:40 crc kubenswrapper[4945]: I0109 01:44:40.674435 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 09 01:44:40 crc kubenswrapper[4945]: E0109 01:44:40.692249 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2c391687ba34c753d34322b097cbb7d928b3820599b33d5c27a50e567130203" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 09 01:44:40 crc kubenswrapper[4945]: E0109 01:44:40.695195 4945 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2c391687ba34c753d34322b097cbb7d928b3820599b33d5c27a50e567130203" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 09 01:44:40 crc kubenswrapper[4945]: E0109 01:44:40.698910 4945 log.go:32] "ExecSync 
cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2c391687ba34c753d34322b097cbb7d928b3820599b33d5c27a50e567130203" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 09 01:44:40 crc kubenswrapper[4945]: E0109 01:44:40.699269 4945 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="1523f74d-4bdd-4d29-b779-1ff30d782fed" containerName="nova-cell0-conductor-conductor" Jan 09 01:44:41 crc kubenswrapper[4945]: I0109 01:44:41.433013 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1c59bd8a-5b8f-4539-b02c-b73f9664a967","Type":"ContainerStarted","Data":"08bf3f3dad7e1f8b663258058a8835c49819d81a194430c5539d707276641f3f"} Jan 09 01:44:41 crc kubenswrapper[4945]: I0109 01:44:41.433431 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1c59bd8a-5b8f-4539-b02c-b73f9664a967","Type":"ContainerStarted","Data":"0d4c9625358262bbfe0f1c5ad45f4f1300a345421b4bae26e731ed8d84af2511"} Jan 09 01:44:41 crc kubenswrapper[4945]: I0109 01:44:41.433450 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1c59bd8a-5b8f-4539-b02c-b73f9664a967","Type":"ContainerStarted","Data":"1b9e78e049fed548a59269bca2950a36300e9c5836ad483cbdad4248e54b79e4"} Jan 09 01:44:41 crc kubenswrapper[4945]: I0109 01:44:41.436227 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8be14dba-24e3-47fe-96ef-6098f1910c8a","Type":"ContainerStarted","Data":"2a6a2fb2eac27eab0016599b9ad24693e910a016e0f77bf88126041de19170ea"} Jan 09 01:44:41 crc kubenswrapper[4945]: I0109 01:44:41.436268 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8be14dba-24e3-47fe-96ef-6098f1910c8a","Type":"ContainerStarted","Data":"f21e7034bb478314f85b09b302a4759c14f5a6e7b088e73f820ca13662f0fe88"} Jan 09 01:44:41 crc kubenswrapper[4945]: I0109 01:44:41.440080 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a3d016b-214a-4ebf-bd39-f0929cd84fe4","Type":"ContainerStarted","Data":"0fba526205b05248226a60f0177c35f09f5e52bc98f11adc4a03c1413a81437c"} Jan 09 01:44:41 crc kubenswrapper[4945]: I0109 01:44:41.440108 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a3d016b-214a-4ebf-bd39-f0929cd84fe4","Type":"ContainerStarted","Data":"8718ac19ca6d67e673afe33ffb0f8f47177830db20f9533b4f37c15aec46b77a"} Jan 09 01:44:41 crc kubenswrapper[4945]: I0109 01:44:41.440118 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a3d016b-214a-4ebf-bd39-f0929cd84fe4","Type":"ContainerStarted","Data":"5a69f7aa5f9bc9300e4857a09be26cad904e42fee1b4a9c33d8c33393ebf6766"} Jan 09 01:44:41 crc kubenswrapper[4945]: I0109 01:44:41.461607 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.4615855 podStartE2EDuration="2.4615855s" podCreationTimestamp="2026-01-09 01:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 01:44:41.459269933 +0000 UTC 
m=+8951.770428879" watchObservedRunningTime="2026-01-09 01:44:41.4615855 +0000 UTC m=+8951.772744446" Jan 09 01:44:41 crc kubenswrapper[4945]: I0109 01:44:41.498918 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.49889861 podStartE2EDuration="2.49889861s" podCreationTimestamp="2026-01-09 01:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 01:44:41.477694667 +0000 UTC m=+8951.788853613" watchObservedRunningTime="2026-01-09 01:44:41.49889861 +0000 UTC m=+8951.810057556" Jan 09 01:44:41 crc kubenswrapper[4945]: I0109 01:44:41.499391 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.499385642 podStartE2EDuration="2.499385642s" podCreationTimestamp="2026-01-09 01:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 01:44:41.494329738 +0000 UTC m=+8951.805488704" watchObservedRunningTime="2026-01-09 01:44:41.499385642 +0000 UTC m=+8951.810544588" Jan 09 01:44:43 crc kubenswrapper[4945]: I0109 01:44:43.578131 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:44:43 crc kubenswrapper[4945]: I0109 01:44:43.578506 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:44:45 crc kubenswrapper[4945]: I0109 01:44:45.021878 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 01:44:45 crc kubenswrapper[4945]: I0109 01:44:45.024204 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 09 01:44:45 crc kubenswrapper[4945]: I0109 01:44:45.125100 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 09 01:44:45 crc kubenswrapper[4945]: I0109 01:44:45.490192 4945 generic.go:334] "Generic (PLEG): container finished" podID="1523f74d-4bdd-4d29-b779-1ff30d782fed" containerID="b2c391687ba34c753d34322b097cbb7d928b3820599b33d5c27a50e567130203" exitCode=0 Jan 09 01:44:45 crc kubenswrapper[4945]: I0109 01:44:45.490341 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1523f74d-4bdd-4d29-b779-1ff30d782fed","Type":"ContainerDied","Data":"b2c391687ba34c753d34322b097cbb7d928b3820599b33d5c27a50e567130203"} Jan 09 01:44:45 crc kubenswrapper[4945]: I0109 01:44:45.490379 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1523f74d-4bdd-4d29-b779-1ff30d782fed","Type":"ContainerDied","Data":"bae8b370027468bf3f78af62019edba079d7da09396355818f353269c8709994"} Jan 09 01:44:45 crc kubenswrapper[4945]: I0109 01:44:45.490394 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bae8b370027468bf3f78af62019edba079d7da09396355818f353269c8709994" Jan 09 01:44:45 crc 
kubenswrapper[4945]: I0109 01:44:45.558953 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 09 01:44:45 crc kubenswrapper[4945]: I0109 01:44:45.682058 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1523f74d-4bdd-4d29-b779-1ff30d782fed-config-data\") pod \"1523f74d-4bdd-4d29-b779-1ff30d782fed\" (UID: \"1523f74d-4bdd-4d29-b779-1ff30d782fed\") " Jan 09 01:44:45 crc kubenswrapper[4945]: I0109 01:44:45.682498 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsclf\" (UniqueName: \"kubernetes.io/projected/1523f74d-4bdd-4d29-b779-1ff30d782fed-kube-api-access-wsclf\") pod \"1523f74d-4bdd-4d29-b779-1ff30d782fed\" (UID: \"1523f74d-4bdd-4d29-b779-1ff30d782fed\") " Jan 09 01:44:45 crc kubenswrapper[4945]: I0109 01:44:45.682583 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1523f74d-4bdd-4d29-b779-1ff30d782fed-combined-ca-bundle\") pod \"1523f74d-4bdd-4d29-b779-1ff30d782fed\" (UID: \"1523f74d-4bdd-4d29-b779-1ff30d782fed\") " Jan 09 01:44:45 crc kubenswrapper[4945]: I0109 01:44:45.687725 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1523f74d-4bdd-4d29-b779-1ff30d782fed-kube-api-access-wsclf" (OuterVolumeSpecName: "kube-api-access-wsclf") pod "1523f74d-4bdd-4d29-b779-1ff30d782fed" (UID: "1523f74d-4bdd-4d29-b779-1ff30d782fed"). InnerVolumeSpecName "kube-api-access-wsclf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:44:45 crc kubenswrapper[4945]: I0109 01:44:45.715210 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1523f74d-4bdd-4d29-b779-1ff30d782fed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1523f74d-4bdd-4d29-b779-1ff30d782fed" (UID: "1523f74d-4bdd-4d29-b779-1ff30d782fed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:44:45 crc kubenswrapper[4945]: I0109 01:44:45.724813 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1523f74d-4bdd-4d29-b779-1ff30d782fed-config-data" (OuterVolumeSpecName: "config-data") pod "1523f74d-4bdd-4d29-b779-1ff30d782fed" (UID: "1523f74d-4bdd-4d29-b779-1ff30d782fed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:44:45 crc kubenswrapper[4945]: I0109 01:44:45.785016 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1523f74d-4bdd-4d29-b779-1ff30d782fed-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 01:44:45 crc kubenswrapper[4945]: I0109 01:44:45.785066 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsclf\" (UniqueName: \"kubernetes.io/projected/1523f74d-4bdd-4d29-b779-1ff30d782fed-kube-api-access-wsclf\") on node \"crc\" DevicePath \"\"" Jan 09 01:44:45 crc kubenswrapper[4945]: I0109 01:44:45.785080 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1523f74d-4bdd-4d29-b779-1ff30d782fed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:44:45 crc kubenswrapper[4945]: I0109 01:44:45.994278 4945 scope.go:117] "RemoveContainer" containerID="b2c391687ba34c753d34322b097cbb7d928b3820599b33d5c27a50e567130203" Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.022673 4945 scope.go:117] "RemoveContainer" containerID="81008a15da69ec3abc212239894674df87309ee8e159beed7582615e3602b411" Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.506641 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.559153 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.580949 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.592868 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 01:44:46 crc kubenswrapper[4945]: E0109 01:44:46.593876 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1523f74d-4bdd-4d29-b779-1ff30d782fed" containerName="nova-cell0-conductor-conductor" Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.593923 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="1523f74d-4bdd-4d29-b779-1ff30d782fed" containerName="nova-cell0-conductor-conductor" Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.594555 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="1523f74d-4bdd-4d29-b779-1ff30d782fed" containerName="nova-cell0-conductor-conductor" Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.596571 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.599643 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.605852 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.721423 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2688df2-31d6-4284-b25b-4f17a2ba06ac-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b2688df2-31d6-4284-b25b-4f17a2ba06ac\") " pod="openstack/nova-cell0-conductor-0" Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.721513 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2688df2-31d6-4284-b25b-4f17a2ba06ac-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b2688df2-31d6-4284-b25b-4f17a2ba06ac\") " pod="openstack/nova-cell0-conductor-0" Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.721669 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcmhx\" (UniqueName: \"kubernetes.io/projected/b2688df2-31d6-4284-b25b-4f17a2ba06ac-kube-api-access-tcmhx\") pod \"nova-cell0-conductor-0\" (UID: \"b2688df2-31d6-4284-b25b-4f17a2ba06ac\") " pod="openstack/nova-cell0-conductor-0" Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.823901 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2688df2-31d6-4284-b25b-4f17a2ba06ac-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b2688df2-31d6-4284-b25b-4f17a2ba06ac\") " pod="openstack/nova-cell0-conductor-0" Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.823973 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2688df2-31d6-4284-b25b-4f17a2ba06ac-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b2688df2-31d6-4284-b25b-4f17a2ba06ac\") " pod="openstack/nova-cell0-conductor-0" Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.824093 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcmhx\" (UniqueName: \"kubernetes.io/projected/b2688df2-31d6-4284-b25b-4f17a2ba06ac-kube-api-access-tcmhx\") pod \"nova-cell0-conductor-0\" (UID: \"b2688df2-31d6-4284-b25b-4f17a2ba06ac\") " pod="openstack/nova-cell0-conductor-0" Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.870557 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2688df2-31d6-4284-b25b-4f17a2ba06ac-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b2688df2-31d6-4284-b25b-4f17a2ba06ac\") " pod="openstack/nova-cell0-conductor-0" Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.870561 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2688df2-31d6-4284-b25b-4f17a2ba06ac-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b2688df2-31d6-4284-b25b-4f17a2ba06ac\") " pod="openstack/nova-cell0-conductor-0" Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.870625 4945 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcmhx\" (UniqueName: \"kubernetes.io/projected/b2688df2-31d6-4284-b25b-4f17a2ba06ac-kube-api-access-tcmhx\") pod \"nova-cell0-conductor-0\" (UID: \"b2688df2-31d6-4284-b25b-4f17a2ba06ac\") " pod="openstack/nova-cell0-conductor-0" Jan 09 01:44:46 crc kubenswrapper[4945]: I0109 01:44:46.930865 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 09 01:44:47 crc kubenswrapper[4945]: I0109 01:44:47.543847 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 09 01:44:47 crc kubenswrapper[4945]: W0109 01:44:47.546573 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2688df2_31d6_4284_b25b_4f17a2ba06ac.slice/crio-1e9bb739eded3197bd8fb8af5da5739f03c8c56262c530d8959009639df669e3 WatchSource:0}: Error finding container 1e9bb739eded3197bd8fb8af5da5739f03c8c56262c530d8959009639df669e3: Status 404 returned error can't find the container with id 1e9bb739eded3197bd8fb8af5da5739f03c8c56262c530d8959009639df669e3 Jan 09 01:44:47 crc kubenswrapper[4945]: I0109 01:44:47.781511 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 09 01:44:48 crc kubenswrapper[4945]: I0109 01:44:48.012569 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1523f74d-4bdd-4d29-b779-1ff30d782fed" path="/var/lib/kubelet/pods/1523f74d-4bdd-4d29-b779-1ff30d782fed/volumes" Jan 09 01:44:48 crc kubenswrapper[4945]: I0109 01:44:48.531101 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b2688df2-31d6-4284-b25b-4f17a2ba06ac","Type":"ContainerStarted","Data":"022a330b89a18f0439213731dc2239ccd8e1aaecffd735404b3432a7929b7cd8"} Jan 09 01:44:48 crc kubenswrapper[4945]: I0109 01:44:48.531147 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b2688df2-31d6-4284-b25b-4f17a2ba06ac","Type":"ContainerStarted","Data":"1e9bb739eded3197bd8fb8af5da5739f03c8c56262c530d8959009639df669e3"} Jan 09 01:44:48 crc kubenswrapper[4945]: I0109 01:44:48.531301 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 09 01:44:48 crc kubenswrapper[4945]: I0109 01:44:48.551926 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.551903838 podStartE2EDuration="2.551903838s" podCreationTimestamp="2026-01-09 01:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 01:44:48.546499794 +0000 UTC m=+8958.857658740" watchObservedRunningTime="2026-01-09 01:44:48.551903838 +0000 UTC m=+8958.863062784" Jan 09 01:44:49 crc kubenswrapper[4945]: I0109 01:44:49.897091 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 01:44:49 crc kubenswrapper[4945]: I0109 01:44:49.897624 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 09 01:44:50 crc kubenswrapper[4945]: I0109 01:44:50.022633 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 01:44:50 crc kubenswrapper[4945]: I0109 01:44:50.022672 4945 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 09 01:44:50 crc kubenswrapper[4945]: I0109 01:44:50.125440 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 09 01:44:50 crc kubenswrapper[4945]: I0109 01:44:50.160695 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 09 01:44:50 crc kubenswrapper[4945]: I0109 01:44:50.602358 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 09 01:44:50 crc kubenswrapper[4945]: I0109 01:44:50.979224 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3a3d016b-214a-4ebf-bd39-f0929cd84fe4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 01:44:50 crc kubenswrapper[4945]: I0109 01:44:50.979511 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3a3d016b-214a-4ebf-bd39-f0929cd84fe4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 01:44:51 crc kubenswrapper[4945]: I0109 01:44:51.105361 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1c59bd8a-5b8f-4539-b02c-b73f9664a967" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.193:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 01:44:51 crc kubenswrapper[4945]: I0109 01:44:51.105201 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1c59bd8a-5b8f-4539-b02c-b73f9664a967" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.193:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 09 01:44:56 crc kubenswrapper[4945]: I0109 01:44:56.985389 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 09 01:44:59 crc kubenswrapper[4945]: I0109 01:44:59.901660 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 09 01:44:59 crc kubenswrapper[4945]: I0109 01:44:59.902706 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 09 01:44:59 crc kubenswrapper[4945]: I0109 01:44:59.913684 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 09 01:44:59 crc kubenswrapper[4945]: I0109 01:44:59.914773 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.024622 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.025918 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.027491 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.148751 4945 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm"] Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.150126 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.153374 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.153428 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.164335 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm"] Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.295563 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/473a7968-9635-4862-8b7d-9da282c03c41-config-volume\") pod \"collect-profiles-29465385-g77lm\" (UID: \"473a7968-9635-4862-8b7d-9da282c03c41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.296047 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/473a7968-9635-4862-8b7d-9da282c03c41-secret-volume\") pod \"collect-profiles-29465385-g77lm\" (UID: \"473a7968-9635-4862-8b7d-9da282c03c41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.296273 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njwl8\" (UniqueName: \"kubernetes.io/projected/473a7968-9635-4862-8b7d-9da282c03c41-kube-api-access-njwl8\") pod \"collect-profiles-29465385-g77lm\" (UID: \"473a7968-9635-4862-8b7d-9da282c03c41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.398420 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/473a7968-9635-4862-8b7d-9da282c03c41-config-volume\") pod \"collect-profiles-29465385-g77lm\" (UID: \"473a7968-9635-4862-8b7d-9da282c03c41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.398545 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/473a7968-9635-4862-8b7d-9da282c03c41-secret-volume\") pod \"collect-profiles-29465385-g77lm\" (UID: \"473a7968-9635-4862-8b7d-9da282c03c41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.398662 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njwl8\" (UniqueName: \"kubernetes.io/projected/473a7968-9635-4862-8b7d-9da282c03c41-kube-api-access-njwl8\") pod \"collect-profiles-29465385-g77lm\" (UID: \"473a7968-9635-4862-8b7d-9da282c03c41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.399643 
4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/473a7968-9635-4862-8b7d-9da282c03c41-config-volume\") pod \"collect-profiles-29465385-g77lm\" (UID: \"473a7968-9635-4862-8b7d-9da282c03c41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.415709 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/473a7968-9635-4862-8b7d-9da282c03c41-secret-volume\") pod \"collect-profiles-29465385-g77lm\" (UID: \"473a7968-9635-4862-8b7d-9da282c03c41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.418657 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njwl8\" (UniqueName: \"kubernetes.io/projected/473a7968-9635-4862-8b7d-9da282c03c41-kube-api-access-njwl8\") pod \"collect-profiles-29465385-g77lm\" (UID: \"473a7968-9635-4862-8b7d-9da282c03c41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.479243 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.685333 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.690112 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.697463 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 09 01:45:00 crc kubenswrapper[4945]: I0109 01:45:00.937595 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm"] Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.694220 4945 generic.go:334] "Generic (PLEG): container finished" podID="473a7968-9635-4862-8b7d-9da282c03c41" containerID="8780b940e0c27351fcf453fb63658889c3aa1fe896c5bc532dc74e3b271590f3" exitCode=0 Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.694434 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm" event={"ID":"473a7968-9635-4862-8b7d-9da282c03c41","Type":"ContainerDied","Data":"8780b940e0c27351fcf453fb63658889c3aa1fe896c5bc532dc74e3b271590f3"} Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.694904 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm" event={"ID":"473a7968-9635-4862-8b7d-9da282c03c41","Type":"ContainerStarted","Data":"cb1d353b994ffac1cb6e1ba27048f77a821157e7501165fd765fce4de96190f0"} Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.845788 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"] Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.847149 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.849204 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.849407 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.850184 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.850596 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-gn4z9" Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.850710 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.850793 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.850798 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.863188 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"] Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.934078 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/ac762254-3462-449f-b07c-5cea722eb39f-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.934370 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.934421 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.934497 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.934531 4945 
Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.934562 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.934582 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.934607 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm5xz\" (UniqueName: \"kubernetes.io/projected/ac762254-3462-449f-b07c-5cea722eb39f-kube-api-access-rm5xz\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.934632 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.934656 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:01 crc kubenswrapper[4945]: I0109 01:45:01.934685 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/ac762254-3462-449f-b07c-5cea722eb39f-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.036675 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.036722 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.036749 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.036775 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm5xz\" (UniqueName: \"kubernetes.io/projected/ac762254-3462-449f-b07c-5cea722eb39f-kube-api-access-rm5xz\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.037680 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.037725 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.037760 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/ac762254-3462-449f-b07c-5cea722eb39f-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.037805 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/ac762254-3462-449f-b07c-5cea722eb39f-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.037836 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.037875 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.037945 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.039123 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/ac762254-3462-449f-b07c-5cea722eb39f-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.040483 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/ac762254-3462-449f-b07c-5cea722eb39f-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.043390 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.043584 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.044105 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
\"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.044609 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-ssh-key-openstack-cell1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.045133 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.046578 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.048677 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.050634 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.053710 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm5xz\" (UniqueName: \"kubernetes.io/projected/ac762254-3462-449f-b07c-5cea722eb39f-kube-api-access-rm5xz\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.166388 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" Jan 09 01:45:02 crc kubenswrapper[4945]: I0109 01:45:02.715718 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"] Jan 09 01:45:02 crc kubenswrapper[4945]: W0109 01:45:02.721358 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac762254_3462_449f_b07c_5cea722eb39f.slice/crio-89f9b25fded6fcc2c79c90609a134c8d5e7e7a70b0b6fee99be987a54d1810c1 WatchSource:0}: Error finding container 89f9b25fded6fcc2c79c90609a134c8d5e7e7a70b0b6fee99be987a54d1810c1: Status 404 returned error can't find the container with id 89f9b25fded6fcc2c79c90609a134c8d5e7e7a70b0b6fee99be987a54d1810c1 Jan 09 01:45:03 crc kubenswrapper[4945]: I0109 01:45:03.159592 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm" Jan 09 01:45:03 crc kubenswrapper[4945]: I0109 01:45:03.263277 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njwl8\" (UniqueName: \"kubernetes.io/projected/473a7968-9635-4862-8b7d-9da282c03c41-kube-api-access-njwl8\") pod \"473a7968-9635-4862-8b7d-9da282c03c41\" (UID: \"473a7968-9635-4862-8b7d-9da282c03c41\") " Jan 09 01:45:03 crc kubenswrapper[4945]: I0109 01:45:03.263510 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/473a7968-9635-4862-8b7d-9da282c03c41-config-volume\") pod \"473a7968-9635-4862-8b7d-9da282c03c41\" (UID: \"473a7968-9635-4862-8b7d-9da282c03c41\") " Jan 09 01:45:03 crc kubenswrapper[4945]: I0109 01:45:03.263562 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/473a7968-9635-4862-8b7d-9da282c03c41-secret-volume\") pod \"473a7968-9635-4862-8b7d-9da282c03c41\" (UID: \"473a7968-9635-4862-8b7d-9da282c03c41\") " Jan 09 01:45:03 crc kubenswrapper[4945]: I0109 01:45:03.264510 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/473a7968-9635-4862-8b7d-9da282c03c41-config-volume" (OuterVolumeSpecName: "config-volume") pod "473a7968-9635-4862-8b7d-9da282c03c41" (UID: "473a7968-9635-4862-8b7d-9da282c03c41"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:45:03 crc kubenswrapper[4945]: I0109 01:45:03.265202 4945 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/473a7968-9635-4862-8b7d-9da282c03c41-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 01:45:03 crc kubenswrapper[4945]: I0109 01:45:03.268564 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/473a7968-9635-4862-8b7d-9da282c03c41-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "473a7968-9635-4862-8b7d-9da282c03c41" (UID: "473a7968-9635-4862-8b7d-9da282c03c41"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:45:03 crc kubenswrapper[4945]: I0109 01:45:03.274653 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/473a7968-9635-4862-8b7d-9da282c03c41-kube-api-access-njwl8" (OuterVolumeSpecName: "kube-api-access-njwl8") pod "473a7968-9635-4862-8b7d-9da282c03c41" (UID: "473a7968-9635-4862-8b7d-9da282c03c41"). InnerVolumeSpecName "kube-api-access-njwl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:45:03 crc kubenswrapper[4945]: I0109 01:45:03.366828 4945 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/473a7968-9635-4862-8b7d-9da282c03c41-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 01:45:03 crc kubenswrapper[4945]: I0109 01:45:03.366872 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njwl8\" (UniqueName: \"kubernetes.io/projected/473a7968-9635-4862-8b7d-9da282c03c41-kube-api-access-njwl8\") on node \"crc\" DevicePath \"\"" Jan 09 01:45:03 crc kubenswrapper[4945]: I0109 01:45:03.728619 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm" event={"ID":"473a7968-9635-4862-8b7d-9da282c03c41","Type":"ContainerDied","Data":"cb1d353b994ffac1cb6e1ba27048f77a821157e7501165fd765fce4de96190f0"} Jan 09 01:45:03 crc kubenswrapper[4945]: I0109 01:45:03.728909 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb1d353b994ffac1cb6e1ba27048f77a821157e7501165fd765fce4de96190f0" Jan 09 01:45:03 crc kubenswrapper[4945]: I0109 01:45:03.728879 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465385-g77lm" Jan 09 01:45:03 crc kubenswrapper[4945]: I0109 01:45:03.730563 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" event={"ID":"ac762254-3462-449f-b07c-5cea722eb39f","Type":"ContainerStarted","Data":"35e419303103221f0de1bd1332740b4b2161385b4c1b184edbd9b0fd76884ae3"} Jan 09 01:45:03 crc kubenswrapper[4945]: I0109 01:45:03.730589 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" event={"ID":"ac762254-3462-449f-b07c-5cea722eb39f","Type":"ContainerStarted","Data":"89f9b25fded6fcc2c79c90609a134c8d5e7e7a70b0b6fee99be987a54d1810c1"} Jan 09 01:45:03 crc kubenswrapper[4945]: I0109 01:45:03.756955 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" podStartSLOduration=2.561319611 podStartE2EDuration="2.756930948s" podCreationTimestamp="2026-01-09 01:45:01 +0000 UTC" firstStartedPulling="2026-01-09 01:45:02.724447533 +0000 UTC m=+8973.035606479" lastFinishedPulling="2026-01-09 01:45:02.92005885 +0000 UTC m=+8973.231217816" observedRunningTime="2026-01-09 01:45:03.750895089 +0000 UTC m=+8974.062054035" watchObservedRunningTime="2026-01-09 01:45:03.756930948 +0000 UTC m=+8974.068089884" Jan 09 01:45:04 crc kubenswrapper[4945]: I0109 01:45:04.261967 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q"] Jan 09 01:45:04 crc kubenswrapper[4945]: I0109 01:45:04.279405 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29465340-kvj5q"] Jan 09 01:45:06 crc kubenswrapper[4945]: I0109 01:45:06.018097 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="579e9f90-c898-4c0d-aa7b-6d6bde49e872" path="/var/lib/kubelet/pods/579e9f90-c898-4c0d-aa7b-6d6bde49e872/volumes" Jan 09 01:45:13 crc kubenswrapper[4945]: I0109 01:45:13.578934 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:45:13 crc kubenswrapper[4945]: I0109 01:45:13.579519 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:45:43 crc kubenswrapper[4945]: I0109 01:45:43.578709 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:45:43 crc kubenswrapper[4945]: I0109 01:45:43.579115 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:45:43 crc kubenswrapper[4945]: I0109 01:45:43.579157 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 09 01:45:43 crc kubenswrapper[4945]: I0109 01:45:43.579868 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0493cddfa1d6ef8361a1f817832fc2129eadccd0c7d1c2e91fc162ad1d98b090"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 01:45:43 crc kubenswrapper[4945]: I0109 01:45:43.579915 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://0493cddfa1d6ef8361a1f817832fc2129eadccd0c7d1c2e91fc162ad1d98b090" gracePeriod=600 Jan 09 01:45:44 crc kubenswrapper[4945]: I0109 01:45:44.266195 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="0493cddfa1d6ef8361a1f817832fc2129eadccd0c7d1c2e91fc162ad1d98b090" exitCode=0 Jan 09 01:45:44 crc kubenswrapper[4945]: I0109 01:45:44.266261 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"0493cddfa1d6ef8361a1f817832fc2129eadccd0c7d1c2e91fc162ad1d98b090"} Jan 09 01:45:44 crc kubenswrapper[4945]: I0109 01:45:44.266762 4945 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913"} Jan 09 01:45:44 crc kubenswrapper[4945]: I0109 01:45:44.266781 4945 scope.go:117] "RemoveContainer" containerID="0afda7b3d7661ef126df26ed151c3cd3b23372c3a7d319933601be5dacb36494" Jan 09 01:45:46 crc kubenswrapper[4945]: I0109 01:45:46.211607 4945 scope.go:117] "RemoveContainer" containerID="b65b71aee3155240fd11d465613fbc02ae05d6c4eb032b83013b7898994f372b" Jan 09 01:47:28 crc kubenswrapper[4945]: I0109 01:47:28.218393 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gz9g5"] Jan 09 01:47:28 crc kubenswrapper[4945]: E0109 01:47:28.230095 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="473a7968-9635-4862-8b7d-9da282c03c41" containerName="collect-profiles" Jan 09 01:47:28 crc kubenswrapper[4945]: I0109 01:47:28.230558 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="473a7968-9635-4862-8b7d-9da282c03c41" containerName="collect-profiles" Jan 09 01:47:28 crc kubenswrapper[4945]: I0109 01:47:28.237900 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="473a7968-9635-4862-8b7d-9da282c03c41" containerName="collect-profiles" Jan 09 01:47:28 crc kubenswrapper[4945]: I0109 01:47:28.242232 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gz9g5" Jan 09 01:47:28 crc kubenswrapper[4945]: I0109 01:47:28.244502 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz9g5"] Jan 09 01:47:28 crc kubenswrapper[4945]: I0109 01:47:28.424711 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-catalog-content\") pod \"redhat-marketplace-gz9g5\" (UID: \"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0\") " pod="openshift-marketplace/redhat-marketplace-gz9g5" Jan 09 01:47:28 crc kubenswrapper[4945]: I0109 01:47:28.425197 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-utilities\") pod \"redhat-marketplace-gz9g5\" (UID: \"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0\") " pod="openshift-marketplace/redhat-marketplace-gz9g5" Jan 09 01:47:28 crc kubenswrapper[4945]: I0109 01:47:28.425390 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99jj2\" (UniqueName: \"kubernetes.io/projected/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-kube-api-access-99jj2\") pod \"redhat-marketplace-gz9g5\" (UID: \"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0\") " pod="openshift-marketplace/redhat-marketplace-gz9g5" Jan 09 01:47:28 crc kubenswrapper[4945]: I0109 01:47:28.528395 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-utilities\") pod \"redhat-marketplace-gz9g5\" (UID: \"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0\") " pod="openshift-marketplace/redhat-marketplace-gz9g5" Jan 09 01:47:28 crc kubenswrapper[4945]: I0109 01:47:28.528619 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99jj2\" (UniqueName: 
\"kubernetes.io/projected/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-kube-api-access-99jj2\") pod \"redhat-marketplace-gz9g5\" (UID: \"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0\") " pod="openshift-marketplace/redhat-marketplace-gz9g5" Jan 09 01:47:28 crc kubenswrapper[4945]: I0109 01:47:28.528767 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-catalog-content\") pod \"redhat-marketplace-gz9g5\" (UID: \"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0\") " pod="openshift-marketplace/redhat-marketplace-gz9g5" Jan 09 01:47:28 crc kubenswrapper[4945]: I0109 01:47:28.528888 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-utilities\") pod \"redhat-marketplace-gz9g5\" (UID: \"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0\") " pod="openshift-marketplace/redhat-marketplace-gz9g5" Jan 09 01:47:28 crc kubenswrapper[4945]: I0109 01:47:28.529474 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-catalog-content\") pod \"redhat-marketplace-gz9g5\" (UID: \"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0\") " pod="openshift-marketplace/redhat-marketplace-gz9g5" Jan 09 01:47:28 crc kubenswrapper[4945]: I0109 01:47:28.976594 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99jj2\" (UniqueName: \"kubernetes.io/projected/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-kube-api-access-99jj2\") pod \"redhat-marketplace-gz9g5\" (UID: \"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0\") " pod="openshift-marketplace/redhat-marketplace-gz9g5" Jan 09 01:47:29 crc kubenswrapper[4945]: I0109 01:47:29.176139 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gz9g5" Jan 09 01:47:29 crc kubenswrapper[4945]: I0109 01:47:29.678096 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz9g5"] Jan 09 01:47:30 crc kubenswrapper[4945]: I0109 01:47:30.615625 4945 generic.go:334] "Generic (PLEG): container finished" podID="7b631e0a-49cd-42e4-bdb8-d5f80ab409e0" containerID="1f57a0a5f6589912751a3fade2d968aa04a35d56c8b2a855f5b10036c7a2b2bb" exitCode=0 Jan 09 01:47:30 crc kubenswrapper[4945]: I0109 01:47:30.615662 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz9g5" event={"ID":"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0","Type":"ContainerDied","Data":"1f57a0a5f6589912751a3fade2d968aa04a35d56c8b2a855f5b10036c7a2b2bb"} Jan 09 01:47:30 crc kubenswrapper[4945]: I0109 01:47:30.616054 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz9g5" event={"ID":"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0","Type":"ContainerStarted","Data":"cad57fe207dc659c8d314eec24437c20baa5288afc6c9aac7b5fe1d2ed1b151e"} Jan 09 01:47:30 crc kubenswrapper[4945]: I0109 01:47:30.618490 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 01:47:31 crc kubenswrapper[4945]: I0109 01:47:31.629253 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz9g5" event={"ID":"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0","Type":"ContainerStarted","Data":"0ff9ee8700bfc833fa59acb6659637bcb3d3402ae33e47e567134171847bc24a"} Jan 09 01:47:32 crc kubenswrapper[4945]: I0109 01:47:32.643438 4945 generic.go:334] "Generic (PLEG): container finished" podID="7b631e0a-49cd-42e4-bdb8-d5f80ab409e0" containerID="0ff9ee8700bfc833fa59acb6659637bcb3d3402ae33e47e567134171847bc24a" exitCode=0 Jan 09 01:47:32 crc kubenswrapper[4945]: I0109 01:47:32.643550 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz9g5" event={"ID":"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0","Type":"ContainerDied","Data":"0ff9ee8700bfc833fa59acb6659637bcb3d3402ae33e47e567134171847bc24a"} Jan 09 01:47:33 crc kubenswrapper[4945]: I0109 01:47:33.657548 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz9g5" event={"ID":"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0","Type":"ContainerStarted","Data":"3e1501d07110d8fa7f7d8a138526eafb5d0a7d0fb30fc16a2e8d76802efad87a"} Jan 09 01:47:33 crc kubenswrapper[4945]: I0109 01:47:33.678266 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gz9g5" podStartSLOduration=3.133878041 podStartE2EDuration="5.67823427s" podCreationTimestamp="2026-01-09 01:47:28 +0000 UTC" firstStartedPulling="2026-01-09 01:47:30.617883534 +0000 UTC m=+9120.929042490" lastFinishedPulling="2026-01-09 01:47:33.162239773 +0000 UTC m=+9123.473398719" observedRunningTime="2026-01-09 01:47:33.674428506 +0000 UTC m=+9123.985587492" watchObservedRunningTime="2026-01-09 01:47:33.67823427 +0000 UTC m=+9123.989393246" Jan 09 01:47:35 crc kubenswrapper[4945]: I0109 01:47:35.757872 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j62m8"] Jan 09 01:47:35 crc kubenswrapper[4945]: I0109 01:47:35.763467 4945 util.go:30] "No sandbox for pod can be found. 
Jan 09 01:47:35 crc kubenswrapper[4945]: I0109 01:47:35.757872 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j62m8"]
Jan 09 01:47:35 crc kubenswrapper[4945]: I0109 01:47:35.763467 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j62m8"
Jan 09 01:47:35 crc kubenswrapper[4945]: I0109 01:47:35.769746 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j62m8"]
Jan 09 01:47:35 crc kubenswrapper[4945]: I0109 01:47:35.887704 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08ee9e3e-0648-48ac-beb6-5a1452343a4d-utilities\") pod \"community-operators-j62m8\" (UID: \"08ee9e3e-0648-48ac-beb6-5a1452343a4d\") " pod="openshift-marketplace/community-operators-j62m8"
Jan 09 01:47:35 crc kubenswrapper[4945]: I0109 01:47:35.887791 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08ee9e3e-0648-48ac-beb6-5a1452343a4d-catalog-content\") pod \"community-operators-j62m8\" (UID: \"08ee9e3e-0648-48ac-beb6-5a1452343a4d\") " pod="openshift-marketplace/community-operators-j62m8"
Jan 09 01:47:35 crc kubenswrapper[4945]: I0109 01:47:35.888012 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2gbw\" (UniqueName: \"kubernetes.io/projected/08ee9e3e-0648-48ac-beb6-5a1452343a4d-kube-api-access-k2gbw\") pod \"community-operators-j62m8\" (UID: \"08ee9e3e-0648-48ac-beb6-5a1452343a4d\") " pod="openshift-marketplace/community-operators-j62m8"
Jan 09 01:47:35 crc kubenswrapper[4945]: I0109 01:47:35.991663 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08ee9e3e-0648-48ac-beb6-5a1452343a4d-utilities\") pod \"community-operators-j62m8\" (UID: \"08ee9e3e-0648-48ac-beb6-5a1452343a4d\") " pod="openshift-marketplace/community-operators-j62m8"
Jan 09 01:47:35 crc kubenswrapper[4945]: I0109 01:47:35.991782 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08ee9e3e-0648-48ac-beb6-5a1452343a4d-catalog-content\") pod \"community-operators-j62m8\" (UID: \"08ee9e3e-0648-48ac-beb6-5a1452343a4d\") " pod="openshift-marketplace/community-operators-j62m8"
Jan 09 01:47:35 crc kubenswrapper[4945]: I0109 01:47:35.991865 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2gbw\" (UniqueName: \"kubernetes.io/projected/08ee9e3e-0648-48ac-beb6-5a1452343a4d-kube-api-access-k2gbw\") pod \"community-operators-j62m8\" (UID: \"08ee9e3e-0648-48ac-beb6-5a1452343a4d\") " pod="openshift-marketplace/community-operators-j62m8"
Jan 09 01:47:35 crc kubenswrapper[4945]: I0109 01:47:35.992307 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08ee9e3e-0648-48ac-beb6-5a1452343a4d-utilities\") pod \"community-operators-j62m8\" (UID: \"08ee9e3e-0648-48ac-beb6-5a1452343a4d\") " pod="openshift-marketplace/community-operators-j62m8"
Jan 09 01:47:35 crc kubenswrapper[4945]: I0109 01:47:35.992598 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08ee9e3e-0648-48ac-beb6-5a1452343a4d-catalog-content\") pod \"community-operators-j62m8\" (UID: \"08ee9e3e-0648-48ac-beb6-5a1452343a4d\") " pod="openshift-marketplace/community-operators-j62m8"
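
The three record types above trace the volume manager's reconciler: VerifyControllerAttachedVolume confirms the volume is present in the actual state of the world, MountVolume marks the operation started, and MountVolume.SetUp succeeded records completion. A toy sketch of that desired-state/actual-state reconcile pattern; the types and names are illustrative only and do not mirror kubelet's real structs:

    package main

    import "fmt"

    // volume is a toy stand-in for a desired-state-of-world entry.
    type volume struct{ name, plugin string }

    // reconcile mounts anything desired that is not yet in the actual state.
    func reconcile(desired []volume, actual map[string]bool) {
        for _, v := range desired {
            if actual[v.name] {
                continue // already mounted; nothing to do
            }
            fmt.Printf("operationExecutor.MountVolume started for volume %q (%s)\n", v.name, v.plugin)
            // ... perform the mount; on success, record it in the actual state
            actual[v.name] = true
            fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
        }
    }

    func main() {
        desired := []volume{
            {"utilities", "kubernetes.io/empty-dir"},
            {"catalog-content", "kubernetes.io/empty-dir"},
            {"kube-api-access-k2gbw", "kubernetes.io/projected"},
        }
        reconcile(desired, map[string]bool{})
    }
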
"MountVolume.SetUp succeeded for volume \"kube-api-access-k2gbw\" (UniqueName: \"kubernetes.io/projected/08ee9e3e-0648-48ac-beb6-5a1452343a4d-kube-api-access-k2gbw\") pod \"community-operators-j62m8\" (UID: \"08ee9e3e-0648-48ac-beb6-5a1452343a4d\") " pod="openshift-marketplace/community-operators-j62m8" Jan 09 01:47:36 crc kubenswrapper[4945]: I0109 01:47:36.110167 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j62m8" Jan 09 01:47:36 crc kubenswrapper[4945]: W0109 01:47:36.634961 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08ee9e3e_0648_48ac_beb6_5a1452343a4d.slice/crio-8262333bf06ea156b57f0a77e284d7cedc5e24704aae2b66413770a618e646a2 WatchSource:0}: Error finding container 8262333bf06ea156b57f0a77e284d7cedc5e24704aae2b66413770a618e646a2: Status 404 returned error can't find the container with id 8262333bf06ea156b57f0a77e284d7cedc5e24704aae2b66413770a618e646a2 Jan 09 01:47:36 crc kubenswrapper[4945]: I0109 01:47:36.643656 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j62m8"] Jan 09 01:47:36 crc kubenswrapper[4945]: I0109 01:47:36.690213 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j62m8" event={"ID":"08ee9e3e-0648-48ac-beb6-5a1452343a4d","Type":"ContainerStarted","Data":"8262333bf06ea156b57f0a77e284d7cedc5e24704aae2b66413770a618e646a2"} Jan 09 01:47:37 crc kubenswrapper[4945]: I0109 01:47:37.701465 4945 generic.go:334] "Generic (PLEG): container finished" podID="08ee9e3e-0648-48ac-beb6-5a1452343a4d" containerID="0162b63f9e19c035fb75a964afb930beffae27ce14e497a6c7d33e3336bf07b5" exitCode=0 Jan 09 01:47:37 crc kubenswrapper[4945]: I0109 01:47:37.701534 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j62m8" event={"ID":"08ee9e3e-0648-48ac-beb6-5a1452343a4d","Type":"ContainerDied","Data":"0162b63f9e19c035fb75a964afb930beffae27ce14e497a6c7d33e3336bf07b5"} Jan 09 01:47:38 crc kubenswrapper[4945]: I0109 01:47:38.715053 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j62m8" event={"ID":"08ee9e3e-0648-48ac-beb6-5a1452343a4d","Type":"ContainerStarted","Data":"ebc7842bec33e3a66afa0a5d21d019c6c85f6a8ed8a09d1505692c12d342b0c7"} Jan 09 01:47:39 crc kubenswrapper[4945]: I0109 01:47:39.182559 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gz9g5" Jan 09 01:47:39 crc kubenswrapper[4945]: I0109 01:47:39.183660 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gz9g5" Jan 09 01:47:39 crc kubenswrapper[4945]: I0109 01:47:39.242865 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gz9g5" Jan 09 01:47:39 crc kubenswrapper[4945]: I0109 01:47:39.728276 4945 generic.go:334] "Generic (PLEG): container finished" podID="08ee9e3e-0648-48ac-beb6-5a1452343a4d" containerID="ebc7842bec33e3a66afa0a5d21d019c6c85f6a8ed8a09d1505692c12d342b0c7" exitCode=0 Jan 09 01:47:39 crc kubenswrapper[4945]: I0109 01:47:39.728523 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j62m8" 
event={"ID":"08ee9e3e-0648-48ac-beb6-5a1452343a4d","Type":"ContainerDied","Data":"ebc7842bec33e3a66afa0a5d21d019c6c85f6a8ed8a09d1505692c12d342b0c7"} Jan 09 01:47:39 crc kubenswrapper[4945]: I0109 01:47:39.804188 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gz9g5" Jan 09 01:47:41 crc kubenswrapper[4945]: I0109 01:47:41.546820 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz9g5"] Jan 09 01:47:41 crc kubenswrapper[4945]: I0109 01:47:41.757896 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j62m8" event={"ID":"08ee9e3e-0648-48ac-beb6-5a1452343a4d","Type":"ContainerStarted","Data":"0e2017cfab35d7fff982fc3941af166d730845f67c7670c8db9d39ee2b0dc83e"} Jan 09 01:47:41 crc kubenswrapper[4945]: I0109 01:47:41.791546 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j62m8" podStartSLOduration=4.028610613 podStartE2EDuration="6.791515555s" podCreationTimestamp="2026-01-09 01:47:35 +0000 UTC" firstStartedPulling="2026-01-09 01:47:37.703901408 +0000 UTC m=+9128.015060354" lastFinishedPulling="2026-01-09 01:47:40.46680635 +0000 UTC m=+9130.777965296" observedRunningTime="2026-01-09 01:47:41.781237321 +0000 UTC m=+9132.092396267" watchObservedRunningTime="2026-01-09 01:47:41.791515555 +0000 UTC m=+9132.102674521" Jan 09 01:47:42 crc kubenswrapper[4945]: I0109 01:47:42.764696 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gz9g5" podUID="7b631e0a-49cd-42e4-bdb8-d5f80ab409e0" containerName="registry-server" containerID="cri-o://3e1501d07110d8fa7f7d8a138526eafb5d0a7d0fb30fc16a2e8d76802efad87a" gracePeriod=2 Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.272694 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gz9g5" Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.381214 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-utilities\") pod \"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0\" (UID: \"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0\") " Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.381391 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99jj2\" (UniqueName: \"kubernetes.io/projected/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-kube-api-access-99jj2\") pod \"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0\" (UID: \"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0\") " Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.381418 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-catalog-content\") pod \"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0\" (UID: \"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0\") " Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.382219 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-utilities" (OuterVolumeSpecName: "utilities") pod "7b631e0a-49cd-42e4-bdb8-d5f80ab409e0" (UID: "7b631e0a-49cd-42e4-bdb8-d5f80ab409e0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.386915 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-kube-api-access-99jj2" (OuterVolumeSpecName: "kube-api-access-99jj2") pod "7b631e0a-49cd-42e4-bdb8-d5f80ab409e0" (UID: "7b631e0a-49cd-42e4-bdb8-d5f80ab409e0"). InnerVolumeSpecName "kube-api-access-99jj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.424214 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b631e0a-49cd-42e4-bdb8-d5f80ab409e0" (UID: "7b631e0a-49cd-42e4-bdb8-d5f80ab409e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.484464 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.484504 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99jj2\" (UniqueName: \"kubernetes.io/projected/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-kube-api-access-99jj2\") on node \"crc\" DevicePath \"\"" Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.484523 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.578178 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.578263 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.779414 4945 generic.go:334] "Generic (PLEG): container finished" podID="7b631e0a-49cd-42e4-bdb8-d5f80ab409e0" containerID="3e1501d07110d8fa7f7d8a138526eafb5d0a7d0fb30fc16a2e8d76802efad87a" exitCode=0 Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.779463 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz9g5" event={"ID":"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0","Type":"ContainerDied","Data":"3e1501d07110d8fa7f7d8a138526eafb5d0a7d0fb30fc16a2e8d76802efad87a"} Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.779503 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz9g5" event={"ID":"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0","Type":"ContainerDied","Data":"cad57fe207dc659c8d314eec24437c20baa5288afc6c9aac7b5fe1d2ed1b151e"} Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.779497 4945 util.go:48] "No ready sandbox for pod can be found. 
Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.779414 4945 generic.go:334] "Generic (PLEG): container finished" podID="7b631e0a-49cd-42e4-bdb8-d5f80ab409e0" containerID="3e1501d07110d8fa7f7d8a138526eafb5d0a7d0fb30fc16a2e8d76802efad87a" exitCode=0
Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.779463 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz9g5" event={"ID":"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0","Type":"ContainerDied","Data":"3e1501d07110d8fa7f7d8a138526eafb5d0a7d0fb30fc16a2e8d76802efad87a"}
Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.779503 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gz9g5" event={"ID":"7b631e0a-49cd-42e4-bdb8-d5f80ab409e0","Type":"ContainerDied","Data":"cad57fe207dc659c8d314eec24437c20baa5288afc6c9aac7b5fe1d2ed1b151e"}
Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.779497 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gz9g5"
Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.779583 4945 scope.go:117] "RemoveContainer" containerID="3e1501d07110d8fa7f7d8a138526eafb5d0a7d0fb30fc16a2e8d76802efad87a"
Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.816374 4945 scope.go:117] "RemoveContainer" containerID="0ff9ee8700bfc833fa59acb6659637bcb3d3402ae33e47e567134171847bc24a"
Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.822223 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz9g5"]
Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.833382 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gz9g5"]
Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.867817 4945 scope.go:117] "RemoveContainer" containerID="1f57a0a5f6589912751a3fade2d968aa04a35d56c8b2a855f5b10036c7a2b2bb"
Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.911058 4945 scope.go:117] "RemoveContainer" containerID="3e1501d07110d8fa7f7d8a138526eafb5d0a7d0fb30fc16a2e8d76802efad87a"
Jan 09 01:47:43 crc kubenswrapper[4945]: E0109 01:47:43.911552 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e1501d07110d8fa7f7d8a138526eafb5d0a7d0fb30fc16a2e8d76802efad87a\": container with ID starting with 3e1501d07110d8fa7f7d8a138526eafb5d0a7d0fb30fc16a2e8d76802efad87a not found: ID does not exist" containerID="3e1501d07110d8fa7f7d8a138526eafb5d0a7d0fb30fc16a2e8d76802efad87a"
Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.911597 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e1501d07110d8fa7f7d8a138526eafb5d0a7d0fb30fc16a2e8d76802efad87a"} err="failed to get container status \"3e1501d07110d8fa7f7d8a138526eafb5d0a7d0fb30fc16a2e8d76802efad87a\": rpc error: code = NotFound desc = could not find container \"3e1501d07110d8fa7f7d8a138526eafb5d0a7d0fb30fc16a2e8d76802efad87a\": container with ID starting with 3e1501d07110d8fa7f7d8a138526eafb5d0a7d0fb30fc16a2e8d76802efad87a not found: ID does not exist"
Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.911625 4945 scope.go:117] "RemoveContainer" containerID="0ff9ee8700bfc833fa59acb6659637bcb3d3402ae33e47e567134171847bc24a"
Jan 09 01:47:43 crc kubenswrapper[4945]: E0109 01:47:43.911947 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ff9ee8700bfc833fa59acb6659637bcb3d3402ae33e47e567134171847bc24a\": container with ID starting with 0ff9ee8700bfc833fa59acb6659637bcb3d3402ae33e47e567134171847bc24a not found: ID does not exist" containerID="0ff9ee8700bfc833fa59acb6659637bcb3d3402ae33e47e567134171847bc24a"
Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.912013 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ff9ee8700bfc833fa59acb6659637bcb3d3402ae33e47e567134171847bc24a"} err="failed to get container status \"0ff9ee8700bfc833fa59acb6659637bcb3d3402ae33e47e567134171847bc24a\": rpc error: code = NotFound desc = could not find container \"0ff9ee8700bfc833fa59acb6659637bcb3d3402ae33e47e567134171847bc24a\": container with ID starting with 0ff9ee8700bfc833fa59acb6659637bcb3d3402ae33e47e567134171847bc24a not found: ID does not exist"
containerID="1f57a0a5f6589912751a3fade2d968aa04a35d56c8b2a855f5b10036c7a2b2bb" Jan 09 01:47:43 crc kubenswrapper[4945]: E0109 01:47:43.912333 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f57a0a5f6589912751a3fade2d968aa04a35d56c8b2a855f5b10036c7a2b2bb\": container with ID starting with 1f57a0a5f6589912751a3fade2d968aa04a35d56c8b2a855f5b10036c7a2b2bb not found: ID does not exist" containerID="1f57a0a5f6589912751a3fade2d968aa04a35d56c8b2a855f5b10036c7a2b2bb" Jan 09 01:47:43 crc kubenswrapper[4945]: I0109 01:47:43.912382 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f57a0a5f6589912751a3fade2d968aa04a35d56c8b2a855f5b10036c7a2b2bb"} err="failed to get container status \"1f57a0a5f6589912751a3fade2d968aa04a35d56c8b2a855f5b10036c7a2b2bb\": rpc error: code = NotFound desc = could not find container \"1f57a0a5f6589912751a3fade2d968aa04a35d56c8b2a855f5b10036c7a2b2bb\": container with ID starting with 1f57a0a5f6589912751a3fade2d968aa04a35d56c8b2a855f5b10036c7a2b2bb not found: ID does not exist" Jan 09 01:47:44 crc kubenswrapper[4945]: I0109 01:47:44.017282 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b631e0a-49cd-42e4-bdb8-d5f80ab409e0" path="/var/lib/kubelet/pods/7b631e0a-49cd-42e4-bdb8-d5f80ab409e0/volumes" Jan 09 01:47:46 crc kubenswrapper[4945]: I0109 01:47:46.111582 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j62m8" Jan 09 01:47:46 crc kubenswrapper[4945]: I0109 01:47:46.111847 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j62m8" Jan 09 01:47:46 crc kubenswrapper[4945]: I0109 01:47:46.156298 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j62m8" Jan 09 01:47:46 crc kubenswrapper[4945]: I0109 01:47:46.859731 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j62m8" Jan 09 01:47:47 crc kubenswrapper[4945]: I0109 01:47:47.950559 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j62m8"] Jan 09 01:47:48 crc kubenswrapper[4945]: I0109 01:47:48.830798 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j62m8" podUID="08ee9e3e-0648-48ac-beb6-5a1452343a4d" containerName="registry-server" containerID="cri-o://0e2017cfab35d7fff982fc3941af166d730845f67c7670c8db9d39ee2b0dc83e" gracePeriod=2 Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.336856 4945 util.go:48] "No ready sandbox for pod can be found. 
Jan 09 01:47:46 crc kubenswrapper[4945]: I0109 01:47:46.111582 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j62m8"
Jan 09 01:47:46 crc kubenswrapper[4945]: I0109 01:47:46.111847 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j62m8"
Jan 09 01:47:46 crc kubenswrapper[4945]: I0109 01:47:46.156298 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j62m8"
Jan 09 01:47:46 crc kubenswrapper[4945]: I0109 01:47:46.859731 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j62m8"
Jan 09 01:47:47 crc kubenswrapper[4945]: I0109 01:47:47.950559 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j62m8"]
Jan 09 01:47:48 crc kubenswrapper[4945]: I0109 01:47:48.830798 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j62m8" podUID="08ee9e3e-0648-48ac-beb6-5a1452343a4d" containerName="registry-server" containerID="cri-o://0e2017cfab35d7fff982fc3941af166d730845f67c7670c8db9d39ee2b0dc83e" gracePeriod=2
Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.336856 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j62m8"
Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.513818 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2gbw\" (UniqueName: \"kubernetes.io/projected/08ee9e3e-0648-48ac-beb6-5a1452343a4d-kube-api-access-k2gbw\") pod \"08ee9e3e-0648-48ac-beb6-5a1452343a4d\" (UID: \"08ee9e3e-0648-48ac-beb6-5a1452343a4d\") "
Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.513912 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08ee9e3e-0648-48ac-beb6-5a1452343a4d-catalog-content\") pod \"08ee9e3e-0648-48ac-beb6-5a1452343a4d\" (UID: \"08ee9e3e-0648-48ac-beb6-5a1452343a4d\") "
Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.514050 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08ee9e3e-0648-48ac-beb6-5a1452343a4d-utilities\") pod \"08ee9e3e-0648-48ac-beb6-5a1452343a4d\" (UID: \"08ee9e3e-0648-48ac-beb6-5a1452343a4d\") "
Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.514900 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08ee9e3e-0648-48ac-beb6-5a1452343a4d-utilities" (OuterVolumeSpecName: "utilities") pod "08ee9e3e-0648-48ac-beb6-5a1452343a4d" (UID: "08ee9e3e-0648-48ac-beb6-5a1452343a4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.515447 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08ee9e3e-0648-48ac-beb6-5a1452343a4d-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.519383 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08ee9e3e-0648-48ac-beb6-5a1452343a4d-kube-api-access-k2gbw" (OuterVolumeSpecName: "kube-api-access-k2gbw") pod "08ee9e3e-0648-48ac-beb6-5a1452343a4d" (UID: "08ee9e3e-0648-48ac-beb6-5a1452343a4d"). InnerVolumeSpecName "kube-api-access-k2gbw". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.618682 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2gbw\" (UniqueName: \"kubernetes.io/projected/08ee9e3e-0648-48ac-beb6-5a1452343a4d-kube-api-access-k2gbw\") on node \"crc\" DevicePath \"\"" Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.618723 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08ee9e3e-0648-48ac-beb6-5a1452343a4d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.848553 4945 generic.go:334] "Generic (PLEG): container finished" podID="08ee9e3e-0648-48ac-beb6-5a1452343a4d" containerID="0e2017cfab35d7fff982fc3941af166d730845f67c7670c8db9d39ee2b0dc83e" exitCode=0 Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.848662 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j62m8" Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.848664 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j62m8" event={"ID":"08ee9e3e-0648-48ac-beb6-5a1452343a4d","Type":"ContainerDied","Data":"0e2017cfab35d7fff982fc3941af166d730845f67c7670c8db9d39ee2b0dc83e"} Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.849236 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j62m8" event={"ID":"08ee9e3e-0648-48ac-beb6-5a1452343a4d","Type":"ContainerDied","Data":"8262333bf06ea156b57f0a77e284d7cedc5e24704aae2b66413770a618e646a2"} Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.849299 4945 scope.go:117] "RemoveContainer" containerID="0e2017cfab35d7fff982fc3941af166d730845f67c7670c8db9d39ee2b0dc83e" Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.870985 4945 scope.go:117] "RemoveContainer" containerID="ebc7842bec33e3a66afa0a5d21d019c6c85f6a8ed8a09d1505692c12d342b0c7" Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.883532 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j62m8"] Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.894939 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j62m8"] Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.937819 4945 scope.go:117] "RemoveContainer" containerID="0162b63f9e19c035fb75a964afb930beffae27ce14e497a6c7d33e3336bf07b5" Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.973835 4945 scope.go:117] "RemoveContainer" containerID="0e2017cfab35d7fff982fc3941af166d730845f67c7670c8db9d39ee2b0dc83e" Jan 09 01:47:49 crc kubenswrapper[4945]: E0109 01:47:49.974393 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e2017cfab35d7fff982fc3941af166d730845f67c7670c8db9d39ee2b0dc83e\": container with ID starting with 0e2017cfab35d7fff982fc3941af166d730845f67c7670c8db9d39ee2b0dc83e not found: ID does not exist" containerID="0e2017cfab35d7fff982fc3941af166d730845f67c7670c8db9d39ee2b0dc83e" Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.974582 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e2017cfab35d7fff982fc3941af166d730845f67c7670c8db9d39ee2b0dc83e"} err="failed to get container status 
\"0e2017cfab35d7fff982fc3941af166d730845f67c7670c8db9d39ee2b0dc83e\": rpc error: code = NotFound desc = could not find container \"0e2017cfab35d7fff982fc3941af166d730845f67c7670c8db9d39ee2b0dc83e\": container with ID starting with 0e2017cfab35d7fff982fc3941af166d730845f67c7670c8db9d39ee2b0dc83e not found: ID does not exist" Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.974729 4945 scope.go:117] "RemoveContainer" containerID="ebc7842bec33e3a66afa0a5d21d019c6c85f6a8ed8a09d1505692c12d342b0c7" Jan 09 01:47:49 crc kubenswrapper[4945]: E0109 01:47:49.975270 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebc7842bec33e3a66afa0a5d21d019c6c85f6a8ed8a09d1505692c12d342b0c7\": container with ID starting with ebc7842bec33e3a66afa0a5d21d019c6c85f6a8ed8a09d1505692c12d342b0c7 not found: ID does not exist" containerID="ebc7842bec33e3a66afa0a5d21d019c6c85f6a8ed8a09d1505692c12d342b0c7" Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.975308 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebc7842bec33e3a66afa0a5d21d019c6c85f6a8ed8a09d1505692c12d342b0c7"} err="failed to get container status \"ebc7842bec33e3a66afa0a5d21d019c6c85f6a8ed8a09d1505692c12d342b0c7\": rpc error: code = NotFound desc = could not find container \"ebc7842bec33e3a66afa0a5d21d019c6c85f6a8ed8a09d1505692c12d342b0c7\": container with ID starting with ebc7842bec33e3a66afa0a5d21d019c6c85f6a8ed8a09d1505692c12d342b0c7 not found: ID does not exist" Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.975329 4945 scope.go:117] "RemoveContainer" containerID="0162b63f9e19c035fb75a964afb930beffae27ce14e497a6c7d33e3336bf07b5" Jan 09 01:47:49 crc kubenswrapper[4945]: E0109 01:47:49.975562 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0162b63f9e19c035fb75a964afb930beffae27ce14e497a6c7d33e3336bf07b5\": container with ID starting with 0162b63f9e19c035fb75a964afb930beffae27ce14e497a6c7d33e3336bf07b5 not found: ID does not exist" containerID="0162b63f9e19c035fb75a964afb930beffae27ce14e497a6c7d33e3336bf07b5" Jan 09 01:47:49 crc kubenswrapper[4945]: I0109 01:47:49.975586 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0162b63f9e19c035fb75a964afb930beffae27ce14e497a6c7d33e3336bf07b5"} err="failed to get container status \"0162b63f9e19c035fb75a964afb930beffae27ce14e497a6c7d33e3336bf07b5\": rpc error: code = NotFound desc = could not find container \"0162b63f9e19c035fb75a964afb930beffae27ce14e497a6c7d33e3336bf07b5\": container with ID starting with 0162b63f9e19c035fb75a964afb930beffae27ce14e497a6c7d33e3336bf07b5 not found: ID does not exist" Jan 09 01:47:50 crc kubenswrapper[4945]: I0109 01:47:50.012639 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08ee9e3e-0648-48ac-beb6-5a1452343a4d" path="/var/lib/kubelet/pods/08ee9e3e-0648-48ac-beb6-5a1452343a4d/volumes" Jan 09 01:48:00 crc kubenswrapper[4945]: I0109 01:48:00.021349 4945 generic.go:334] "Generic (PLEG): container finished" podID="ac762254-3462-449f-b07c-5cea722eb39f" containerID="35e419303103221f0de1bd1332740b4b2161385b4c1b184edbd9b0fd76884ae3" exitCode=0 Jan 09 01:48:00 crc kubenswrapper[4945]: I0109 01:48:00.021418 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" 
event={"ID":"ac762254-3462-449f-b07c-5cea722eb39f","Type":"ContainerDied","Data":"35e419303103221f0de1bd1332740b4b2161385b4c1b184edbd9b0fd76884ae3"} Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.517585 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.701050 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-inventory\") pod \"ac762254-3462-449f-b07c-5cea722eb39f\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.701121 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-compute-config-1\") pod \"ac762254-3462-449f-b07c-5cea722eb39f\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.701150 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-ssh-key-openstack-cell1\") pod \"ac762254-3462-449f-b07c-5cea722eb39f\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.701249 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/ac762254-3462-449f-b07c-5cea722eb39f-nova-cells-global-config-0\") pod \"ac762254-3462-449f-b07c-5cea722eb39f\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.701298 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-combined-ca-bundle\") pod \"ac762254-3462-449f-b07c-5cea722eb39f\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.701401 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-migration-ssh-key-1\") pod \"ac762254-3462-449f-b07c-5cea722eb39f\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.701431 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/ac762254-3462-449f-b07c-5cea722eb39f-nova-cells-global-config-1\") pod \"ac762254-3462-449f-b07c-5cea722eb39f\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.701551 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-migration-ssh-key-0\") pod \"ac762254-3462-449f-b07c-5cea722eb39f\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.701591 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm5xz\" (UniqueName: 
\"kubernetes.io/projected/ac762254-3462-449f-b07c-5cea722eb39f-kube-api-access-rm5xz\") pod \"ac762254-3462-449f-b07c-5cea722eb39f\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.701629 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-compute-config-0\") pod \"ac762254-3462-449f-b07c-5cea722eb39f\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.701658 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-ceph\") pod \"ac762254-3462-449f-b07c-5cea722eb39f\" (UID: \"ac762254-3462-449f-b07c-5cea722eb39f\") " Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.707171 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac762254-3462-449f-b07c-5cea722eb39f-kube-api-access-rm5xz" (OuterVolumeSpecName: "kube-api-access-rm5xz") pod "ac762254-3462-449f-b07c-5cea722eb39f" (UID: "ac762254-3462-449f-b07c-5cea722eb39f"). InnerVolumeSpecName "kube-api-access-rm5xz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.709224 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "ac762254-3462-449f-b07c-5cea722eb39f" (UID: "ac762254-3462-449f-b07c-5cea722eb39f"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.714342 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-ceph" (OuterVolumeSpecName: "ceph") pod "ac762254-3462-449f-b07c-5cea722eb39f" (UID: "ac762254-3462-449f-b07c-5cea722eb39f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.731413 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac762254-3462-449f-b07c-5cea722eb39f-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "ac762254-3462-449f-b07c-5cea722eb39f" (UID: "ac762254-3462-449f-b07c-5cea722eb39f"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.734483 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "ac762254-3462-449f-b07c-5cea722eb39f" (UID: "ac762254-3462-449f-b07c-5cea722eb39f"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.737469 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "ac762254-3462-449f-b07c-5cea722eb39f" (UID: "ac762254-3462-449f-b07c-5cea722eb39f"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.748437 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac762254-3462-449f-b07c-5cea722eb39f-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "ac762254-3462-449f-b07c-5cea722eb39f" (UID: "ac762254-3462-449f-b07c-5cea722eb39f"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.749467 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "ac762254-3462-449f-b07c-5cea722eb39f" (UID: "ac762254-3462-449f-b07c-5cea722eb39f"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.750588 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "ac762254-3462-449f-b07c-5cea722eb39f" (UID: "ac762254-3462-449f-b07c-5cea722eb39f"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.756083 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "ac762254-3462-449f-b07c-5cea722eb39f" (UID: "ac762254-3462-449f-b07c-5cea722eb39f"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.756348 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-inventory" (OuterVolumeSpecName: "inventory") pod "ac762254-3462-449f-b07c-5cea722eb39f" (UID: "ac762254-3462-449f-b07c-5cea722eb39f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.804205 4945 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-inventory\") on node \"crc\" DevicePath \"\"" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.804243 4945 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.804258 4945 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.804275 4945 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/ac762254-3462-449f-b07c-5cea722eb39f-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.804287 4945 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.804300 4945 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.804313 4945 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/ac762254-3462-449f-b07c-5cea722eb39f-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.804325 4945 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.804337 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm5xz\" (UniqueName: \"kubernetes.io/projected/ac762254-3462-449f-b07c-5cea722eb39f-kube-api-access-rm5xz\") on node \"crc\" DevicePath \"\"" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.804348 4945 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 09 01:48:01 crc kubenswrapper[4945]: I0109 01:48:01.804360 4945 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac762254-3462-449f-b07c-5cea722eb39f-ceph\") on node \"crc\" DevicePath \"\"" Jan 09 01:48:02 crc kubenswrapper[4945]: I0109 01:48:02.043281 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt" event={"ID":"ac762254-3462-449f-b07c-5cea722eb39f","Type":"ContainerDied","Data":"89f9b25fded6fcc2c79c90609a134c8d5e7e7a70b0b6fee99be987a54d1810c1"} Jan 09 01:48:02 crc kubenswrapper[4945]: I0109 
Jan 09 01:48:02 crc kubenswrapper[4945]: I0109 01:48:02.043324 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f9b25fded6fcc2c79c90609a134c8d5e7e7a70b0b6fee99be987a54d1810c1"
Jan 09 01:48:02 crc kubenswrapper[4945]: I0109 01:48:02.043361 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt"
Jan 09 01:48:13 crc kubenswrapper[4945]: I0109 01:48:13.578427 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 01:48:13 crc kubenswrapper[4945]: I0109 01:48:13.578977 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.104228 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xz8gf"]
Jan 09 01:48:28 crc kubenswrapper[4945]: E0109 01:48:28.105319 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac762254-3462-449f-b07c-5cea722eb39f" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1"
Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.105339 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac762254-3462-449f-b07c-5cea722eb39f" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1"
Jan 09 01:48:28 crc kubenswrapper[4945]: E0109 01:48:28.105375 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b631e0a-49cd-42e4-bdb8-d5f80ab409e0" containerName="registry-server"
Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.105384 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b631e0a-49cd-42e4-bdb8-d5f80ab409e0" containerName="registry-server"
Jan 09 01:48:28 crc kubenswrapper[4945]: E0109 01:48:28.105410 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ee9e3e-0648-48ac-beb6-5a1452343a4d" containerName="extract-utilities"
Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.105418 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ee9e3e-0648-48ac-beb6-5a1452343a4d" containerName="extract-utilities"
Jan 09 01:48:28 crc kubenswrapper[4945]: E0109 01:48:28.105435 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b631e0a-49cd-42e4-bdb8-d5f80ab409e0" containerName="extract-utilities"
Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.105442 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b631e0a-49cd-42e4-bdb8-d5f80ab409e0" containerName="extract-utilities"
Jan 09 01:48:28 crc kubenswrapper[4945]: E0109 01:48:28.105458 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ee9e3e-0648-48ac-beb6-5a1452343a4d" containerName="registry-server"
Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.105466 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ee9e3e-0648-48ac-beb6-5a1452343a4d" containerName="registry-server"
podUID="7b631e0a-49cd-42e4-bdb8-d5f80ab409e0" containerName="extract-content" Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.105497 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b631e0a-49cd-42e4-bdb8-d5f80ab409e0" containerName="extract-content" Jan 09 01:48:28 crc kubenswrapper[4945]: E0109 01:48:28.105518 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ee9e3e-0648-48ac-beb6-5a1452343a4d" containerName="extract-content" Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.105525 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ee9e3e-0648-48ac-beb6-5a1452343a4d" containerName="extract-content" Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.105859 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b631e0a-49cd-42e4-bdb8-d5f80ab409e0" containerName="registry-server" Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.105896 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="08ee9e3e-0648-48ac-beb6-5a1452343a4d" containerName="registry-server" Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.105911 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac762254-3462-449f-b07c-5cea722eb39f" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.107613 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xz8gf" Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.166241 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xz8gf"] Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.192960 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88334746-ef94-4acf-a32a-ebc2a36ea250-catalog-content\") pod \"certified-operators-xz8gf\" (UID: \"88334746-ef94-4acf-a32a-ebc2a36ea250\") " pod="openshift-marketplace/certified-operators-xz8gf" Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.193139 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88334746-ef94-4acf-a32a-ebc2a36ea250-utilities\") pod \"certified-operators-xz8gf\" (UID: \"88334746-ef94-4acf-a32a-ebc2a36ea250\") " pod="openshift-marketplace/certified-operators-xz8gf" Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.193196 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klx5m\" (UniqueName: \"kubernetes.io/projected/88334746-ef94-4acf-a32a-ebc2a36ea250-kube-api-access-klx5m\") pod \"certified-operators-xz8gf\" (UID: \"88334746-ef94-4acf-a32a-ebc2a36ea250\") " pod="openshift-marketplace/certified-operators-xz8gf" Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.295642 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88334746-ef94-4acf-a32a-ebc2a36ea250-catalog-content\") pod \"certified-operators-xz8gf\" (UID: \"88334746-ef94-4acf-a32a-ebc2a36ea250\") " pod="openshift-marketplace/certified-operators-xz8gf" Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.295960 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/88334746-ef94-4acf-a32a-ebc2a36ea250-utilities\") pod \"certified-operators-xz8gf\" (UID: \"88334746-ef94-4acf-a32a-ebc2a36ea250\") " pod="openshift-marketplace/certified-operators-xz8gf" Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.296092 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klx5m\" (UniqueName: \"kubernetes.io/projected/88334746-ef94-4acf-a32a-ebc2a36ea250-kube-api-access-klx5m\") pod \"certified-operators-xz8gf\" (UID: \"88334746-ef94-4acf-a32a-ebc2a36ea250\") " pod="openshift-marketplace/certified-operators-xz8gf" Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.296262 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88334746-ef94-4acf-a32a-ebc2a36ea250-catalog-content\") pod \"certified-operators-xz8gf\" (UID: \"88334746-ef94-4acf-a32a-ebc2a36ea250\") " pod="openshift-marketplace/certified-operators-xz8gf" Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.296557 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88334746-ef94-4acf-a32a-ebc2a36ea250-utilities\") pod \"certified-operators-xz8gf\" (UID: \"88334746-ef94-4acf-a32a-ebc2a36ea250\") " pod="openshift-marketplace/certified-operators-xz8gf" Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.315137 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klx5m\" (UniqueName: \"kubernetes.io/projected/88334746-ef94-4acf-a32a-ebc2a36ea250-kube-api-access-klx5m\") pod \"certified-operators-xz8gf\" (UID: \"88334746-ef94-4acf-a32a-ebc2a36ea250\") " pod="openshift-marketplace/certified-operators-xz8gf" Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.456251 4945 util.go:30] "No sandbox for pod can be found. 
Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.456251 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xz8gf"
Jan 09 01:48:28 crc kubenswrapper[4945]: I0109 01:48:28.992152 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xz8gf"]
Jan 09 01:48:29 crc kubenswrapper[4945]: I0109 01:48:29.337037 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xz8gf" event={"ID":"88334746-ef94-4acf-a32a-ebc2a36ea250","Type":"ContainerStarted","Data":"dada21049e9ec564672195c5037d8b89edbb87a26f3e0dc1d5c26ddb3edf00b0"}
Jan 09 01:48:30 crc kubenswrapper[4945]: I0109 01:48:30.351605 4945 generic.go:334] "Generic (PLEG): container finished" podID="88334746-ef94-4acf-a32a-ebc2a36ea250" containerID="4dec208fa6ccfe5d6af70280f71c89ead67ab76859418ba2e47484c4f2c99dc4" exitCode=0
Jan 09 01:48:30 crc kubenswrapper[4945]: I0109 01:48:30.351763 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xz8gf" event={"ID":"88334746-ef94-4acf-a32a-ebc2a36ea250","Type":"ContainerDied","Data":"4dec208fa6ccfe5d6af70280f71c89ead67ab76859418ba2e47484c4f2c99dc4"}
Jan 09 01:48:32 crc kubenswrapper[4945]: I0109 01:48:32.375873 4945 generic.go:334] "Generic (PLEG): container finished" podID="88334746-ef94-4acf-a32a-ebc2a36ea250" containerID="6a85a4bb8e4b43140d362f2723672ec1bb9c4a0440c71c40f3cef79223843509" exitCode=0
Jan 09 01:48:32 crc kubenswrapper[4945]: I0109 01:48:32.375977 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xz8gf" event={"ID":"88334746-ef94-4acf-a32a-ebc2a36ea250","Type":"ContainerDied","Data":"6a85a4bb8e4b43140d362f2723672ec1bb9c4a0440c71c40f3cef79223843509"}
Jan 09 01:48:33 crc kubenswrapper[4945]: I0109 01:48:33.388326 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xz8gf" event={"ID":"88334746-ef94-4acf-a32a-ebc2a36ea250","Type":"ContainerStarted","Data":"171c64302746c4784464c6580d1329115e1ff77249fe519f52fc4bc3a6125188"}
Jan 09 01:48:33 crc kubenswrapper[4945]: I0109 01:48:33.410751 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xz8gf" podStartSLOduration=2.727460269 podStartE2EDuration="5.410726964s" podCreationTimestamp="2026-01-09 01:48:28 +0000 UTC" firstStartedPulling="2026-01-09 01:48:30.354058014 +0000 UTC m=+9180.665216980" lastFinishedPulling="2026-01-09 01:48:33.037324729 +0000 UTC m=+9183.348483675" observedRunningTime="2026-01-09 01:48:33.403139346 +0000 UTC m=+9183.714298312" watchObservedRunningTime="2026-01-09 01:48:33.410726964 +0000 UTC m=+9183.721885900"
Jan 09 01:48:38 crc kubenswrapper[4945]: I0109 01:48:38.457390 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xz8gf"
Jan 09 01:48:38 crc kubenswrapper[4945]: I0109 01:48:38.457889 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xz8gf"
Jan 09 01:48:38 crc kubenswrapper[4945]: I0109 01:48:38.546179 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xz8gf"
Jan 09 01:48:39 crc kubenswrapper[4945]: I0109 01:48:39.528334 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xz8gf"
source="api" pods=["openshift-marketplace/certified-operators-xz8gf"] Jan 09 01:48:41 crc kubenswrapper[4945]: I0109 01:48:41.479603 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xz8gf" podUID="88334746-ef94-4acf-a32a-ebc2a36ea250" containerName="registry-server" containerID="cri-o://171c64302746c4784464c6580d1329115e1ff77249fe519f52fc4bc3a6125188" gracePeriod=2 Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.134556 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xz8gf" Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.308695 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klx5m\" (UniqueName: \"kubernetes.io/projected/88334746-ef94-4acf-a32a-ebc2a36ea250-kube-api-access-klx5m\") pod \"88334746-ef94-4acf-a32a-ebc2a36ea250\" (UID: \"88334746-ef94-4acf-a32a-ebc2a36ea250\") " Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.308837 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88334746-ef94-4acf-a32a-ebc2a36ea250-catalog-content\") pod \"88334746-ef94-4acf-a32a-ebc2a36ea250\" (UID: \"88334746-ef94-4acf-a32a-ebc2a36ea250\") " Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.308967 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88334746-ef94-4acf-a32a-ebc2a36ea250-utilities\") pod \"88334746-ef94-4acf-a32a-ebc2a36ea250\" (UID: \"88334746-ef94-4acf-a32a-ebc2a36ea250\") " Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.309862 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88334746-ef94-4acf-a32a-ebc2a36ea250-utilities" (OuterVolumeSpecName: "utilities") pod "88334746-ef94-4acf-a32a-ebc2a36ea250" (UID: "88334746-ef94-4acf-a32a-ebc2a36ea250"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.310363 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88334746-ef94-4acf-a32a-ebc2a36ea250-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.323259 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88334746-ef94-4acf-a32a-ebc2a36ea250-kube-api-access-klx5m" (OuterVolumeSpecName: "kube-api-access-klx5m") pod "88334746-ef94-4acf-a32a-ebc2a36ea250" (UID: "88334746-ef94-4acf-a32a-ebc2a36ea250"). InnerVolumeSpecName "kube-api-access-klx5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.351117 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88334746-ef94-4acf-a32a-ebc2a36ea250-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88334746-ef94-4acf-a32a-ebc2a36ea250" (UID: "88334746-ef94-4acf-a32a-ebc2a36ea250"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.412727 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klx5m\" (UniqueName: \"kubernetes.io/projected/88334746-ef94-4acf-a32a-ebc2a36ea250-kube-api-access-klx5m\") on node \"crc\" DevicePath \"\"" Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.413054 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88334746-ef94-4acf-a32a-ebc2a36ea250-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.496077 4945 generic.go:334] "Generic (PLEG): container finished" podID="88334746-ef94-4acf-a32a-ebc2a36ea250" containerID="171c64302746c4784464c6580d1329115e1ff77249fe519f52fc4bc3a6125188" exitCode=0 Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.496147 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xz8gf" event={"ID":"88334746-ef94-4acf-a32a-ebc2a36ea250","Type":"ContainerDied","Data":"171c64302746c4784464c6580d1329115e1ff77249fe519f52fc4bc3a6125188"} Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.496187 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xz8gf" event={"ID":"88334746-ef94-4acf-a32a-ebc2a36ea250","Type":"ContainerDied","Data":"dada21049e9ec564672195c5037d8b89edbb87a26f3e0dc1d5c26ddb3edf00b0"} Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.496218 4945 scope.go:117] "RemoveContainer" containerID="171c64302746c4784464c6580d1329115e1ff77249fe519f52fc4bc3a6125188" Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.496426 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xz8gf" Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.537476 4945 scope.go:117] "RemoveContainer" containerID="6a85a4bb8e4b43140d362f2723672ec1bb9c4a0440c71c40f3cef79223843509" Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.538923 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xz8gf"] Jan 09 01:48:42 crc kubenswrapper[4945]: I0109 01:48:42.548469 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xz8gf"] Jan 09 01:48:43 crc kubenswrapper[4945]: I0109 01:48:43.279895 4945 scope.go:117] "RemoveContainer" containerID="4dec208fa6ccfe5d6af70280f71c89ead67ab76859418ba2e47484c4f2c99dc4" Jan 09 01:48:43 crc kubenswrapper[4945]: I0109 01:48:43.335163 4945 scope.go:117] "RemoveContainer" containerID="171c64302746c4784464c6580d1329115e1ff77249fe519f52fc4bc3a6125188" Jan 09 01:48:43 crc kubenswrapper[4945]: E0109 01:48:43.335900 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"171c64302746c4784464c6580d1329115e1ff77249fe519f52fc4bc3a6125188\": container with ID starting with 171c64302746c4784464c6580d1329115e1ff77249fe519f52fc4bc3a6125188 not found: ID does not exist" containerID="171c64302746c4784464c6580d1329115e1ff77249fe519f52fc4bc3a6125188" Jan 09 01:48:43 crc kubenswrapper[4945]: I0109 01:48:43.335939 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"171c64302746c4784464c6580d1329115e1ff77249fe519f52fc4bc3a6125188"} err="failed to get container status \"171c64302746c4784464c6580d1329115e1ff77249fe519f52fc4bc3a6125188\": rpc error: code = NotFound desc = could not find container \"171c64302746c4784464c6580d1329115e1ff77249fe519f52fc4bc3a6125188\": container with ID starting with 171c64302746c4784464c6580d1329115e1ff77249fe519f52fc4bc3a6125188 not found: ID does not exist" Jan 09 01:48:43 crc kubenswrapper[4945]: I0109 01:48:43.335964 4945 scope.go:117] "RemoveContainer" containerID="6a85a4bb8e4b43140d362f2723672ec1bb9c4a0440c71c40f3cef79223843509" Jan 09 01:48:43 crc kubenswrapper[4945]: E0109 01:48:43.337540 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a85a4bb8e4b43140d362f2723672ec1bb9c4a0440c71c40f3cef79223843509\": container with ID starting with 6a85a4bb8e4b43140d362f2723672ec1bb9c4a0440c71c40f3cef79223843509 not found: ID does not exist" containerID="6a85a4bb8e4b43140d362f2723672ec1bb9c4a0440c71c40f3cef79223843509" Jan 09 01:48:43 crc kubenswrapper[4945]: I0109 01:48:43.337610 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a85a4bb8e4b43140d362f2723672ec1bb9c4a0440c71c40f3cef79223843509"} err="failed to get container status \"6a85a4bb8e4b43140d362f2723672ec1bb9c4a0440c71c40f3cef79223843509\": rpc error: code = NotFound desc = could not find container \"6a85a4bb8e4b43140d362f2723672ec1bb9c4a0440c71c40f3cef79223843509\": container with ID starting with 6a85a4bb8e4b43140d362f2723672ec1bb9c4a0440c71c40f3cef79223843509 not found: ID does not exist" Jan 09 01:48:43 crc kubenswrapper[4945]: I0109 01:48:43.337663 4945 scope.go:117] "RemoveContainer" containerID="4dec208fa6ccfe5d6af70280f71c89ead67ab76859418ba2e47484c4f2c99dc4" Jan 09 01:48:43 crc kubenswrapper[4945]: E0109 01:48:43.338069 4945 log.go:32] "ContainerStatus from runtime service 
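Each RemoveContainer above races with a deletion that already happened, so the runtime answers with gRPC NotFound; the kubelet logs the error and moves on, treating removal as idempotent. A small sketch of that pattern, assuming google.golang.org/grpc for the status codes (the helper name is mine, not the kubelet's):

    package sketch

    import (
    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeIfPresent mirrors the behavior visible above: a NotFound from
    // the runtime means the container is already gone, which is success.
    func removeIfPresent(remove func(id string) error, id string) error {
    	err := remove(id)
    	if err != nil && status.Code(err) == codes.NotFound {
    		return nil // already removed; nothing to do
    	}
    	return err
    }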
failed" err="rpc error: code = NotFound desc = could not find container \"4dec208fa6ccfe5d6af70280f71c89ead67ab76859418ba2e47484c4f2c99dc4\": container with ID starting with 4dec208fa6ccfe5d6af70280f71c89ead67ab76859418ba2e47484c4f2c99dc4 not found: ID does not exist" containerID="4dec208fa6ccfe5d6af70280f71c89ead67ab76859418ba2e47484c4f2c99dc4" Jan 09 01:48:43 crc kubenswrapper[4945]: I0109 01:48:43.338100 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dec208fa6ccfe5d6af70280f71c89ead67ab76859418ba2e47484c4f2c99dc4"} err="failed to get container status \"4dec208fa6ccfe5d6af70280f71c89ead67ab76859418ba2e47484c4f2c99dc4\": rpc error: code = NotFound desc = could not find container \"4dec208fa6ccfe5d6af70280f71c89ead67ab76859418ba2e47484c4f2c99dc4\": container with ID starting with 4dec208fa6ccfe5d6af70280f71c89ead67ab76859418ba2e47484c4f2c99dc4 not found: ID does not exist" Jan 09 01:48:43 crc kubenswrapper[4945]: I0109 01:48:43.578404 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:48:43 crc kubenswrapper[4945]: I0109 01:48:43.578479 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:48:43 crc kubenswrapper[4945]: I0109 01:48:43.578540 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 09 01:48:43 crc kubenswrapper[4945]: I0109 01:48:43.579564 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 01:48:43 crc kubenswrapper[4945]: I0109 01:48:43.579645 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913" gracePeriod=600 Jan 09 01:48:43 crc kubenswrapper[4945]: E0109 01:48:43.709947 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:48:44 crc kubenswrapper[4945]: I0109 01:48:44.015391 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88334746-ef94-4acf-a32a-ebc2a36ea250" path="/var/lib/kubelet/pods/88334746-ef94-4acf-a32a-ebc2a36ea250/volumes" Jan 09 01:48:44 crc kubenswrapper[4945]: I0109 01:48:44.521751 4945 generic.go:334] 
"Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913" exitCode=0 Jan 09 01:48:44 crc kubenswrapper[4945]: I0109 01:48:44.521827 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913"} Jan 09 01:48:44 crc kubenswrapper[4945]: I0109 01:48:44.522153 4945 scope.go:117] "RemoveContainer" containerID="0493cddfa1d6ef8361a1f817832fc2129eadccd0c7d1c2e91fc162ad1d98b090" Jan 09 01:48:44 crc kubenswrapper[4945]: I0109 01:48:44.522906 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913" Jan 09 01:48:44 crc kubenswrapper[4945]: E0109 01:48:44.523248 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:48:56 crc kubenswrapper[4945]: I0109 01:48:56.000598 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913" Jan 09 01:48:56 crc kubenswrapper[4945]: E0109 01:48:56.001557 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:49:09 crc kubenswrapper[4945]: I0109 01:49:09.000691 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913" Jan 09 01:49:09 crc kubenswrapper[4945]: E0109 01:49:09.001608 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:49:20 crc kubenswrapper[4945]: I0109 01:49:20.006831 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913" Jan 09 01:49:20 crc kubenswrapper[4945]: E0109 01:49:20.007787 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:49:31 crc kubenswrapper[4945]: I0109 01:49:31.000228 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913" 
Jan 09 01:49:31 crc kubenswrapper[4945]: E0109 01:49:31.001016 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:49:46 crc kubenswrapper[4945]: I0109 01:49:46.001663 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913"
Jan 09 01:49:46 crc kubenswrapper[4945]: E0109 01:49:46.002733 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:50:01 crc kubenswrapper[4945]: I0109 01:50:01.000112 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913"
Jan 09 01:50:01 crc kubenswrapper[4945]: E0109 01:50:01.000907 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:50:16 crc kubenswrapper[4945]: I0109 01:50:16.000911 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913"
Jan 09 01:50:16 crc kubenswrapper[4945]: E0109 01:50:16.002146 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:50:18 crc kubenswrapper[4945]: I0109 01:50:18.501561 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"]
Jan 09 01:50:18 crc kubenswrapper[4945]: I0109 01:50:18.502334 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa" containerName="adoption" containerID="cri-o://052eda0645f2453de67f09f024f72f04d2def12030a18b6e73e0636b3a22200a" gracePeriod=30
Jan 09 01:50:31 crc kubenswrapper[4945]: I0109 01:50:31.000308 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913"
Jan 09 01:50:31 crc kubenswrapper[4945]: E0109 01:50:31.001765 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:50:42 crc kubenswrapper[4945]: I0109 01:50:42.000923 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913"
Jan 09 01:50:42 crc kubenswrapper[4945]: E0109 01:50:42.002394 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:50:43 crc kubenswrapper[4945]: I0109 01:50:43.088574 4945 trace.go:236] Trace[1784630049]: "Calculate volume metrics of mariadb-data for pod openstack/mariadb-copy-data" (09-Jan-2026 01:50:42.047) (total time: 1040ms):
Jan 09 01:50:43 crc kubenswrapper[4945]: Trace[1784630049]: [1.040803232s] [1.040803232s] END
Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.037267 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.096986 4945 generic.go:334] "Generic (PLEG): container finished" podID="8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa" containerID="052eda0645f2453de67f09f024f72f04d2def12030a18b6e73e0636b3a22200a" exitCode=137
Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.097057 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa","Type":"ContainerDied","Data":"052eda0645f2453de67f09f024f72f04d2def12030a18b6e73e0636b3a22200a"}
Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.097087 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa","Type":"ContainerDied","Data":"382c57369fb561ddad4f2d8b27d197d7a6eed40617b875234be9c4c1546183cb"}
Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.097103 4945 scope.go:117] "RemoveContainer" containerID="052eda0645f2453de67f09f024f72f04d2def12030a18b6e73e0636b3a22200a"
Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.097219 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
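The adoption container's exitCode=137 is the signature of a SIGKILL after the grace period ran out: the kill was issued at 01:50:18.502 with gracePeriod=30, and the container is reported dead at 01:50:49, almost exactly 30 seconds later, so it evidently did not exit on SIGTERM. 137 follows the shell convention 128 + signal number, with SIGKILL being 9 on Linux; compare exitCode=0 for registry-server earlier, which quit within its 2s grace. In Go terms:

    package main

    import (
    	"fmt"
    	"syscall"
    )

    // Shell convention: a process killed by a signal exits with 128+signo.
    func exitCodeFor(sig syscall.Signal) int { return 128 + int(sig) }

    func main() {
    	fmt.Println(exitCodeFor(syscall.SIGKILL)) // 137, as logged here
    	fmt.Println(exitCodeFor(syscall.SIGTERM)) // 143, the graceful variant
    }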
Need to start a new one" pod="openstack/mariadb-copy-data" Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.122781 4945 scope.go:117] "RemoveContainer" containerID="052eda0645f2453de67f09f024f72f04d2def12030a18b6e73e0636b3a22200a" Jan 09 01:50:49 crc kubenswrapper[4945]: E0109 01:50:49.123316 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"052eda0645f2453de67f09f024f72f04d2def12030a18b6e73e0636b3a22200a\": container with ID starting with 052eda0645f2453de67f09f024f72f04d2def12030a18b6e73e0636b3a22200a not found: ID does not exist" containerID="052eda0645f2453de67f09f024f72f04d2def12030a18b6e73e0636b3a22200a" Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.123369 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"052eda0645f2453de67f09f024f72f04d2def12030a18b6e73e0636b3a22200a"} err="failed to get container status \"052eda0645f2453de67f09f024f72f04d2def12030a18b6e73e0636b3a22200a\": rpc error: code = NotFound desc = could not find container \"052eda0645f2453de67f09f024f72f04d2def12030a18b6e73e0636b3a22200a\": container with ID starting with 052eda0645f2453de67f09f024f72f04d2def12030a18b6e73e0636b3a22200a not found: ID does not exist" Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.161712 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-190fb6b5-e1d4-4814-aa1d-817b3b961885\") pod \"8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa\" (UID: \"8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa\") " Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.161814 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsxtr\" (UniqueName: \"kubernetes.io/projected/8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa-kube-api-access-fsxtr\") pod \"8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa\" (UID: \"8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa\") " Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.170163 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa-kube-api-access-fsxtr" (OuterVolumeSpecName: "kube-api-access-fsxtr") pod "8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa" (UID: "8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa"). InnerVolumeSpecName "kube-api-access-fsxtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.184509 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-190fb6b5-e1d4-4814-aa1d-817b3b961885" (OuterVolumeSpecName: "mariadb-data") pod "8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa" (UID: "8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa"). InnerVolumeSpecName "pvc-190fb6b5-e1d4-4814-aa1d-817b3b961885". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.265128 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-190fb6b5-e1d4-4814-aa1d-817b3b961885\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-190fb6b5-e1d4-4814-aa1d-817b3b961885\") on node \"crc\" " Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.265388 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsxtr\" (UniqueName: \"kubernetes.io/projected/8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa-kube-api-access-fsxtr\") on node \"crc\" DevicePath \"\"" Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.293072 4945 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.293227 4945 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-190fb6b5-e1d4-4814-aa1d-817b3b961885" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-190fb6b5-e1d4-4814-aa1d-817b3b961885") on node "crc" Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.368088 4945 reconciler_common.go:293] "Volume detached for volume \"pvc-190fb6b5-e1d4-4814-aa1d-817b3b961885\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-190fb6b5-e1d4-4814-aa1d-817b3b961885\") on node \"crc\" DevicePath \"\"" Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.442810 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Jan 09 01:50:49 crc kubenswrapper[4945]: I0109 01:50:49.453494 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Jan 09 01:50:50 crc kubenswrapper[4945]: I0109 01:50:50.017313 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa" path="/var/lib/kubelet/pods/8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa/volumes" Jan 09 01:50:50 crc kubenswrapper[4945]: I0109 01:50:50.250771 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Jan 09 01:50:50 crc kubenswrapper[4945]: I0109 01:50:50.251120 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="807e629f-7d47-4c5f-8b6c-ed1191b40698" containerName="adoption" containerID="cri-o://4d6ef5d4dfa45e97094dd21d9dd1247b6536d1cb12b448067a8876cb0f125d71" gracePeriod=30 Jan 09 01:50:57 crc kubenswrapper[4945]: I0109 01:50:57.000352 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913" Jan 09 01:50:57 crc kubenswrapper[4945]: E0109 01:50:57.001563 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:51:11 crc kubenswrapper[4945]: I0109 01:51:11.000420 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913" Jan 09 01:51:11 crc kubenswrapper[4945]: E0109 01:51:11.001351 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
Jan 09 01:51:20 crc kubenswrapper[4945]: I0109 01:51:20.423769 4945 generic.go:334] "Generic (PLEG): container finished" podID="807e629f-7d47-4c5f-8b6c-ed1191b40698" containerID="4d6ef5d4dfa45e97094dd21d9dd1247b6536d1cb12b448067a8876cb0f125d71" exitCode=137
Jan 09 01:51:20 crc kubenswrapper[4945]: I0109 01:51:20.424382 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"807e629f-7d47-4c5f-8b6c-ed1191b40698","Type":"ContainerDied","Data":"4d6ef5d4dfa45e97094dd21d9dd1247b6536d1cb12b448067a8876cb0f125d71"}
Jan 09 01:51:20 crc kubenswrapper[4945]: I0109 01:51:20.760196 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Jan 09 01:51:20 crc kubenswrapper[4945]: I0109 01:51:20.908171 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/807e629f-7d47-4c5f-8b6c-ed1191b40698-ovn-data-cert\") pod \"807e629f-7d47-4c5f-8b6c-ed1191b40698\" (UID: \"807e629f-7d47-4c5f-8b6c-ed1191b40698\") "
Jan 09 01:51:20 crc kubenswrapper[4945]: I0109 01:51:20.909153 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65d2835b-bc34-4f52-9465-8db15c16395a\") pod \"807e629f-7d47-4c5f-8b6c-ed1191b40698\" (UID: \"807e629f-7d47-4c5f-8b6c-ed1191b40698\") "
Jan 09 01:51:20 crc kubenswrapper[4945]: I0109 01:51:20.910130 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97hlg\" (UniqueName: \"kubernetes.io/projected/807e629f-7d47-4c5f-8b6c-ed1191b40698-kube-api-access-97hlg\") pod \"807e629f-7d47-4c5f-8b6c-ed1191b40698\" (UID: \"807e629f-7d47-4c5f-8b6c-ed1191b40698\") "
Jan 09 01:51:20 crc kubenswrapper[4945]: I0109 01:51:20.925427 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65d2835b-bc34-4f52-9465-8db15c16395a" (OuterVolumeSpecName: "ovn-data") pod "807e629f-7d47-4c5f-8b6c-ed1191b40698" (UID: "807e629f-7d47-4c5f-8b6c-ed1191b40698"). InnerVolumeSpecName "pvc-65d2835b-bc34-4f52-9465-8db15c16395a". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 09 01:51:20 crc kubenswrapper[4945]: I0109 01:51:20.970504 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/807e629f-7d47-4c5f-8b6c-ed1191b40698-kube-api-access-97hlg" (OuterVolumeSpecName: "kube-api-access-97hlg") pod "807e629f-7d47-4c5f-8b6c-ed1191b40698" (UID: "807e629f-7d47-4c5f-8b6c-ed1191b40698"). InnerVolumeSpecName "kube-api-access-97hlg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:51:20 crc kubenswrapper[4945]: I0109 01:51:20.973211 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/807e629f-7d47-4c5f-8b6c-ed1191b40698-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "807e629f-7d47-4c5f-8b6c-ed1191b40698" (UID: "807e629f-7d47-4c5f-8b6c-ed1191b40698"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 09 01:51:21 crc kubenswrapper[4945]: I0109 01:51:21.014011 4945 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-65d2835b-bc34-4f52-9465-8db15c16395a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65d2835b-bc34-4f52-9465-8db15c16395a\") on node \"crc\" "
Jan 09 01:51:21 crc kubenswrapper[4945]: I0109 01:51:21.014041 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97hlg\" (UniqueName: \"kubernetes.io/projected/807e629f-7d47-4c5f-8b6c-ed1191b40698-kube-api-access-97hlg\") on node \"crc\" DevicePath \"\""
Jan 09 01:51:21 crc kubenswrapper[4945]: I0109 01:51:21.014053 4945 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/807e629f-7d47-4c5f-8b6c-ed1191b40698-ovn-data-cert\") on node \"crc\" DevicePath \"\""
Jan 09 01:51:21 crc kubenswrapper[4945]: I0109 01:51:21.038206 4945 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 09 01:51:21 crc kubenswrapper[4945]: I0109 01:51:21.038717 4945 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-65d2835b-bc34-4f52-9465-8db15c16395a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65d2835b-bc34-4f52-9465-8db15c16395a") on node "crc"
Jan 09 01:51:21 crc kubenswrapper[4945]: I0109 01:51:21.117560 4945 reconciler_common.go:293] "Volume detached for volume \"pvc-65d2835b-bc34-4f52-9465-8db15c16395a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-65d2835b-bc34-4f52-9465-8db15c16395a\") on node \"crc\" DevicePath \"\""
Jan 09 01:51:21 crc kubenswrapper[4945]: I0109 01:51:21.436332 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"807e629f-7d47-4c5f-8b6c-ed1191b40698","Type":"ContainerDied","Data":"cf6c74f42eb6b7a9bab4f8cbe84b0ccd9fc29e0c04d0d5efaca97a823975026b"}
Jan 09 01:51:21 crc kubenswrapper[4945]: I0109 01:51:21.436408 4945 scope.go:117] "RemoveContainer" containerID="4d6ef5d4dfa45e97094dd21d9dd1247b6536d1cb12b448067a8876cb0f125d71"
Jan 09 01:51:21 crc kubenswrapper[4945]: I0109 01:51:21.436438 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Jan 09 01:51:21 crc kubenswrapper[4945]: I0109 01:51:21.482827 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"]
Jan 09 01:51:21 crc kubenswrapper[4945]: I0109 01:51:21.499551 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"]
Jan 09 01:51:22 crc kubenswrapper[4945]: I0109 01:51:22.018418 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="807e629f-7d47-4c5f-8b6c-ed1191b40698" path="/var/lib/kubelet/pods/807e629f-7d47-4c5f-8b6c-ed1191b40698/volumes"
Jan 09 01:51:24 crc kubenswrapper[4945]: I0109 01:51:24.001171 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913"
Jan 09 01:51:24 crc kubenswrapper[4945]: E0109 01:51:24.002401 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:51:37 crc kubenswrapper[4945]: I0109 01:51:37.000832 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913"
Jan 09 01:51:37 crc kubenswrapper[4945]: E0109 01:51:37.002049 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:51:48 crc kubenswrapper[4945]: I0109 01:51:48.000896 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913"
Jan 09 01:51:48 crc kubenswrapper[4945]: E0109 01:51:48.001769 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:52:01 crc kubenswrapper[4945]: I0109 01:52:01.001257 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913"
Jan 09 01:52:01 crc kubenswrapper[4945]: E0109 01:52:01.002384 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:52:15 crc kubenswrapper[4945]: I0109 01:52:15.002394 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913"
Jan 09 01:52:15 crc kubenswrapper[4945]: E0109 01:52:15.004548 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.727904 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sbqx9/must-gather-8r7k8"]
Jan 09 01:52:19 crc kubenswrapper[4945]: E0109 01:52:19.728982 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88334746-ef94-4acf-a32a-ebc2a36ea250" containerName="registry-server"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.729019 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="88334746-ef94-4acf-a32a-ebc2a36ea250" containerName="registry-server"
Jan 09 01:52:19 crc kubenswrapper[4945]: E0109 01:52:19.729037 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="807e629f-7d47-4c5f-8b6c-ed1191b40698" containerName="adoption"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.729046 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="807e629f-7d47-4c5f-8b6c-ed1191b40698" containerName="adoption"
Jan 09 01:52:19 crc kubenswrapper[4945]: E0109 01:52:19.729065 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88334746-ef94-4acf-a32a-ebc2a36ea250" containerName="extract-content"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.729074 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="88334746-ef94-4acf-a32a-ebc2a36ea250" containerName="extract-content"
Jan 09 01:52:19 crc kubenswrapper[4945]: E0109 01:52:19.729085 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88334746-ef94-4acf-a32a-ebc2a36ea250" containerName="extract-utilities"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.729092 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="88334746-ef94-4acf-a32a-ebc2a36ea250" containerName="extract-utilities"
Jan 09 01:52:19 crc kubenswrapper[4945]: E0109 01:52:19.729112 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa" containerName="adoption"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.729120 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa" containerName="adoption"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.729355 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e9de6e1-00ce-4ba3-b593-2f05b26ef4fa" containerName="adoption"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.729392 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="88334746-ef94-4acf-a32a-ebc2a36ea250" containerName="registry-server"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.729416 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="807e629f-7d47-4c5f-8b6c-ed1191b40698" containerName="adoption"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.730692 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sbqx9/must-gather-8r7k8"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.732433 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-sbqx9"/"default-dockercfg-fszc9"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.733783 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sbqx9"/"openshift-service-ca.crt"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.734556 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sbqx9"/"kube-root-ca.crt"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.738826 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sbqx9/must-gather-8r7k8"]
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.852477 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvmbv\" (UniqueName: \"kubernetes.io/projected/2616d81f-9f0d-4cee-80a4-59c1499fafa5-kube-api-access-fvmbv\") pod \"must-gather-8r7k8\" (UID: \"2616d81f-9f0d-4cee-80a4-59c1499fafa5\") " pod="openshift-must-gather-sbqx9/must-gather-8r7k8"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.852577 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2616d81f-9f0d-4cee-80a4-59c1499fafa5-must-gather-output\") pod \"must-gather-8r7k8\" (UID: \"2616d81f-9f0d-4cee-80a4-59c1499fafa5\") " pod="openshift-must-gather-sbqx9/must-gather-8r7k8"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.958187 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvmbv\" (UniqueName: \"kubernetes.io/projected/2616d81f-9f0d-4cee-80a4-59c1499fafa5-kube-api-access-fvmbv\") pod \"must-gather-8r7k8\" (UID: \"2616d81f-9f0d-4cee-80a4-59c1499fafa5\") " pod="openshift-must-gather-sbqx9/must-gather-8r7k8"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.958276 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2616d81f-9f0d-4cee-80a4-59c1499fafa5-must-gather-output\") pod \"must-gather-8r7k8\" (UID: \"2616d81f-9f0d-4cee-80a4-59c1499fafa5\") " pod="openshift-must-gather-sbqx9/must-gather-8r7k8"
Jan 09 01:52:19 crc kubenswrapper[4945]: I0109 01:52:19.958844 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2616d81f-9f0d-4cee-80a4-59c1499fafa5-must-gather-output\") pod \"must-gather-8r7k8\" (UID: \"2616d81f-9f0d-4cee-80a4-59c1499fafa5\") " pod="openshift-must-gather-sbqx9/must-gather-8r7k8"
Jan 09 01:52:20 crc kubenswrapper[4945]: I0109 01:52:20.003780 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvmbv\" (UniqueName: \"kubernetes.io/projected/2616d81f-9f0d-4cee-80a4-59c1499fafa5-kube-api-access-fvmbv\") pod \"must-gather-8r7k8\" (UID: \"2616d81f-9f0d-4cee-80a4-59c1499fafa5\") " pod="openshift-must-gather-sbqx9/must-gather-8r7k8"
Jan 09 01:52:20 crc kubenswrapper[4945]: I0109 01:52:20.052561 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sbqx9/must-gather-8r7k8"
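Admitting the first pod into the openshift-must-gather-sbqx9 namespace triggers resource-manager housekeeping: the CPU and memory managers still hold per-container state for the pods deleted earlier (certified-operators-xz8gf and both copy-data pods) and purge it before admission. Despite the E-level "RemoveStaleState: removing container" lines, this is routine cleanup, something like:

    package sketch

    // Illustrative only: drop per-pod resource assignments whose pods
    // are no longer present on the node.
    func removeStaleState(assignments map[string]map[string]struct{}, active map[string]bool) {
    	for podUID := range assignments {
    		if !active[podUID] {
    			delete(assignments, podUID) // "Deleted CPUSet assignment" / memory state
    		}
    	}
    }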
Need to start a new one" pod="openshift-must-gather-sbqx9/must-gather-8r7k8" Jan 09 01:52:20 crc kubenswrapper[4945]: I0109 01:52:20.610587 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sbqx9/must-gather-8r7k8"] Jan 09 01:52:21 crc kubenswrapper[4945]: I0109 01:52:21.134126 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbqx9/must-gather-8r7k8" event={"ID":"2616d81f-9f0d-4cee-80a4-59c1499fafa5","Type":"ContainerStarted","Data":"d3fdc819c3d1682fcf7be78f98b66a5ade68ec069e3e8e09623203e990e974db"} Jan 09 01:52:28 crc kubenswrapper[4945]: I0109 01:52:28.208569 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbqx9/must-gather-8r7k8" event={"ID":"2616d81f-9f0d-4cee-80a4-59c1499fafa5","Type":"ContainerStarted","Data":"96b70a169b658510f745ca889b626e2ddba317d9d0b244e1a81304edcc92129d"} Jan 09 01:52:29 crc kubenswrapper[4945]: I0109 01:52:29.001952 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913" Jan 09 01:52:29 crc kubenswrapper[4945]: E0109 01:52:29.004231 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:52:29 crc kubenswrapper[4945]: I0109 01:52:29.220749 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbqx9/must-gather-8r7k8" event={"ID":"2616d81f-9f0d-4cee-80a4-59c1499fafa5","Type":"ContainerStarted","Data":"c95c4efc4305348b45f966ad768a3e861d8b9f6c369a3f82d199febb532db356"} Jan 09 01:52:30 crc kubenswrapper[4945]: E0109 01:52:30.021738 4945 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.74:57324->38.102.83.74:45665: write tcp 38.102.83.74:57324->38.102.83.74:45665: write: broken pipe Jan 09 01:52:30 crc kubenswrapper[4945]: E0109 01:52:30.471888 4945 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.74:57366->38.102.83.74:45665: read tcp 38.102.83.74:57366->38.102.83.74:45665: read: connection reset by peer Jan 09 01:52:32 crc kubenswrapper[4945]: I0109 01:52:32.347863 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sbqx9/must-gather-8r7k8" podStartSLOduration=6.189513664 podStartE2EDuration="13.347843318s" podCreationTimestamp="2026-01-09 01:52:19 +0000 UTC" firstStartedPulling="2026-01-09 01:52:20.634767496 +0000 UTC m=+9410.945926442" lastFinishedPulling="2026-01-09 01:52:27.79309715 +0000 UTC m=+9418.104256096" observedRunningTime="2026-01-09 01:52:29.24271334 +0000 UTC m=+9419.553872286" watchObservedRunningTime="2026-01-09 01:52:32.347843318 +0000 UTC m=+9422.659002264" Jan 09 01:52:32 crc kubenswrapper[4945]: I0109 01:52:32.356501 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sbqx9/crc-debug-t479k"] Jan 09 01:52:32 crc kubenswrapper[4945]: I0109 01:52:32.357889 4945 util.go:30] "No sandbox for pod can be found. 
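The two upgradeaware errors come from the kubelet's streaming proxy for upgraded connections (exec/attach/port-forward style traffic); "write: broken pipe" and "read: connection reset by peer" just mean one side hung up mid-copy, plausibly the must-gather client. The proxy is essentially two concurrent copies, so such errors surface straight from io.Copy (a minimal sketch, not the kubelet's upgradeaware code):

    package sketch

    import (
    	"io"
    	"net"
    )

    // proxy shuttles bytes both ways; when a peer disconnects mid-stream,
    // io.Copy returns a *net.OpError like the two logged above.
    func proxy(client, backend net.Conn) {
    	go func() { _, _ = io.Copy(backend, client) }() // client→backend: "write: broken pipe"
    	_, _ = io.Copy(client, backend)                 // backend→client: "read: connection reset by peer"
    }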
Need to start a new one" pod="openshift-must-gather-sbqx9/crc-debug-t479k" Jan 09 01:52:32 crc kubenswrapper[4945]: I0109 01:52:32.453799 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01a5979b-3f68-4fdd-9511-5b972bdaef89-host\") pod \"crc-debug-t479k\" (UID: \"01a5979b-3f68-4fdd-9511-5b972bdaef89\") " pod="openshift-must-gather-sbqx9/crc-debug-t479k" Jan 09 01:52:32 crc kubenswrapper[4945]: I0109 01:52:32.454470 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrxv9\" (UniqueName: \"kubernetes.io/projected/01a5979b-3f68-4fdd-9511-5b972bdaef89-kube-api-access-qrxv9\") pod \"crc-debug-t479k\" (UID: \"01a5979b-3f68-4fdd-9511-5b972bdaef89\") " pod="openshift-must-gather-sbqx9/crc-debug-t479k" Jan 09 01:52:32 crc kubenswrapper[4945]: I0109 01:52:32.556839 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrxv9\" (UniqueName: \"kubernetes.io/projected/01a5979b-3f68-4fdd-9511-5b972bdaef89-kube-api-access-qrxv9\") pod \"crc-debug-t479k\" (UID: \"01a5979b-3f68-4fdd-9511-5b972bdaef89\") " pod="openshift-must-gather-sbqx9/crc-debug-t479k" Jan 09 01:52:32 crc kubenswrapper[4945]: I0109 01:52:32.556942 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01a5979b-3f68-4fdd-9511-5b972bdaef89-host\") pod \"crc-debug-t479k\" (UID: \"01a5979b-3f68-4fdd-9511-5b972bdaef89\") " pod="openshift-must-gather-sbqx9/crc-debug-t479k" Jan 09 01:52:32 crc kubenswrapper[4945]: I0109 01:52:32.557230 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01a5979b-3f68-4fdd-9511-5b972bdaef89-host\") pod \"crc-debug-t479k\" (UID: \"01a5979b-3f68-4fdd-9511-5b972bdaef89\") " pod="openshift-must-gather-sbqx9/crc-debug-t479k" Jan 09 01:52:32 crc kubenswrapper[4945]: I0109 01:52:32.577731 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrxv9\" (UniqueName: \"kubernetes.io/projected/01a5979b-3f68-4fdd-9511-5b972bdaef89-kube-api-access-qrxv9\") pod \"crc-debug-t479k\" (UID: \"01a5979b-3f68-4fdd-9511-5b972bdaef89\") " pod="openshift-must-gather-sbqx9/crc-debug-t479k" Jan 09 01:52:32 crc kubenswrapper[4945]: I0109 01:52:32.684293 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sbqx9/crc-debug-t479k" Jan 09 01:52:32 crc kubenswrapper[4945]: W0109 01:52:32.720265 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01a5979b_3f68_4fdd_9511_5b972bdaef89.slice/crio-0b5953e8900fc798d57004c5c29f814f9ecef5c11549510714504e1ae2e43e37 WatchSource:0}: Error finding container 0b5953e8900fc798d57004c5c29f814f9ecef5c11549510714504e1ae2e43e37: Status 404 returned error can't find the container with id 0b5953e8900fc798d57004c5c29f814f9ecef5c11549510714504e1ae2e43e37 Jan 09 01:52:32 crc kubenswrapper[4945]: I0109 01:52:32.722586 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 01:52:33 crc kubenswrapper[4945]: I0109 01:52:33.258745 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbqx9/crc-debug-t479k" event={"ID":"01a5979b-3f68-4fdd-9511-5b972bdaef89","Type":"ContainerStarted","Data":"0b5953e8900fc798d57004c5c29f814f9ecef5c11549510714504e1ae2e43e37"} Jan 09 01:52:42 crc kubenswrapper[4945]: I0109 01:52:42.007873 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913" Jan 09 01:52:42 crc kubenswrapper[4945]: E0109 01:52:42.008673 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:52:45 crc kubenswrapper[4945]: I0109 01:52:45.435589 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbqx9/crc-debug-t479k" event={"ID":"01a5979b-3f68-4fdd-9511-5b972bdaef89","Type":"ContainerStarted","Data":"db4178a0d6b9e9648682fb761e4c0853f7b1d09663a8cbe8ffc65bd5b6b4752e"} Jan 09 01:52:45 crc kubenswrapper[4945]: I0109 01:52:45.457277 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sbqx9/crc-debug-t479k" podStartSLOduration=1.395244312 podStartE2EDuration="13.457261224s" podCreationTimestamp="2026-01-09 01:52:32 +0000 UTC" firstStartedPulling="2026-01-09 01:52:32.722355029 +0000 UTC m=+9423.033513975" lastFinishedPulling="2026-01-09 01:52:44.784371931 +0000 UTC m=+9435.095530887" observedRunningTime="2026-01-09 01:52:45.449878741 +0000 UTC m=+9435.761037687" watchObservedRunningTime="2026-01-09 01:52:45.457261224 +0000 UTC m=+9435.768420170" Jan 09 01:52:53 crc kubenswrapper[4945]: I0109 01:52:53.001877 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913" Jan 09 01:52:53 crc kubenswrapper[4945]: E0109 01:52:53.002750 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:53:01 crc kubenswrapper[4945]: I0109 01:53:01.610736 4945 generic.go:334] "Generic (PLEG): container finished" 
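The W-level manager.go line above is a benign create-time race: the cgroup for crio-0b5953... appeared before the runtime could answer a status query for it, hence the 404, and the PLEG reports the container normally at 01:52:33. The startup entry is also telling: crc-debug-t479k took about 13.46s end to end, of which roughly 12.06s was image pulling, leaving a podStartSLOduration of only about 1.40s. Handling such a race generically means retrying the lookup briefly rather than failing on the first not-found (illustrative, not cadvisor's code):

    package sketch

    import (
    	"fmt"
    	"time"
    )

    // waitForContainer polls until the runtime knows the container or a
    // short deadline passes, absorbing the create-time 404 seen above.
    func waitForContainer(lookup func(id string) (bool, error), id string) error {
    	deadline := time.Now().Add(5 * time.Second)
    	for time.Now().Before(deadline) {
    		ok, err := lookup(id)
    		if err != nil {
    			return err
    		}
    		if ok {
    			return nil
    		}
    		time.Sleep(100 * time.Millisecond)
    	}
    	return fmt.Errorf("container %s not found before deadline", id)
    }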
podID="01a5979b-3f68-4fdd-9511-5b972bdaef89" containerID="db4178a0d6b9e9648682fb761e4c0853f7b1d09663a8cbe8ffc65bd5b6b4752e" exitCode=0 Jan 09 01:53:01 crc kubenswrapper[4945]: I0109 01:53:01.611287 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbqx9/crc-debug-t479k" event={"ID":"01a5979b-3f68-4fdd-9511-5b972bdaef89","Type":"ContainerDied","Data":"db4178a0d6b9e9648682fb761e4c0853f7b1d09663a8cbe8ffc65bd5b6b4752e"} Jan 09 01:53:02 crc kubenswrapper[4945]: I0109 01:53:02.775698 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sbqx9/crc-debug-t479k" Jan 09 01:53:02 crc kubenswrapper[4945]: I0109 01:53:02.835651 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sbqx9/crc-debug-t479k"] Jan 09 01:53:02 crc kubenswrapper[4945]: I0109 01:53:02.848908 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sbqx9/crc-debug-t479k"] Jan 09 01:53:02 crc kubenswrapper[4945]: I0109 01:53:02.890584 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrxv9\" (UniqueName: \"kubernetes.io/projected/01a5979b-3f68-4fdd-9511-5b972bdaef89-kube-api-access-qrxv9\") pod \"01a5979b-3f68-4fdd-9511-5b972bdaef89\" (UID: \"01a5979b-3f68-4fdd-9511-5b972bdaef89\") " Jan 09 01:53:02 crc kubenswrapper[4945]: I0109 01:53:02.891142 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01a5979b-3f68-4fdd-9511-5b972bdaef89-host" (OuterVolumeSpecName: "host") pod "01a5979b-3f68-4fdd-9511-5b972bdaef89" (UID: "01a5979b-3f68-4fdd-9511-5b972bdaef89"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 01:53:02 crc kubenswrapper[4945]: I0109 01:53:02.891242 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01a5979b-3f68-4fdd-9511-5b972bdaef89-host\") pod \"01a5979b-3f68-4fdd-9511-5b972bdaef89\" (UID: \"01a5979b-3f68-4fdd-9511-5b972bdaef89\") " Jan 09 01:53:02 crc kubenswrapper[4945]: I0109 01:53:02.892115 4945 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01a5979b-3f68-4fdd-9511-5b972bdaef89-host\") on node \"crc\" DevicePath \"\"" Jan 09 01:53:02 crc kubenswrapper[4945]: I0109 01:53:02.966979 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a5979b-3f68-4fdd-9511-5b972bdaef89-kube-api-access-qrxv9" (OuterVolumeSpecName: "kube-api-access-qrxv9") pod "01a5979b-3f68-4fdd-9511-5b972bdaef89" (UID: "01a5979b-3f68-4fdd-9511-5b972bdaef89"). InnerVolumeSpecName "kube-api-access-qrxv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:53:02 crc kubenswrapper[4945]: I0109 01:53:02.994366 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrxv9\" (UniqueName: \"kubernetes.io/projected/01a5979b-3f68-4fdd-9511-5b972bdaef89-kube-api-access-qrxv9\") on node \"crc\" DevicePath \"\"" Jan 09 01:53:03 crc kubenswrapper[4945]: I0109 01:53:03.631034 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b5953e8900fc798d57004c5c29f814f9ecef5c11549510714504e1ae2e43e37" Jan 09 01:53:03 crc kubenswrapper[4945]: I0109 01:53:03.631098 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sbqx9/crc-debug-t479k" Jan 09 01:53:04 crc kubenswrapper[4945]: I0109 01:53:04.014862 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01a5979b-3f68-4fdd-9511-5b972bdaef89" path="/var/lib/kubelet/pods/01a5979b-3f68-4fdd-9511-5b972bdaef89/volumes" Jan 09 01:53:04 crc kubenswrapper[4945]: I0109 01:53:04.081924 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sbqx9/crc-debug-mvmmh"] Jan 09 01:53:04 crc kubenswrapper[4945]: E0109 01:53:04.082450 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a5979b-3f68-4fdd-9511-5b972bdaef89" containerName="container-00" Jan 09 01:53:04 crc kubenswrapper[4945]: I0109 01:53:04.082470 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a5979b-3f68-4fdd-9511-5b972bdaef89" containerName="container-00" Jan 09 01:53:04 crc kubenswrapper[4945]: I0109 01:53:04.082691 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a5979b-3f68-4fdd-9511-5b972bdaef89" containerName="container-00" Jan 09 01:53:04 crc kubenswrapper[4945]: I0109 01:53:04.083464 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sbqx9/crc-debug-mvmmh" Jan 09 01:53:04 crc kubenswrapper[4945]: I0109 01:53:04.221039 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ffaf9e8-700b-4820-b015-159cf0edfe40-host\") pod \"crc-debug-mvmmh\" (UID: \"5ffaf9e8-700b-4820-b015-159cf0edfe40\") " pod="openshift-must-gather-sbqx9/crc-debug-mvmmh" Jan 09 01:53:04 crc kubenswrapper[4945]: I0109 01:53:04.221095 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z59d4\" (UniqueName: \"kubernetes.io/projected/5ffaf9e8-700b-4820-b015-159cf0edfe40-kube-api-access-z59d4\") pod \"crc-debug-mvmmh\" (UID: \"5ffaf9e8-700b-4820-b015-159cf0edfe40\") " pod="openshift-must-gather-sbqx9/crc-debug-mvmmh" Jan 09 01:53:04 crc kubenswrapper[4945]: I0109 01:53:04.322726 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ffaf9e8-700b-4820-b015-159cf0edfe40-host\") pod \"crc-debug-mvmmh\" (UID: \"5ffaf9e8-700b-4820-b015-159cf0edfe40\") " pod="openshift-must-gather-sbqx9/crc-debug-mvmmh" Jan 09 01:53:04 crc kubenswrapper[4945]: I0109 01:53:04.322775 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z59d4\" (UniqueName: \"kubernetes.io/projected/5ffaf9e8-700b-4820-b015-159cf0edfe40-kube-api-access-z59d4\") pod \"crc-debug-mvmmh\" (UID: \"5ffaf9e8-700b-4820-b015-159cf0edfe40\") " pod="openshift-must-gather-sbqx9/crc-debug-mvmmh" Jan 09 01:53:04 crc kubenswrapper[4945]: I0109 01:53:04.322909 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ffaf9e8-700b-4820-b015-159cf0edfe40-host\") pod \"crc-debug-mvmmh\" (UID: \"5ffaf9e8-700b-4820-b015-159cf0edfe40\") " pod="openshift-must-gather-sbqx9/crc-debug-mvmmh" Jan 09 01:53:04 crc kubenswrapper[4945]: I0109 01:53:04.669135 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z59d4\" (UniqueName: \"kubernetes.io/projected/5ffaf9e8-700b-4820-b015-159cf0edfe40-kube-api-access-z59d4\") pod \"crc-debug-mvmmh\" (UID: \"5ffaf9e8-700b-4820-b015-159cf0edfe40\") " 
pod="openshift-must-gather-sbqx9/crc-debug-mvmmh" Jan 09 01:53:04 crc kubenswrapper[4945]: I0109 01:53:04.706224 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sbqx9/crc-debug-mvmmh" Jan 09 01:53:05 crc kubenswrapper[4945]: I0109 01:53:05.649137 4945 generic.go:334] "Generic (PLEG): container finished" podID="5ffaf9e8-700b-4820-b015-159cf0edfe40" containerID="ddb9596b3654199a44eb53d3c7ec46e7109551bfbd83aa8ffd5b384496f4208f" exitCode=1 Jan 09 01:53:05 crc kubenswrapper[4945]: I0109 01:53:05.649222 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbqx9/crc-debug-mvmmh" event={"ID":"5ffaf9e8-700b-4820-b015-159cf0edfe40","Type":"ContainerDied","Data":"ddb9596b3654199a44eb53d3c7ec46e7109551bfbd83aa8ffd5b384496f4208f"} Jan 09 01:53:05 crc kubenswrapper[4945]: I0109 01:53:05.649766 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbqx9/crc-debug-mvmmh" event={"ID":"5ffaf9e8-700b-4820-b015-159cf0edfe40","Type":"ContainerStarted","Data":"bb68024ed6b08a2c038d4f117af94d5f7fa0443b1feabb4dfc2a43a9322d252b"} Jan 09 01:53:05 crc kubenswrapper[4945]: I0109 01:53:05.687183 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sbqx9/crc-debug-mvmmh"] Jan 09 01:53:05 crc kubenswrapper[4945]: I0109 01:53:05.697997 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sbqx9/crc-debug-mvmmh"] Jan 09 01:53:06 crc kubenswrapper[4945]: I0109 01:53:06.799910 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sbqx9/crc-debug-mvmmh" Jan 09 01:53:06 crc kubenswrapper[4945]: I0109 01:53:06.975909 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z59d4\" (UniqueName: \"kubernetes.io/projected/5ffaf9e8-700b-4820-b015-159cf0edfe40-kube-api-access-z59d4\") pod \"5ffaf9e8-700b-4820-b015-159cf0edfe40\" (UID: \"5ffaf9e8-700b-4820-b015-159cf0edfe40\") " Jan 09 01:53:06 crc kubenswrapper[4945]: I0109 01:53:06.976155 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ffaf9e8-700b-4820-b015-159cf0edfe40-host\") pod \"5ffaf9e8-700b-4820-b015-159cf0edfe40\" (UID: \"5ffaf9e8-700b-4820-b015-159cf0edfe40\") " Jan 09 01:53:06 crc kubenswrapper[4945]: I0109 01:53:06.976239 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ffaf9e8-700b-4820-b015-159cf0edfe40-host" (OuterVolumeSpecName: "host") pod "5ffaf9e8-700b-4820-b015-159cf0edfe40" (UID: "5ffaf9e8-700b-4820-b015-159cf0edfe40"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 09 01:53:06 crc kubenswrapper[4945]: I0109 01:53:06.976905 4945 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5ffaf9e8-700b-4820-b015-159cf0edfe40-host\") on node \"crc\" DevicePath \"\"" Jan 09 01:53:07 crc kubenswrapper[4945]: I0109 01:53:07.001820 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ffaf9e8-700b-4820-b015-159cf0edfe40-kube-api-access-z59d4" (OuterVolumeSpecName: "kube-api-access-z59d4") pod "5ffaf9e8-700b-4820-b015-159cf0edfe40" (UID: "5ffaf9e8-700b-4820-b015-159cf0edfe40"). InnerVolumeSpecName "kube-api-access-z59d4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:53:07 crc kubenswrapper[4945]: I0109 01:53:07.003414 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913" Jan 09 01:53:07 crc kubenswrapper[4945]: E0109 01:53:07.003983 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:53:07 crc kubenswrapper[4945]: I0109 01:53:07.079298 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z59d4\" (UniqueName: \"kubernetes.io/projected/5ffaf9e8-700b-4820-b015-159cf0edfe40-kube-api-access-z59d4\") on node \"crc\" DevicePath \"\"" Jan 09 01:53:07 crc kubenswrapper[4945]: I0109 01:53:07.670019 4945 scope.go:117] "RemoveContainer" containerID="ddb9596b3654199a44eb53d3c7ec46e7109551bfbd83aa8ffd5b384496f4208f" Jan 09 01:53:07 crc kubenswrapper[4945]: I0109 01:53:07.670075 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sbqx9/crc-debug-mvmmh" Jan 09 01:53:08 crc kubenswrapper[4945]: I0109 01:53:08.011630 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ffaf9e8-700b-4820-b015-159cf0edfe40" path="/var/lib/kubelet/pods/5ffaf9e8-700b-4820-b015-159cf0edfe40/volumes" Jan 09 01:53:21 crc kubenswrapper[4945]: I0109 01:53:21.001363 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913" Jan 09 01:53:21 crc kubenswrapper[4945]: E0109 01:53:21.002207 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:53:36 crc kubenswrapper[4945]: I0109 01:53:36.005376 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913" Jan 09 01:53:36 crc kubenswrapper[4945]: E0109 01:53:36.007222 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 01:53:47 crc kubenswrapper[4945]: I0109 01:53:47.000853 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913" Jan 09 01:53:48 crc kubenswrapper[4945]: I0109 01:53:48.124290 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"f3c991c305a9920c7952603ec6b4b80561265fd71736212f6f87a2434ae7d21a"} Jan 09 01:55:10 crc kubenswrapper[4945]: I0109 
Jan 09 01:55:10 crc kubenswrapper[4945]: I0109 01:55:10.980267 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kwjvz"]
Jan 09 01:55:10 crc kubenswrapper[4945]: E0109 01:55:10.981726 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ffaf9e8-700b-4820-b015-159cf0edfe40" containerName="container-00"
Jan 09 01:55:10 crc kubenswrapper[4945]: I0109 01:55:10.981751 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ffaf9e8-700b-4820-b015-159cf0edfe40" containerName="container-00"
Jan 09 01:55:10 crc kubenswrapper[4945]: I0109 01:55:10.982143 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ffaf9e8-700b-4820-b015-159cf0edfe40" containerName="container-00"
Jan 09 01:55:10 crc kubenswrapper[4945]: I0109 01:55:10.984850 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kwjvz"
Jan 09 01:55:11 crc kubenswrapper[4945]: I0109 01:55:10.999925 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kwjvz"]
Jan 09 01:55:11 crc kubenswrapper[4945]: I0109 01:55:11.136293 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7320826-a035-4f84-a817-58cc1653faaa-catalog-content\") pod \"redhat-operators-kwjvz\" (UID: \"e7320826-a035-4f84-a817-58cc1653faaa\") " pod="openshift-marketplace/redhat-operators-kwjvz"
Jan 09 01:55:11 crc kubenswrapper[4945]: I0109 01:55:11.136471 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7320826-a035-4f84-a817-58cc1653faaa-utilities\") pod \"redhat-operators-kwjvz\" (UID: \"e7320826-a035-4f84-a817-58cc1653faaa\") " pod="openshift-marketplace/redhat-operators-kwjvz"
Jan 09 01:55:11 crc kubenswrapper[4945]: I0109 01:55:11.136789 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7sn9\" (UniqueName: \"kubernetes.io/projected/e7320826-a035-4f84-a817-58cc1653faaa-kube-api-access-s7sn9\") pod \"redhat-operators-kwjvz\" (UID: \"e7320826-a035-4f84-a817-58cc1653faaa\") " pod="openshift-marketplace/redhat-operators-kwjvz"
Jan 09 01:55:11 crc kubenswrapper[4945]: I0109 01:55:11.239077 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7sn9\" (UniqueName: \"kubernetes.io/projected/e7320826-a035-4f84-a817-58cc1653faaa-kube-api-access-s7sn9\") pod \"redhat-operators-kwjvz\" (UID: \"e7320826-a035-4f84-a817-58cc1653faaa\") " pod="openshift-marketplace/redhat-operators-kwjvz"
Jan 09 01:55:11 crc kubenswrapper[4945]: I0109 01:55:11.239165 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7320826-a035-4f84-a817-58cc1653faaa-catalog-content\") pod \"redhat-operators-kwjvz\" (UID: \"e7320826-a035-4f84-a817-58cc1653faaa\") " pod="openshift-marketplace/redhat-operators-kwjvz"
Jan 09 01:55:11 crc kubenswrapper[4945]: I0109 01:55:11.239210 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7320826-a035-4f84-a817-58cc1653faaa-utilities\") pod \"redhat-operators-kwjvz\" (UID: \"e7320826-a035-4f84-a817-58cc1653faaa\") " pod="openshift-marketplace/redhat-operators-kwjvz"
Jan 09 01:55:11 crc kubenswrapper[4945]: I0109 01:55:11.239773 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7320826-a035-4f84-a817-58cc1653faaa-utilities\") pod \"redhat-operators-kwjvz\" (UID: \"e7320826-a035-4f84-a817-58cc1653faaa\") " pod="openshift-marketplace/redhat-operators-kwjvz"
Jan 09 01:55:11 crc kubenswrapper[4945]: I0109 01:55:11.239874 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7320826-a035-4f84-a817-58cc1653faaa-catalog-content\") pod \"redhat-operators-kwjvz\" (UID: \"e7320826-a035-4f84-a817-58cc1653faaa\") " pod="openshift-marketplace/redhat-operators-kwjvz"
Jan 09 01:55:11 crc kubenswrapper[4945]: I0109 01:55:11.265807 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7sn9\" (UniqueName: \"kubernetes.io/projected/e7320826-a035-4f84-a817-58cc1653faaa-kube-api-access-s7sn9\") pod \"redhat-operators-kwjvz\" (UID: \"e7320826-a035-4f84-a817-58cc1653faaa\") " pod="openshift-marketplace/redhat-operators-kwjvz"
Jan 09 01:55:11 crc kubenswrapper[4945]: I0109 01:55:11.314351 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kwjvz"
Jan 09 01:55:11 crc kubenswrapper[4945]: I0109 01:55:11.783583 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kwjvz"]
Jan 09 01:55:12 crc kubenswrapper[4945]: I0109 01:55:12.166528 4945 generic.go:334] "Generic (PLEG): container finished" podID="e7320826-a035-4f84-a817-58cc1653faaa" containerID="38570b10fafd0cf2d4482f3019b6351fbbe9a4c46209f1f89f8f0e96dbfce554" exitCode=0
Jan 09 01:55:12 crc kubenswrapper[4945]: I0109 01:55:12.166754 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwjvz" event={"ID":"e7320826-a035-4f84-a817-58cc1653faaa","Type":"ContainerDied","Data":"38570b10fafd0cf2d4482f3019b6351fbbe9a4c46209f1f89f8f0e96dbfce554"}
Jan 09 01:55:12 crc kubenswrapper[4945]: I0109 01:55:12.166882 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwjvz" event={"ID":"e7320826-a035-4f84-a817-58cc1653faaa","Type":"ContainerStarted","Data":"fed58ecc1d0340b0119eee7bbadbd91fdf04134966dc32f4e9e6c8fb1a5626a2"}
Jan 09 01:55:13 crc kubenswrapper[4945]: I0109 01:55:13.186658 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwjvz" event={"ID":"e7320826-a035-4f84-a817-58cc1653faaa","Type":"ContainerStarted","Data":"117e3d854b43cf44c841dcbf286ad5e0f69829a1a54dbd5cca794f9ea7f08446"}
Jan 09 01:55:17 crc kubenswrapper[4945]: I0109 01:55:17.233339 4945 generic.go:334] "Generic (PLEG): container finished" podID="e7320826-a035-4f84-a817-58cc1653faaa" containerID="117e3d854b43cf44c841dcbf286ad5e0f69829a1a54dbd5cca794f9ea7f08446" exitCode=0
Jan 09 01:55:17 crc kubenswrapper[4945]: I0109 01:55:17.235129 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwjvz" event={"ID":"e7320826-a035-4f84-a817-58cc1653faaa","Type":"ContainerDied","Data":"117e3d854b43cf44c841dcbf286ad5e0f69829a1a54dbd5cca794f9ea7f08446"}
Jan 09 01:55:19 crc kubenswrapper[4945]: I0109 01:55:19.262579 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwjvz" event={"ID":"e7320826-a035-4f84-a817-58cc1653faaa","Type":"ContainerStarted","Data":"09bcf892a4aec4cbb56307f61e004c6647b356ed0c3885df22f811101e74688e"}
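
The generic.go:334 entries come from the generic PLEG (pod lifecycle event generator): it relists containers from the CRI runtime on a timer, diffs the previous and current states, and feeds ContainerStarted/ContainerDied events into the sync loop, which is how the extract-utilities, extract-content, and registry-server stages of this catalog pod surface above. A trimmed-down sketch of the relist diff, with simplified types rather than the kubelet's actual implementation:

    package main

    import "fmt"

    // Hypothetical, trimmed-down relist: the real generic PLEG
    // (pkg/kubelet/pleg/generic.go) lists containers via CRI and diffs
    // old and new states to produce lifecycle events.
    type state string

    const (
        running state = "running"
        exited  state = "exited"
    )

    type event struct {
        containerID string
        kind        string // "ContainerStarted" or "ContainerDied"
    }

    func relist(prev, cur map[string]state) []event {
        var evs []event
        for id, s := range cur {
            old, seen := prev[id]
            switch {
            case !seen && s == running:
                evs = append(evs, event{id, "ContainerStarted"})
            case seen && old == running && s == exited:
                evs = append(evs, event{id, "ContainerDied"})
            }
        }
        return evs
    }

    func main() {
        prev := map[string]state{"38570b10": running}
        cur := map[string]state{"38570b10": exited, "117e3d85": running}
        for _, e := range relist(prev, cur) {
            fmt.Printf("Generic (PLEG): %s %s\n", e.kind, e.containerID)
        }
    }
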
Jan 09 01:55:19 crc kubenswrapper[4945]: I0109 01:55:19.293662 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kwjvz" podStartSLOduration=3.612119129 podStartE2EDuration="9.293639181s" podCreationTimestamp="2026-01-09 01:55:10 +0000 UTC" firstStartedPulling="2026-01-09 01:55:12.170211499 +0000 UTC m=+9582.481370445" lastFinishedPulling="2026-01-09 01:55:17.851731561 +0000 UTC m=+9588.162890497" observedRunningTime="2026-01-09 01:55:19.283378198 +0000 UTC m=+9589.594537154" watchObservedRunningTime="2026-01-09 01:55:19.293639181 +0000 UTC m=+9589.604798137"
Jan 09 01:55:21 crc kubenswrapper[4945]: I0109 01:55:21.315230 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kwjvz"
Jan 09 01:55:21 crc kubenswrapper[4945]: I0109 01:55:21.315669 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kwjvz"
Jan 09 01:55:22 crc kubenswrapper[4945]: I0109 01:55:22.377814 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kwjvz" podUID="e7320826-a035-4f84-a817-58cc1653faaa" containerName="registry-server" probeResult="failure" output=<
Jan 09 01:55:22 crc kubenswrapper[4945]: timeout: failed to connect service ":50051" within 1s
Jan 09 01:55:22 crc kubenswrapper[4945]: >
Jan 09 01:55:32 crc kubenswrapper[4945]: I0109 01:55:32.239054 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kwjvz"
Jan 09 01:55:32 crc kubenswrapper[4945]: I0109 01:55:32.320969 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kwjvz"
Jan 09 01:55:32 crc kubenswrapper[4945]: I0109 01:55:32.487987 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kwjvz"]
Jan 09 01:55:33 crc kubenswrapper[4945]: I0109 01:55:33.425284 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kwjvz" podUID="e7320826-a035-4f84-a817-58cc1653faaa" containerName="registry-server" containerID="cri-o://09bcf892a4aec4cbb56307f61e004c6647b356ed0c3885df22f811101e74688e" gracePeriod=2
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.381969 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kwjvz"
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.437276 4945 generic.go:334] "Generic (PLEG): container finished" podID="e7320826-a035-4f84-a817-58cc1653faaa" containerID="09bcf892a4aec4cbb56307f61e004c6647b356ed0c3885df22f811101e74688e" exitCode=0
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.437325 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwjvz" event={"ID":"e7320826-a035-4f84-a817-58cc1653faaa","Type":"ContainerDied","Data":"09bcf892a4aec4cbb56307f61e004c6647b356ed0c3885df22f811101e74688e"}
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.437357 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kwjvz" event={"ID":"e7320826-a035-4f84-a817-58cc1653faaa","Type":"ContainerDied","Data":"fed58ecc1d0340b0119eee7bbadbd91fdf04134966dc32f4e9e6c8fb1a5626a2"}
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.437359 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kwjvz"
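
The startup probe failure captured above is the registry-server gRPC health check: the probe output says the catalog service on :50051 did not accept a connection within 1s, and once the index finishes loading the startup probe flips to "started" and readiness to "ready". A rough equivalent of what the probe is testing, reduced to a plain TCP dial (the real probe speaks the gRPC health protocol, so this is a simplification):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probe checks whether the catalog endpoint accepts a connection
    // within the timeout, mirroring the 1s budget in the log output.
    func probe(addr string, timeout time.Duration) error {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            return fmt.Errorf("timeout: failed to connect service %q within %v", addr, timeout)
        }
        return conn.Close()
    }

    func main() {
        // ":50051" with no host dials the local system, matching the
        // in-pod probe against the registry-server's gRPC port.
        if err := probe(":50051", time.Second); err != nil {
            fmt.Println(err) // shape of the failure output captured in the log
        } else {
            fmt.Println("probe ok")
        }
    }
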
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.437376 4945 scope.go:117] "RemoveContainer" containerID="09bcf892a4aec4cbb56307f61e004c6647b356ed0c3885df22f811101e74688e"
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.437816 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7320826-a035-4f84-a817-58cc1653faaa-catalog-content\") pod \"e7320826-a035-4f84-a817-58cc1653faaa\" (UID: \"e7320826-a035-4f84-a817-58cc1653faaa\") "
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.438105 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7sn9\" (UniqueName: \"kubernetes.io/projected/e7320826-a035-4f84-a817-58cc1653faaa-kube-api-access-s7sn9\") pod \"e7320826-a035-4f84-a817-58cc1653faaa\" (UID: \"e7320826-a035-4f84-a817-58cc1653faaa\") "
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.438158 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7320826-a035-4f84-a817-58cc1653faaa-utilities\") pod \"e7320826-a035-4f84-a817-58cc1653faaa\" (UID: \"e7320826-a035-4f84-a817-58cc1653faaa\") "
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.439200 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7320826-a035-4f84-a817-58cc1653faaa-utilities" (OuterVolumeSpecName: "utilities") pod "e7320826-a035-4f84-a817-58cc1653faaa" (UID: "e7320826-a035-4f84-a817-58cc1653faaa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.466882 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7320826-a035-4f84-a817-58cc1653faaa-kube-api-access-s7sn9" (OuterVolumeSpecName: "kube-api-access-s7sn9") pod "e7320826-a035-4f84-a817-58cc1653faaa" (UID: "e7320826-a035-4f84-a817-58cc1653faaa"). InnerVolumeSpecName "kube-api-access-s7sn9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.473832 4945 scope.go:117] "RemoveContainer" containerID="117e3d854b43cf44c841dcbf286ad5e0f69829a1a54dbd5cca794f9ea7f08446"
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.527456 4945 scope.go:117] "RemoveContainer" containerID="38570b10fafd0cf2d4482f3019b6351fbbe9a4c46209f1f89f8f0e96dbfce554"
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.541332 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7sn9\" (UniqueName: \"kubernetes.io/projected/e7320826-a035-4f84-a817-58cc1653faaa-kube-api-access-s7sn9\") on node \"crc\" DevicePath \"\""
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.541370 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7320826-a035-4f84-a817-58cc1653faaa-utilities\") on node \"crc\" DevicePath \"\""
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.564599 4945 scope.go:117] "RemoveContainer" containerID="09bcf892a4aec4cbb56307f61e004c6647b356ed0c3885df22f811101e74688e"
Jan 09 01:55:34 crc kubenswrapper[4945]: E0109 01:55:34.565028 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09bcf892a4aec4cbb56307f61e004c6647b356ed0c3885df22f811101e74688e\": container with ID starting with 09bcf892a4aec4cbb56307f61e004c6647b356ed0c3885df22f811101e74688e not found: ID does not exist" containerID="09bcf892a4aec4cbb56307f61e004c6647b356ed0c3885df22f811101e74688e"
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.565069 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09bcf892a4aec4cbb56307f61e004c6647b356ed0c3885df22f811101e74688e"} err="failed to get container status \"09bcf892a4aec4cbb56307f61e004c6647b356ed0c3885df22f811101e74688e\": rpc error: code = NotFound desc = could not find container \"09bcf892a4aec4cbb56307f61e004c6647b356ed0c3885df22f811101e74688e\": container with ID starting with 09bcf892a4aec4cbb56307f61e004c6647b356ed0c3885df22f811101e74688e not found: ID does not exist"
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.565094 4945 scope.go:117] "RemoveContainer" containerID="117e3d854b43cf44c841dcbf286ad5e0f69829a1a54dbd5cca794f9ea7f08446"
Jan 09 01:55:34 crc kubenswrapper[4945]: E0109 01:55:34.565366 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"117e3d854b43cf44c841dcbf286ad5e0f69829a1a54dbd5cca794f9ea7f08446\": container with ID starting with 117e3d854b43cf44c841dcbf286ad5e0f69829a1a54dbd5cca794f9ea7f08446 not found: ID does not exist" containerID="117e3d854b43cf44c841dcbf286ad5e0f69829a1a54dbd5cca794f9ea7f08446"
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.565388 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"117e3d854b43cf44c841dcbf286ad5e0f69829a1a54dbd5cca794f9ea7f08446"} err="failed to get container status \"117e3d854b43cf44c841dcbf286ad5e0f69829a1a54dbd5cca794f9ea7f08446\": rpc error: code = NotFound desc = could not find container \"117e3d854b43cf44c841dcbf286ad5e0f69829a1a54dbd5cca794f9ea7f08446\": container with ID starting with 117e3d854b43cf44c841dcbf286ad5e0f69829a1a54dbd5cca794f9ea7f08446 not found: ID does not exist"
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.565403 4945 scope.go:117] "RemoveContainer" containerID="38570b10fafd0cf2d4482f3019b6351fbbe9a4c46209f1f89f8f0e96dbfce554"
Jan 09 01:55:34 crc kubenswrapper[4945]: E0109 01:55:34.565662 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38570b10fafd0cf2d4482f3019b6351fbbe9a4c46209f1f89f8f0e96dbfce554\": container with ID starting with 38570b10fafd0cf2d4482f3019b6351fbbe9a4c46209f1f89f8f0e96dbfce554 not found: ID does not exist" containerID="38570b10fafd0cf2d4482f3019b6351fbbe9a4c46209f1f89f8f0e96dbfce554"
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.565687 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38570b10fafd0cf2d4482f3019b6351fbbe9a4c46209f1f89f8f0e96dbfce554"} err="failed to get container status \"38570b10fafd0cf2d4482f3019b6351fbbe9a4c46209f1f89f8f0e96dbfce554\": rpc error: code = NotFound desc = could not find container \"38570b10fafd0cf2d4482f3019b6351fbbe9a4c46209f1f89f8f0e96dbfce554\": container with ID starting with 38570b10fafd0cf2d4482f3019b6351fbbe9a4c46209f1f89f8f0e96dbfce554 not found: ID does not exist"
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.598157 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7320826-a035-4f84-a817-58cc1653faaa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7320826-a035-4f84-a817-58cc1653faaa" (UID: "e7320826-a035-4f84-a817-58cc1653faaa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.643149 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7320826-a035-4f84-a817-58cc1653faaa-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.793945 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kwjvz"]
Jan 09 01:55:34 crc kubenswrapper[4945]: I0109 01:55:34.808668 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kwjvz"]
Jan 09 01:55:36 crc kubenswrapper[4945]: I0109 01:55:36.012416 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7320826-a035-4f84-a817-58cc1653faaa" path="/var/lib/kubelet/pods/e7320826-a035-4f84-a817-58cc1653faaa/volumes"
Jan 09 01:56:13 crc kubenswrapper[4945]: I0109 01:56:13.577871 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 01:56:13 crc kubenswrapper[4945]: I0109 01:56:13.578456 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 01:56:43 crc kubenswrapper[4945]: I0109 01:56:43.577894 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 01:56:43 crc kubenswrapper[4945]: I0109 01:56:43.578735 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 01:57:13 crc kubenswrapper[4945]: I0109 01:57:13.578119 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 09 01:57:13 crc kubenswrapper[4945]: I0109 01:57:13.578506 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 09 01:57:13 crc kubenswrapper[4945]: I0109 01:57:13.578548 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95"
Jan 09 01:57:13 crc kubenswrapper[4945]: I0109 01:57:13.579351 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3c991c305a9920c7952603ec6b4b80561265fd71736212f6f87a2434ae7d21a"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 09 01:57:13 crc kubenswrapper[4945]: I0109 01:57:13.579397 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://f3c991c305a9920c7952603ec6b4b80561265fd71736212f6f87a2434ae7d21a" gracePeriod=600
Jan 09 01:57:14 crc kubenswrapper[4945]: I0109 01:57:14.519059 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="f3c991c305a9920c7952603ec6b4b80561265fd71736212f6f87a2434ae7d21a" exitCode=0
Jan 09 01:57:14 crc kubenswrapper[4945]: I0109 01:57:14.519330 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"f3c991c305a9920c7952603ec6b4b80561265fd71736212f6f87a2434ae7d21a"}
Jan 09 01:57:14 crc kubenswrapper[4945]: I0109 01:57:14.519598 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713"}
Jan 09 01:57:14 crc kubenswrapper[4945]: I0109 01:57:14.519615 4945 scope.go:117] "RemoveContainer" containerID="f24fbf41c2453b5c1d2eb5caf8cba37d0b3a32e67fd38459888686c2b930f913"
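
At 01:57:13 the liveness probe against http://127.0.0.1:8798/health fails for the third consecutive 30s interval, so the kubelet declares the container unhealthy, records the "will be restarted" message, and kills it with the pod's termination grace period (600s here) before starting a replacement. A compact model of the probe-and-threshold logic; the failure threshold of 3 is the Kubernetes default, assumed rather than read from this pod's spec:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // healthy performs one HTTP liveness check; a refused connection is
    // exactly the failure output captured in the log above.
    func healthy(url string) bool {
        client := http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            fmt.Printf("Probe failed: %v\n", err) // "connect: connection refused"
            return false
        }
        resp.Body.Close()
        return resp.StatusCode < 400
    }

    func main() {
        const failureThreshold = 3 // assumed default
        failures := 0
        for i := 0; i < failureThreshold; i++ {
            if !healthy("http://127.0.0.1:8798/health") {
                failures++
            } else {
                failures = 0 // any success resets the count
            }
        }
        if failures >= failureThreshold {
            fmt.Println("Container failed liveness probe, will be restarted (gracePeriod=600)")
        }
    }
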
removing container" podUID="e7320826-a035-4f84-a817-58cc1653faaa" containerName="extract-content" Jan 09 01:57:31 crc kubenswrapper[4945]: I0109 01:57:31.111553 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7320826-a035-4f84-a817-58cc1653faaa" containerName="extract-content" Jan 09 01:57:31 crc kubenswrapper[4945]: E0109 01:57:31.111590 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7320826-a035-4f84-a817-58cc1653faaa" containerName="extract-utilities" Jan 09 01:57:31 crc kubenswrapper[4945]: I0109 01:57:31.111599 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7320826-a035-4f84-a817-58cc1653faaa" containerName="extract-utilities" Jan 09 01:57:31 crc kubenswrapper[4945]: E0109 01:57:31.111614 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7320826-a035-4f84-a817-58cc1653faaa" containerName="registry-server" Jan 09 01:57:31 crc kubenswrapper[4945]: I0109 01:57:31.111622 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7320826-a035-4f84-a817-58cc1653faaa" containerName="registry-server" Jan 09 01:57:31 crc kubenswrapper[4945]: I0109 01:57:31.111880 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7320826-a035-4f84-a817-58cc1653faaa" containerName="registry-server" Jan 09 01:57:31 crc kubenswrapper[4945]: I0109 01:57:31.113958 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wdpt6" Jan 09 01:57:31 crc kubenswrapper[4945]: I0109 01:57:31.132920 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wdpt6"] Jan 09 01:57:31 crc kubenswrapper[4945]: I0109 01:57:31.225203 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-catalog-content\") pod \"redhat-marketplace-wdpt6\" (UID: \"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c\") " pod="openshift-marketplace/redhat-marketplace-wdpt6" Jan 09 01:57:31 crc kubenswrapper[4945]: I0109 01:57:31.225372 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-utilities\") pod \"redhat-marketplace-wdpt6\" (UID: \"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c\") " pod="openshift-marketplace/redhat-marketplace-wdpt6" Jan 09 01:57:31 crc kubenswrapper[4945]: I0109 01:57:31.225478 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc7px\" (UniqueName: \"kubernetes.io/projected/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-kube-api-access-zc7px\") pod \"redhat-marketplace-wdpt6\" (UID: \"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c\") " pod="openshift-marketplace/redhat-marketplace-wdpt6" Jan 09 01:57:31 crc kubenswrapper[4945]: I0109 01:57:31.328076 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-catalog-content\") pod \"redhat-marketplace-wdpt6\" (UID: \"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c\") " pod="openshift-marketplace/redhat-marketplace-wdpt6" Jan 09 01:57:31 crc kubenswrapper[4945]: I0109 01:57:31.328153 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-utilities\") pod 
\"redhat-marketplace-wdpt6\" (UID: \"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c\") " pod="openshift-marketplace/redhat-marketplace-wdpt6" Jan 09 01:57:31 crc kubenswrapper[4945]: I0109 01:57:31.328208 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc7px\" (UniqueName: \"kubernetes.io/projected/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-kube-api-access-zc7px\") pod \"redhat-marketplace-wdpt6\" (UID: \"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c\") " pod="openshift-marketplace/redhat-marketplace-wdpt6" Jan 09 01:57:31 crc kubenswrapper[4945]: I0109 01:57:31.328579 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-catalog-content\") pod \"redhat-marketplace-wdpt6\" (UID: \"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c\") " pod="openshift-marketplace/redhat-marketplace-wdpt6" Jan 09 01:57:31 crc kubenswrapper[4945]: I0109 01:57:31.328772 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-utilities\") pod \"redhat-marketplace-wdpt6\" (UID: \"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c\") " pod="openshift-marketplace/redhat-marketplace-wdpt6" Jan 09 01:57:31 crc kubenswrapper[4945]: I0109 01:57:31.351977 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc7px\" (UniqueName: \"kubernetes.io/projected/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-kube-api-access-zc7px\") pod \"redhat-marketplace-wdpt6\" (UID: \"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c\") " pod="openshift-marketplace/redhat-marketplace-wdpt6" Jan 09 01:57:31 crc kubenswrapper[4945]: I0109 01:57:31.437895 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wdpt6" Jan 09 01:57:31 crc kubenswrapper[4945]: I0109 01:57:31.950602 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wdpt6"] Jan 09 01:57:32 crc kubenswrapper[4945]: I0109 01:57:32.695735 4945 generic.go:334] "Generic (PLEG): container finished" podID="c9efe750-ae2c-44d3-ba4d-5bdd67dc983c" containerID="c5c1eefa4631d9fbba3c6e59a0240303ca1b797a834039a8c2d4d38349688119" exitCode=0 Jan 09 01:57:32 crc kubenswrapper[4945]: I0109 01:57:32.695829 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdpt6" event={"ID":"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c","Type":"ContainerDied","Data":"c5c1eefa4631d9fbba3c6e59a0240303ca1b797a834039a8c2d4d38349688119"} Jan 09 01:57:32 crc kubenswrapper[4945]: I0109 01:57:32.696103 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdpt6" event={"ID":"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c","Type":"ContainerStarted","Data":"fcc7a270c7925dbf5cc1b7a0d661fdffff89bf0ada994c4038329f8fb83f6e67"} Jan 09 01:57:34 crc kubenswrapper[4945]: I0109 01:57:34.742603 4945 generic.go:334] "Generic (PLEG): container finished" podID="c9efe750-ae2c-44d3-ba4d-5bdd67dc983c" containerID="b0c35eb51ea56864577fc963a3c5c90b52c1e255466c4acd908afd81423a1c55" exitCode=0 Jan 09 01:57:34 crc kubenswrapper[4945]: I0109 01:57:34.742820 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdpt6" event={"ID":"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c","Type":"ContainerDied","Data":"b0c35eb51ea56864577fc963a3c5c90b52c1e255466c4acd908afd81423a1c55"} Jan 09 01:57:34 crc kubenswrapper[4945]: I0109 01:57:34.746775 4945 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 01:57:36 crc kubenswrapper[4945]: I0109 01:57:36.118161 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vsdkv"] Jan 09 01:57:36 crc kubenswrapper[4945]: I0109 01:57:36.121253 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vsdkv" Jan 09 01:57:36 crc kubenswrapper[4945]: I0109 01:57:36.135333 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vsdkv"] Jan 09 01:57:36 crc kubenswrapper[4945]: I0109 01:57:36.240268 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqz5s\" (UniqueName: \"kubernetes.io/projected/fbc0af81-185c-4502-a81d-0c843ba6d0cb-kube-api-access-fqz5s\") pod \"community-operators-vsdkv\" (UID: \"fbc0af81-185c-4502-a81d-0c843ba6d0cb\") " pod="openshift-marketplace/community-operators-vsdkv" Jan 09 01:57:36 crc kubenswrapper[4945]: I0109 01:57:36.240935 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc0af81-185c-4502-a81d-0c843ba6d0cb-utilities\") pod \"community-operators-vsdkv\" (UID: \"fbc0af81-185c-4502-a81d-0c843ba6d0cb\") " pod="openshift-marketplace/community-operators-vsdkv" Jan 09 01:57:36 crc kubenswrapper[4945]: I0109 01:57:36.241126 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc0af81-185c-4502-a81d-0c843ba6d0cb-catalog-content\") pod \"community-operators-vsdkv\" (UID: \"fbc0af81-185c-4502-a81d-0c843ba6d0cb\") " pod="openshift-marketplace/community-operators-vsdkv" Jan 09 01:57:36 crc kubenswrapper[4945]: I0109 01:57:36.343237 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc0af81-185c-4502-a81d-0c843ba6d0cb-utilities\") pod \"community-operators-vsdkv\" (UID: \"fbc0af81-185c-4502-a81d-0c843ba6d0cb\") " pod="openshift-marketplace/community-operators-vsdkv" Jan 09 01:57:36 crc kubenswrapper[4945]: I0109 01:57:36.343288 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc0af81-185c-4502-a81d-0c843ba6d0cb-catalog-content\") pod \"community-operators-vsdkv\" (UID: \"fbc0af81-185c-4502-a81d-0c843ba6d0cb\") " pod="openshift-marketplace/community-operators-vsdkv" Jan 09 01:57:36 crc kubenswrapper[4945]: I0109 01:57:36.343376 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqz5s\" (UniqueName: \"kubernetes.io/projected/fbc0af81-185c-4502-a81d-0c843ba6d0cb-kube-api-access-fqz5s\") pod \"community-operators-vsdkv\" (UID: \"fbc0af81-185c-4502-a81d-0c843ba6d0cb\") " pod="openshift-marketplace/community-operators-vsdkv" Jan 09 01:57:36 crc kubenswrapper[4945]: I0109 01:57:36.343793 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc0af81-185c-4502-a81d-0c843ba6d0cb-utilities\") pod \"community-operators-vsdkv\" (UID: \"fbc0af81-185c-4502-a81d-0c843ba6d0cb\") " pod="openshift-marketplace/community-operators-vsdkv" Jan 09 01:57:36 crc kubenswrapper[4945]: I0109 01:57:36.343890 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc0af81-185c-4502-a81d-0c843ba6d0cb-catalog-content\") pod \"community-operators-vsdkv\" (UID: \"fbc0af81-185c-4502-a81d-0c843ba6d0cb\") " pod="openshift-marketplace/community-operators-vsdkv" Jan 09 01:57:36 crc kubenswrapper[4945]: I0109 01:57:36.383984 4945 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fqz5s\" (UniqueName: \"kubernetes.io/projected/fbc0af81-185c-4502-a81d-0c843ba6d0cb-kube-api-access-fqz5s\") pod \"community-operators-vsdkv\" (UID: \"fbc0af81-185c-4502-a81d-0c843ba6d0cb\") " pod="openshift-marketplace/community-operators-vsdkv" Jan 09 01:57:36 crc kubenswrapper[4945]: I0109 01:57:36.481218 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vsdkv" Jan 09 01:57:36 crc kubenswrapper[4945]: I0109 01:57:36.769054 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdpt6" event={"ID":"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c","Type":"ContainerStarted","Data":"8d83e68eedf958331b19f7f9722593e4bef0e58c241c9b3a8f7bb725e5e4bd18"} Jan 09 01:57:36 crc kubenswrapper[4945]: I0109 01:57:36.816879 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wdpt6" podStartSLOduration=3.149703122 podStartE2EDuration="5.81685587s" podCreationTimestamp="2026-01-09 01:57:31 +0000 UTC" firstStartedPulling="2026-01-09 01:57:32.697898848 +0000 UTC m=+9723.009057804" lastFinishedPulling="2026-01-09 01:57:35.365051566 +0000 UTC m=+9725.676210552" observedRunningTime="2026-01-09 01:57:36.807387226 +0000 UTC m=+9727.118546172" watchObservedRunningTime="2026-01-09 01:57:36.81685587 +0000 UTC m=+9727.128014816" Jan 09 01:57:36 crc kubenswrapper[4945]: I0109 01:57:36.993022 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vsdkv"] Jan 09 01:57:37 crc kubenswrapper[4945]: W0109 01:57:37.777170 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbc0af81_185c_4502_a81d_0c843ba6d0cb.slice/crio-4602a714eb16de629e57303dbf3e2a9b7bae0d558a8bfc574964677946489f0b WatchSource:0}: Error finding container 4602a714eb16de629e57303dbf3e2a9b7bae0d558a8bfc574964677946489f0b: Status 404 returned error can't find the container with id 4602a714eb16de629e57303dbf3e2a9b7bae0d558a8bfc574964677946489f0b Jan 09 01:57:38 crc kubenswrapper[4945]: I0109 01:57:38.816305 4945 generic.go:334] "Generic (PLEG): container finished" podID="fbc0af81-185c-4502-a81d-0c843ba6d0cb" containerID="4dd5d47b4c199d9a04bbf87c624e1d16dc8141d847c620b2c1597900a3a08867" exitCode=0 Jan 09 01:57:38 crc kubenswrapper[4945]: I0109 01:57:38.816979 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsdkv" event={"ID":"fbc0af81-185c-4502-a81d-0c843ba6d0cb","Type":"ContainerDied","Data":"4dd5d47b4c199d9a04bbf87c624e1d16dc8141d847c620b2c1597900a3a08867"} Jan 09 01:57:38 crc kubenswrapper[4945]: I0109 01:57:38.817078 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsdkv" event={"ID":"fbc0af81-185c-4502-a81d-0c843ba6d0cb","Type":"ContainerStarted","Data":"4602a714eb16de629e57303dbf3e2a9b7bae0d558a8bfc574964677946489f0b"} Jan 09 01:57:41 crc kubenswrapper[4945]: I0109 01:57:41.438041 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wdpt6" Jan 09 01:57:41 crc kubenswrapper[4945]: I0109 01:57:41.438684 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wdpt6" Jan 09 01:57:41 crc kubenswrapper[4945]: I0109 01:57:41.534098 4945 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wdpt6" Jan 09 01:57:41 crc kubenswrapper[4945]: I0109 01:57:41.890318 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wdpt6" Jan 09 01:57:44 crc kubenswrapper[4945]: I0109 01:57:44.295140 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wdpt6"] Jan 09 01:57:44 crc kubenswrapper[4945]: I0109 01:57:44.872656 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wdpt6" podUID="c9efe750-ae2c-44d3-ba4d-5bdd67dc983c" containerName="registry-server" containerID="cri-o://8d83e68eedf958331b19f7f9722593e4bef0e58c241c9b3a8f7bb725e5e4bd18" gracePeriod=2 Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.621295 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wdpt6" Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.677120 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-utilities\") pod \"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c\" (UID: \"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c\") " Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.678106 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-utilities" (OuterVolumeSpecName: "utilities") pod "c9efe750-ae2c-44d3-ba4d-5bdd67dc983c" (UID: "c9efe750-ae2c-44d3-ba4d-5bdd67dc983c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.677224 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-catalog-content\") pod \"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c\" (UID: \"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c\") " Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.680296 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc7px\" (UniqueName: \"kubernetes.io/projected/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-kube-api-access-zc7px\") pod \"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c\" (UID: \"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c\") " Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.681703 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.689924 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-kube-api-access-zc7px" (OuterVolumeSpecName: "kube-api-access-zc7px") pod "c9efe750-ae2c-44d3-ba4d-5bdd67dc983c" (UID: "c9efe750-ae2c-44d3-ba4d-5bdd67dc983c"). InnerVolumeSpecName "kube-api-access-zc7px". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.702521 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9efe750-ae2c-44d3-ba4d-5bdd67dc983c" (UID: "c9efe750-ae2c-44d3-ba4d-5bdd67dc983c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.784656 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.784688 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc7px\" (UniqueName: \"kubernetes.io/projected/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c-kube-api-access-zc7px\") on node \"crc\" DevicePath \"\"" Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.883743 4945 generic.go:334] "Generic (PLEG): container finished" podID="c9efe750-ae2c-44d3-ba4d-5bdd67dc983c" containerID="8d83e68eedf958331b19f7f9722593e4bef0e58c241c9b3a8f7bb725e5e4bd18" exitCode=0 Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.883829 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdpt6" event={"ID":"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c","Type":"ContainerDied","Data":"8d83e68eedf958331b19f7f9722593e4bef0e58c241c9b3a8f7bb725e5e4bd18"} Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.883832 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wdpt6" Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.883880 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdpt6" event={"ID":"c9efe750-ae2c-44d3-ba4d-5bdd67dc983c","Type":"ContainerDied","Data":"fcc7a270c7925dbf5cc1b7a0d661fdffff89bf0ada994c4038329f8fb83f6e67"} Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.883903 4945 scope.go:117] "RemoveContainer" containerID="8d83e68eedf958331b19f7f9722593e4bef0e58c241c9b3a8f7bb725e5e4bd18" Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.887673 4945 generic.go:334] "Generic (PLEG): container finished" podID="fbc0af81-185c-4502-a81d-0c843ba6d0cb" containerID="971244c5c8e85df56bca95da98d48dba412205dd912e61062ccc357d9829011b" exitCode=0 Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.887728 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsdkv" event={"ID":"fbc0af81-185c-4502-a81d-0c843ba6d0cb","Type":"ContainerDied","Data":"971244c5c8e85df56bca95da98d48dba412205dd912e61062ccc357d9829011b"} Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.917367 4945 scope.go:117] "RemoveContainer" containerID="b0c35eb51ea56864577fc963a3c5c90b52c1e255466c4acd908afd81423a1c55" Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.936093 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wdpt6"] Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.943791 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wdpt6"] Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.956531 4945 scope.go:117] "RemoveContainer" 
containerID="c5c1eefa4631d9fbba3c6e59a0240303ca1b797a834039a8c2d4d38349688119" Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.997493 4945 scope.go:117] "RemoveContainer" containerID="8d83e68eedf958331b19f7f9722593e4bef0e58c241c9b3a8f7bb725e5e4bd18" Jan 09 01:57:45 crc kubenswrapper[4945]: E0109 01:57:45.997966 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d83e68eedf958331b19f7f9722593e4bef0e58c241c9b3a8f7bb725e5e4bd18\": container with ID starting with 8d83e68eedf958331b19f7f9722593e4bef0e58c241c9b3a8f7bb725e5e4bd18 not found: ID does not exist" containerID="8d83e68eedf958331b19f7f9722593e4bef0e58c241c9b3a8f7bb725e5e4bd18" Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.998029 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d83e68eedf958331b19f7f9722593e4bef0e58c241c9b3a8f7bb725e5e4bd18"} err="failed to get container status \"8d83e68eedf958331b19f7f9722593e4bef0e58c241c9b3a8f7bb725e5e4bd18\": rpc error: code = NotFound desc = could not find container \"8d83e68eedf958331b19f7f9722593e4bef0e58c241c9b3a8f7bb725e5e4bd18\": container with ID starting with 8d83e68eedf958331b19f7f9722593e4bef0e58c241c9b3a8f7bb725e5e4bd18 not found: ID does not exist" Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.998058 4945 scope.go:117] "RemoveContainer" containerID="b0c35eb51ea56864577fc963a3c5c90b52c1e255466c4acd908afd81423a1c55" Jan 09 01:57:45 crc kubenswrapper[4945]: E0109 01:57:45.998370 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0c35eb51ea56864577fc963a3c5c90b52c1e255466c4acd908afd81423a1c55\": container with ID starting with b0c35eb51ea56864577fc963a3c5c90b52c1e255466c4acd908afd81423a1c55 not found: ID does not exist" containerID="b0c35eb51ea56864577fc963a3c5c90b52c1e255466c4acd908afd81423a1c55" Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.998400 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0c35eb51ea56864577fc963a3c5c90b52c1e255466c4acd908afd81423a1c55"} err="failed to get container status \"b0c35eb51ea56864577fc963a3c5c90b52c1e255466c4acd908afd81423a1c55\": rpc error: code = NotFound desc = could not find container \"b0c35eb51ea56864577fc963a3c5c90b52c1e255466c4acd908afd81423a1c55\": container with ID starting with b0c35eb51ea56864577fc963a3c5c90b52c1e255466c4acd908afd81423a1c55 not found: ID does not exist" Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.998421 4945 scope.go:117] "RemoveContainer" containerID="c5c1eefa4631d9fbba3c6e59a0240303ca1b797a834039a8c2d4d38349688119" Jan 09 01:57:45 crc kubenswrapper[4945]: E0109 01:57:45.998668 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5c1eefa4631d9fbba3c6e59a0240303ca1b797a834039a8c2d4d38349688119\": container with ID starting with c5c1eefa4631d9fbba3c6e59a0240303ca1b797a834039a8c2d4d38349688119 not found: ID does not exist" containerID="c5c1eefa4631d9fbba3c6e59a0240303ca1b797a834039a8c2d4d38349688119" Jan 09 01:57:45 crc kubenswrapper[4945]: I0109 01:57:45.998696 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5c1eefa4631d9fbba3c6e59a0240303ca1b797a834039a8c2d4d38349688119"} err="failed to get container status \"c5c1eefa4631d9fbba3c6e59a0240303ca1b797a834039a8c2d4d38349688119\": rpc error: code = 
NotFound desc = could not find container \"c5c1eefa4631d9fbba3c6e59a0240303ca1b797a834039a8c2d4d38349688119\": container with ID starting with c5c1eefa4631d9fbba3c6e59a0240303ca1b797a834039a8c2d4d38349688119 not found: ID does not exist" Jan 09 01:57:46 crc kubenswrapper[4945]: I0109 01:57:46.011301 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9efe750-ae2c-44d3-ba4d-5bdd67dc983c" path="/var/lib/kubelet/pods/c9efe750-ae2c-44d3-ba4d-5bdd67dc983c/volumes" Jan 09 01:57:46 crc kubenswrapper[4945]: I0109 01:57:46.900233 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsdkv" event={"ID":"fbc0af81-185c-4502-a81d-0c843ba6d0cb","Type":"ContainerStarted","Data":"36f7b03f88818edf06db858c52cc6b05802e2f16f952649ea34fe71c311f5f44"} Jan 09 01:57:46 crc kubenswrapper[4945]: I0109 01:57:46.928647 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vsdkv" podStartSLOduration=3.346940881 podStartE2EDuration="10.928621214s" podCreationTimestamp="2026-01-09 01:57:36 +0000 UTC" firstStartedPulling="2026-01-09 01:57:38.819339958 +0000 UTC m=+9729.130498924" lastFinishedPulling="2026-01-09 01:57:46.401020311 +0000 UTC m=+9736.712179257" observedRunningTime="2026-01-09 01:57:46.927183969 +0000 UTC m=+9737.238342915" watchObservedRunningTime="2026-01-09 01:57:46.928621214 +0000 UTC m=+9737.239780210" Jan 09 01:57:56 crc kubenswrapper[4945]: I0109 01:57:56.481620 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vsdkv" Jan 09 01:57:56 crc kubenswrapper[4945]: I0109 01:57:56.482187 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vsdkv" Jan 09 01:57:56 crc kubenswrapper[4945]: I0109 01:57:56.530017 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vsdkv" Jan 09 01:57:57 crc kubenswrapper[4945]: I0109 01:57:57.054105 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vsdkv" Jan 09 01:57:59 crc kubenswrapper[4945]: I0109 01:57:59.538158 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vsdkv"] Jan 09 01:58:00 crc kubenswrapper[4945]: I0109 01:58:00.100268 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8t2dg"] Jan 09 01:58:00 crc kubenswrapper[4945]: I0109 01:58:00.101068 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8t2dg" podUID="61c160d2-8041-4fec-aa06-1d1fc66438cf" containerName="registry-server" containerID="cri-o://89c932a02f33809bf7e735049a86ef9b2bcb7659f08c53402649e43eb30dcc44" gracePeriod=2 Jan 09 01:58:00 crc kubenswrapper[4945]: I0109 01:58:00.628578 4945 util.go:48] "No ready sandbox for pod can be found. 
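
"Killing container with a grace period" is the graceful-stop half of pod deletion: the kubelet asks the runtime to stop the container, waits up to gracePeriod (2s for these short-lived catalog pods, 600s for the machine-config daemon above), and only force-kills if the wait expires; the PLEG then reports ContainerDied and volume teardown begins. A toy model of that wait-or-kill decision (the real path is kuberuntime_container.go calling the CRI StopContainer RPC, not this hypothetical helper):

    package main

    import (
        "fmt"
        "time"
    )

    // stopContainer models the stop-then-force-kill sequence: signal the
    // container, wait up to the grace period for a clean exit, then SIGKILL.
    func stopContainer(id string, grace time.Duration, exited <-chan struct{}) {
        fmt.Printf("Killing container %s with a grace period gracePeriod=%v\n", id, grace)
        select {
        case <-exited:
            fmt.Println("container exited cleanly (exitCode=0 in the log)")
        case <-time.After(grace):
            fmt.Println("grace period expired; sending SIGKILL")
        }
    }

    func main() {
        exited := make(chan struct{})
        go func() {
            time.Sleep(500 * time.Millisecond) // registry-server shuts down fast
            close(exited)
        }()
        stopContainer("cri-o://89c932a0", 2*time.Second, exited)
        // The PLEG then reports ContainerDied and teardown proceeds below.
    }
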
Jan 09 01:57:56 crc kubenswrapper[4945]: I0109 01:57:56.481620 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vsdkv"
Jan 09 01:57:56 crc kubenswrapper[4945]: I0109 01:57:56.482187 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vsdkv"
Jan 09 01:57:56 crc kubenswrapper[4945]: I0109 01:57:56.530017 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vsdkv"
Jan 09 01:57:57 crc kubenswrapper[4945]: I0109 01:57:57.054105 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vsdkv"
Jan 09 01:57:59 crc kubenswrapper[4945]: I0109 01:57:59.538158 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vsdkv"]
Jan 09 01:58:00 crc kubenswrapper[4945]: I0109 01:58:00.100268 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8t2dg"]
Jan 09 01:58:00 crc kubenswrapper[4945]: I0109 01:58:00.101068 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8t2dg" podUID="61c160d2-8041-4fec-aa06-1d1fc66438cf" containerName="registry-server" containerID="cri-o://89c932a02f33809bf7e735049a86ef9b2bcb7659f08c53402649e43eb30dcc44" gracePeriod=2
Jan 09 01:58:00 crc kubenswrapper[4945]: I0109 01:58:00.628578 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8t2dg"
Jan 09 01:58:00 crc kubenswrapper[4945]: I0109 01:58:00.722829 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61c160d2-8041-4fec-aa06-1d1fc66438cf-catalog-content\") pod \"61c160d2-8041-4fec-aa06-1d1fc66438cf\" (UID: \"61c160d2-8041-4fec-aa06-1d1fc66438cf\") "
Jan 09 01:58:00 crc kubenswrapper[4945]: I0109 01:58:00.723021 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61c160d2-8041-4fec-aa06-1d1fc66438cf-utilities\") pod \"61c160d2-8041-4fec-aa06-1d1fc66438cf\" (UID: \"61c160d2-8041-4fec-aa06-1d1fc66438cf\") "
Jan 09 01:58:00 crc kubenswrapper[4945]: I0109 01:58:00.723135 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95gvb\" (UniqueName: \"kubernetes.io/projected/61c160d2-8041-4fec-aa06-1d1fc66438cf-kube-api-access-95gvb\") pod \"61c160d2-8041-4fec-aa06-1d1fc66438cf\" (UID: \"61c160d2-8041-4fec-aa06-1d1fc66438cf\") "
Jan 09 01:58:00 crc kubenswrapper[4945]: I0109 01:58:00.726566 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61c160d2-8041-4fec-aa06-1d1fc66438cf-utilities" (OuterVolumeSpecName: "utilities") pod "61c160d2-8041-4fec-aa06-1d1fc66438cf" (UID: "61c160d2-8041-4fec-aa06-1d1fc66438cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 09 01:58:00 crc kubenswrapper[4945]: I0109 01:58:00.734926 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61c160d2-8041-4fec-aa06-1d1fc66438cf-kube-api-access-95gvb" (OuterVolumeSpecName: "kube-api-access-95gvb") pod "61c160d2-8041-4fec-aa06-1d1fc66438cf" (UID: "61c160d2-8041-4fec-aa06-1d1fc66438cf"). InnerVolumeSpecName "kube-api-access-95gvb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 09 01:58:00 crc kubenswrapper[4945]: I0109 01:58:00.790604 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61c160d2-8041-4fec-aa06-1d1fc66438cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61c160d2-8041-4fec-aa06-1d1fc66438cf" (UID: "61c160d2-8041-4fec-aa06-1d1fc66438cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 01:58:00 crc kubenswrapper[4945]: I0109 01:58:00.826252 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61c160d2-8041-4fec-aa06-1d1fc66438cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 01:58:00 crc kubenswrapper[4945]: I0109 01:58:00.826294 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95gvb\" (UniqueName: \"kubernetes.io/projected/61c160d2-8041-4fec-aa06-1d1fc66438cf-kube-api-access-95gvb\") on node \"crc\" DevicePath \"\"" Jan 09 01:58:00 crc kubenswrapper[4945]: I0109 01:58:00.826310 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61c160d2-8041-4fec-aa06-1d1fc66438cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 01:58:01 crc kubenswrapper[4945]: I0109 01:58:01.042412 4945 generic.go:334] "Generic (PLEG): container finished" podID="61c160d2-8041-4fec-aa06-1d1fc66438cf" containerID="89c932a02f33809bf7e735049a86ef9b2bcb7659f08c53402649e43eb30dcc44" exitCode=0 Jan 09 01:58:01 crc kubenswrapper[4945]: I0109 01:58:01.042777 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8t2dg" event={"ID":"61c160d2-8041-4fec-aa06-1d1fc66438cf","Type":"ContainerDied","Data":"89c932a02f33809bf7e735049a86ef9b2bcb7659f08c53402649e43eb30dcc44"} Jan 09 01:58:01 crc kubenswrapper[4945]: I0109 01:58:01.042815 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8t2dg" event={"ID":"61c160d2-8041-4fec-aa06-1d1fc66438cf","Type":"ContainerDied","Data":"c8cda3f2671e88c995d471f90ca3a4cb569a853ce127338c4ee4b2df86634c58"} Jan 09 01:58:01 crc kubenswrapper[4945]: I0109 01:58:01.042836 4945 scope.go:117] "RemoveContainer" containerID="89c932a02f33809bf7e735049a86ef9b2bcb7659f08c53402649e43eb30dcc44" Jan 09 01:58:01 crc kubenswrapper[4945]: I0109 01:58:01.042983 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8t2dg" Jan 09 01:58:01 crc kubenswrapper[4945]: I0109 01:58:01.074985 4945 scope.go:117] "RemoveContainer" containerID="ff205755517c685c895338c8e35ce0a57138619cda60513022be3a6ed4947285" Jan 09 01:58:01 crc kubenswrapper[4945]: I0109 01:58:01.100895 4945 scope.go:117] "RemoveContainer" containerID="a4316600bda564bc10d77506bdb7ea699ac488a0f4f1dcb94242809860355927" Jan 09 01:58:01 crc kubenswrapper[4945]: I0109 01:58:01.107063 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8t2dg"] Jan 09 01:58:01 crc kubenswrapper[4945]: I0109 01:58:01.116642 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8t2dg"] Jan 09 01:58:01 crc kubenswrapper[4945]: I0109 01:58:01.144752 4945 scope.go:117] "RemoveContainer" containerID="89c932a02f33809bf7e735049a86ef9b2bcb7659f08c53402649e43eb30dcc44" Jan 09 01:58:01 crc kubenswrapper[4945]: E0109 01:58:01.145814 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89c932a02f33809bf7e735049a86ef9b2bcb7659f08c53402649e43eb30dcc44\": container with ID starting with 89c932a02f33809bf7e735049a86ef9b2bcb7659f08c53402649e43eb30dcc44 not found: ID does not exist" containerID="89c932a02f33809bf7e735049a86ef9b2bcb7659f08c53402649e43eb30dcc44" Jan 09 01:58:01 crc kubenswrapper[4945]: I0109 01:58:01.145855 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89c932a02f33809bf7e735049a86ef9b2bcb7659f08c53402649e43eb30dcc44"} err="failed to get container status \"89c932a02f33809bf7e735049a86ef9b2bcb7659f08c53402649e43eb30dcc44\": rpc error: code = NotFound desc = could not find container \"89c932a02f33809bf7e735049a86ef9b2bcb7659f08c53402649e43eb30dcc44\": container with ID starting with 89c932a02f33809bf7e735049a86ef9b2bcb7659f08c53402649e43eb30dcc44 not found: ID does not exist" Jan 09 01:58:01 crc kubenswrapper[4945]: I0109 01:58:01.145883 4945 scope.go:117] "RemoveContainer" containerID="ff205755517c685c895338c8e35ce0a57138619cda60513022be3a6ed4947285" Jan 09 01:58:01 crc kubenswrapper[4945]: E0109 01:58:01.146437 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff205755517c685c895338c8e35ce0a57138619cda60513022be3a6ed4947285\": container with ID starting with ff205755517c685c895338c8e35ce0a57138619cda60513022be3a6ed4947285 not found: ID does not exist" containerID="ff205755517c685c895338c8e35ce0a57138619cda60513022be3a6ed4947285" Jan 09 01:58:01 crc kubenswrapper[4945]: I0109 01:58:01.146461 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff205755517c685c895338c8e35ce0a57138619cda60513022be3a6ed4947285"} err="failed to get container status \"ff205755517c685c895338c8e35ce0a57138619cda60513022be3a6ed4947285\": rpc error: code = NotFound desc = could not find container \"ff205755517c685c895338c8e35ce0a57138619cda60513022be3a6ed4947285\": container with ID starting with ff205755517c685c895338c8e35ce0a57138619cda60513022be3a6ed4947285 not found: ID does not exist" Jan 09 01:58:01 crc kubenswrapper[4945]: I0109 01:58:01.146476 4945 scope.go:117] "RemoveContainer" containerID="a4316600bda564bc10d77506bdb7ea699ac488a0f4f1dcb94242809860355927" Jan 09 01:58:01 crc kubenswrapper[4945]: E0109 01:58:01.146755 4945 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a4316600bda564bc10d77506bdb7ea699ac488a0f4f1dcb94242809860355927\": container with ID starting with a4316600bda564bc10d77506bdb7ea699ac488a0f4f1dcb94242809860355927 not found: ID does not exist" containerID="a4316600bda564bc10d77506bdb7ea699ac488a0f4f1dcb94242809860355927" Jan 09 01:58:01 crc kubenswrapper[4945]: I0109 01:58:01.146792 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4316600bda564bc10d77506bdb7ea699ac488a0f4f1dcb94242809860355927"} err="failed to get container status \"a4316600bda564bc10d77506bdb7ea699ac488a0f4f1dcb94242809860355927\": rpc error: code = NotFound desc = could not find container \"a4316600bda564bc10d77506bdb7ea699ac488a0f4f1dcb94242809860355927\": container with ID starting with a4316600bda564bc10d77506bdb7ea699ac488a0f4f1dcb94242809860355927 not found: ID does not exist" Jan 09 01:58:02 crc kubenswrapper[4945]: I0109 01:58:02.016076 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61c160d2-8041-4fec-aa06-1d1fc66438cf" path="/var/lib/kubelet/pods/61c160d2-8041-4fec-aa06-1d1fc66438cf/volumes" Jan 09 01:58:46 crc kubenswrapper[4945]: I0109 01:58:46.722104 4945 scope.go:117] "RemoveContainer" containerID="db4178a0d6b9e9648682fb761e4c0853f7b1d09663a8cbe8ffc65bd5b6b4752e" Jan 09 01:59:13 crc kubenswrapper[4945]: I0109 01:59:13.578961 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:59:13 crc kubenswrapper[4945]: I0109 01:59:13.579852 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 01:59:43 crc kubenswrapper[4945]: I0109 01:59:43.578421 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 01:59:43 crc kubenswrapper[4945]: I0109 01:59:43.579176 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.168472 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k"] Jan 09 02:00:00 crc kubenswrapper[4945]: E0109 02:00:00.169640 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9efe750-ae2c-44d3-ba4d-5bdd67dc983c" containerName="extract-utilities" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.169660 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9efe750-ae2c-44d3-ba4d-5bdd67dc983c" containerName="extract-utilities" Jan 09 02:00:00 crc kubenswrapper[4945]: E0109 02:00:00.169676 4945 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c9efe750-ae2c-44d3-ba4d-5bdd67dc983c" containerName="extract-content" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.169684 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9efe750-ae2c-44d3-ba4d-5bdd67dc983c" containerName="extract-content" Jan 09 02:00:00 crc kubenswrapper[4945]: E0109 02:00:00.169700 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c160d2-8041-4fec-aa06-1d1fc66438cf" containerName="registry-server" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.169707 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c160d2-8041-4fec-aa06-1d1fc66438cf" containerName="registry-server" Jan 09 02:00:00 crc kubenswrapper[4945]: E0109 02:00:00.169734 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9efe750-ae2c-44d3-ba4d-5bdd67dc983c" containerName="registry-server" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.169742 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9efe750-ae2c-44d3-ba4d-5bdd67dc983c" containerName="registry-server" Jan 09 02:00:00 crc kubenswrapper[4945]: E0109 02:00:00.169760 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c160d2-8041-4fec-aa06-1d1fc66438cf" containerName="extract-utilities" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.169771 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c160d2-8041-4fec-aa06-1d1fc66438cf" containerName="extract-utilities" Jan 09 02:00:00 crc kubenswrapper[4945]: E0109 02:00:00.169785 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c160d2-8041-4fec-aa06-1d1fc66438cf" containerName="extract-content" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.169792 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c160d2-8041-4fec-aa06-1d1fc66438cf" containerName="extract-content" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.170011 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9efe750-ae2c-44d3-ba4d-5bdd67dc983c" containerName="registry-server" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.170054 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c160d2-8041-4fec-aa06-1d1fc66438cf" containerName="registry-server" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.170825 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.173263 4945 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.173544 4945 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.193508 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k"] Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.343721 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cf2551d-14e5-4276-b68d-67368de2ceee-secret-volume\") pod \"collect-profiles-29465400-z9z2k\" (UID: \"6cf2551d-14e5-4276-b68d-67368de2ceee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.343812 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqz7t\" (UniqueName: \"kubernetes.io/projected/6cf2551d-14e5-4276-b68d-67368de2ceee-kube-api-access-nqz7t\") pod \"collect-profiles-29465400-z9z2k\" (UID: \"6cf2551d-14e5-4276-b68d-67368de2ceee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.343960 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cf2551d-14e5-4276-b68d-67368de2ceee-config-volume\") pod \"collect-profiles-29465400-z9z2k\" (UID: \"6cf2551d-14e5-4276-b68d-67368de2ceee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.446153 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cf2551d-14e5-4276-b68d-67368de2ceee-config-volume\") pod \"collect-profiles-29465400-z9z2k\" (UID: \"6cf2551d-14e5-4276-b68d-67368de2ceee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.446402 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cf2551d-14e5-4276-b68d-67368de2ceee-secret-volume\") pod \"collect-profiles-29465400-z9z2k\" (UID: \"6cf2551d-14e5-4276-b68d-67368de2ceee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.446528 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqz7t\" (UniqueName: \"kubernetes.io/projected/6cf2551d-14e5-4276-b68d-67368de2ceee-kube-api-access-nqz7t\") pod \"collect-profiles-29465400-z9z2k\" (UID: \"6cf2551d-14e5-4276-b68d-67368de2ceee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.448494 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cf2551d-14e5-4276-b68d-67368de2ceee-config-volume\") pod 
\"collect-profiles-29465400-z9z2k\" (UID: \"6cf2551d-14e5-4276-b68d-67368de2ceee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.456045 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cf2551d-14e5-4276-b68d-67368de2ceee-secret-volume\") pod \"collect-profiles-29465400-z9z2k\" (UID: \"6cf2551d-14e5-4276-b68d-67368de2ceee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.468889 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqz7t\" (UniqueName: \"kubernetes.io/projected/6cf2551d-14e5-4276-b68d-67368de2ceee-kube-api-access-nqz7t\") pod \"collect-profiles-29465400-z9z2k\" (UID: \"6cf2551d-14e5-4276-b68d-67368de2ceee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k" Jan 09 02:00:00 crc kubenswrapper[4945]: I0109 02:00:00.512532 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k" Jan 09 02:00:01 crc kubenswrapper[4945]: I0109 02:00:01.016900 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k"] Jan 09 02:00:01 crc kubenswrapper[4945]: I0109 02:00:01.445811 4945 generic.go:334] "Generic (PLEG): container finished" podID="6cf2551d-14e5-4276-b68d-67368de2ceee" containerID="3b08215f98ab90045b3dc3361a040cc6b634954250fd4cf9f6305cde65d74fcb" exitCode=0 Jan 09 02:00:01 crc kubenswrapper[4945]: I0109 02:00:01.445863 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k" event={"ID":"6cf2551d-14e5-4276-b68d-67368de2ceee","Type":"ContainerDied","Data":"3b08215f98ab90045b3dc3361a040cc6b634954250fd4cf9f6305cde65d74fcb"} Jan 09 02:00:01 crc kubenswrapper[4945]: I0109 02:00:01.445892 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k" event={"ID":"6cf2551d-14e5-4276-b68d-67368de2ceee","Type":"ContainerStarted","Data":"be33da8a275b23103184f9356b02d66ed6a79e08f06891b9c8d1110701fd708d"} Jan 09 02:00:02 crc kubenswrapper[4945]: I0109 02:00:02.885048 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k" Jan 09 02:00:02 crc kubenswrapper[4945]: I0109 02:00:02.899602 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqz7t\" (UniqueName: \"kubernetes.io/projected/6cf2551d-14e5-4276-b68d-67368de2ceee-kube-api-access-nqz7t\") pod \"6cf2551d-14e5-4276-b68d-67368de2ceee\" (UID: \"6cf2551d-14e5-4276-b68d-67368de2ceee\") " Jan 09 02:00:02 crc kubenswrapper[4945]: I0109 02:00:02.899697 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cf2551d-14e5-4276-b68d-67368de2ceee-config-volume\") pod \"6cf2551d-14e5-4276-b68d-67368de2ceee\" (UID: \"6cf2551d-14e5-4276-b68d-67368de2ceee\") " Jan 09 02:00:02 crc kubenswrapper[4945]: I0109 02:00:02.900486 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cf2551d-14e5-4276-b68d-67368de2ceee-config-volume" (OuterVolumeSpecName: "config-volume") pod "6cf2551d-14e5-4276-b68d-67368de2ceee" (UID: "6cf2551d-14e5-4276-b68d-67368de2ceee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 09 02:00:02 crc kubenswrapper[4945]: I0109 02:00:02.905832 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cf2551d-14e5-4276-b68d-67368de2ceee-kube-api-access-nqz7t" (OuterVolumeSpecName: "kube-api-access-nqz7t") pod "6cf2551d-14e5-4276-b68d-67368de2ceee" (UID: "6cf2551d-14e5-4276-b68d-67368de2ceee"). InnerVolumeSpecName "kube-api-access-nqz7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 02:00:03 crc kubenswrapper[4945]: I0109 02:00:03.000605 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cf2551d-14e5-4276-b68d-67368de2ceee-secret-volume\") pod \"6cf2551d-14e5-4276-b68d-67368de2ceee\" (UID: \"6cf2551d-14e5-4276-b68d-67368de2ceee\") " Jan 09 02:00:03 crc kubenswrapper[4945]: I0109 02:00:03.001928 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqz7t\" (UniqueName: \"kubernetes.io/projected/6cf2551d-14e5-4276-b68d-67368de2ceee-kube-api-access-nqz7t\") on node \"crc\" DevicePath \"\"" Jan 09 02:00:03 crc kubenswrapper[4945]: I0109 02:00:03.001950 4945 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cf2551d-14e5-4276-b68d-67368de2ceee-config-volume\") on node \"crc\" DevicePath \"\"" Jan 09 02:00:03 crc kubenswrapper[4945]: I0109 02:00:03.009265 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cf2551d-14e5-4276-b68d-67368de2ceee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6cf2551d-14e5-4276-b68d-67368de2ceee" (UID: "6cf2551d-14e5-4276-b68d-67368de2ceee"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 02:00:03 crc kubenswrapper[4945]: I0109 02:00:03.103975 4945 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6cf2551d-14e5-4276-b68d-67368de2ceee-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 09 02:00:03 crc kubenswrapper[4945]: I0109 02:00:03.475952 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k" event={"ID":"6cf2551d-14e5-4276-b68d-67368de2ceee","Type":"ContainerDied","Data":"be33da8a275b23103184f9356b02d66ed6a79e08f06891b9c8d1110701fd708d"} Jan 09 02:00:03 crc kubenswrapper[4945]: I0109 02:00:03.475993 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be33da8a275b23103184f9356b02d66ed6a79e08f06891b9c8d1110701fd708d" Jan 09 02:00:03 crc kubenswrapper[4945]: I0109 02:00:03.476066 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29465400-z9z2k" Jan 09 02:00:03 crc kubenswrapper[4945]: I0109 02:00:03.962837 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5"] Jan 09 02:00:03 crc kubenswrapper[4945]: I0109 02:00:03.977168 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29465355-bhrg5"] Jan 09 02:00:04 crc kubenswrapper[4945]: I0109 02:00:04.014630 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59174838-ea08-4d97-91d2-9193dabd276f" path="/var/lib/kubelet/pods/59174838-ea08-4d97-91d2-9193dabd276f/volumes" Jan 09 02:00:13 crc kubenswrapper[4945]: I0109 02:00:13.578656 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 02:00:13 crc kubenswrapper[4945]: I0109 02:00:13.579608 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 02:00:13 crc kubenswrapper[4945]: I0109 02:00:13.579687 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 09 02:00:13 crc kubenswrapper[4945]: I0109 02:00:13.581275 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713"} pod="openshift-machine-config-operator/machine-config-daemon-vbm95" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 09 02:00:13 crc kubenswrapper[4945]: I0109 02:00:13.581388 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" gracePeriod=600 Jan 09 02:00:13 crc 
Jan 09 02:00:13 crc kubenswrapper[4945]: E0109 02:00:13.729536 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 02:00:14 crc kubenswrapper[4945]: I0109 02:00:14.594472 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" exitCode=0
Jan 09 02:00:14 crc kubenswrapper[4945]: I0109 02:00:14.594535 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713"}
Jan 09 02:00:14 crc kubenswrapper[4945]: I0109 02:00:14.594587 4945 scope.go:117] "RemoveContainer" containerID="f3c991c305a9920c7952603ec6b4b80561265fd71736212f6f87a2434ae7d21a"
Jan 09 02:00:14 crc kubenswrapper[4945]: I0109 02:00:14.595722 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713"
Jan 09 02:00:14 crc kubenswrapper[4945]: E0109 02:00:14.596520 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 02:00:30 crc kubenswrapper[4945]: I0109 02:00:30.046267 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713"
Jan 09 02:00:30 crc kubenswrapper[4945]: E0109 02:00:30.060265 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 02:00:44 crc kubenswrapper[4945]: I0109 02:00:44.854387 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713"
Jan 09 02:00:44 crc kubenswrapper[4945]: E0109 02:00:44.862800 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c"
Jan 09 02:00:46 crc kubenswrapper[4945]: I0109 02:00:46.844496 4945 scope.go:117] "RemoveContainer" containerID="092555a7a8cf0423b416dec48b7eea54b7f4d80695cee1dd4efa1413828babed"
Jan 09 02:00:58 crc kubenswrapper[4945]: I0109 02:00:58.001523 4945 scope.go:117] "RemoveContainer" 
containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:00:58 crc kubenswrapper[4945]: E0109 02:00:58.003212 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.156938 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29465401-7pg7z"] Jan 09 02:01:00 crc kubenswrapper[4945]: E0109 02:01:00.157728 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf2551d-14e5-4276-b68d-67368de2ceee" containerName="collect-profiles" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.157744 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf2551d-14e5-4276-b68d-67368de2ceee" containerName="collect-profiles" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.157985 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cf2551d-14e5-4276-b68d-67368de2ceee" containerName="collect-profiles" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.158706 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29465401-7pg7z" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.172284 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29465401-7pg7z"] Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.301974 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-config-data\") pod \"keystone-cron-29465401-7pg7z\" (UID: \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\") " pod="openstack/keystone-cron-29465401-7pg7z" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.302290 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6pwd\" (UniqueName: \"kubernetes.io/projected/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-kube-api-access-v6pwd\") pod \"keystone-cron-29465401-7pg7z\" (UID: \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\") " pod="openstack/keystone-cron-29465401-7pg7z" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.302375 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-fernet-keys\") pod \"keystone-cron-29465401-7pg7z\" (UID: \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\") " pod="openstack/keystone-cron-29465401-7pg7z" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.302462 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-combined-ca-bundle\") pod \"keystone-cron-29465401-7pg7z\" (UID: \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\") " pod="openstack/keystone-cron-29465401-7pg7z" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.405113 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-combined-ca-bundle\") pod \"keystone-cron-29465401-7pg7z\" (UID: \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\") " pod="openstack/keystone-cron-29465401-7pg7z" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.405248 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-config-data\") pod \"keystone-cron-29465401-7pg7z\" (UID: \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\") " pod="openstack/keystone-cron-29465401-7pg7z" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.405273 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6pwd\" (UniqueName: \"kubernetes.io/projected/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-kube-api-access-v6pwd\") pod \"keystone-cron-29465401-7pg7z\" (UID: \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\") " pod="openstack/keystone-cron-29465401-7pg7z" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.405377 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-fernet-keys\") pod \"keystone-cron-29465401-7pg7z\" (UID: \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\") " pod="openstack/keystone-cron-29465401-7pg7z" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.415816 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-combined-ca-bundle\") pod \"keystone-cron-29465401-7pg7z\" (UID: \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\") " pod="openstack/keystone-cron-29465401-7pg7z" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.419046 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-fernet-keys\") pod \"keystone-cron-29465401-7pg7z\" (UID: \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\") " pod="openstack/keystone-cron-29465401-7pg7z" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.419290 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-config-data\") pod \"keystone-cron-29465401-7pg7z\" (UID: \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\") " pod="openstack/keystone-cron-29465401-7pg7z" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.426637 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6pwd\" (UniqueName: \"kubernetes.io/projected/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-kube-api-access-v6pwd\") pod \"keystone-cron-29465401-7pg7z\" (UID: \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\") " pod="openstack/keystone-cron-29465401-7pg7z" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.495024 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29465401-7pg7z" Jan 09 02:01:00 crc kubenswrapper[4945]: I0109 02:01:00.985725 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29465401-7pg7z"] Jan 09 02:01:01 crc kubenswrapper[4945]: I0109 02:01:01.098912 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29465401-7pg7z" event={"ID":"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6","Type":"ContainerStarted","Data":"b94ad1b4c7079e9386760dc17409b669973e5f2e35b4b24a9106c492f7fc38ee"} Jan 09 02:01:02 crc kubenswrapper[4945]: I0109 02:01:02.112144 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29465401-7pg7z" event={"ID":"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6","Type":"ContainerStarted","Data":"721a971a2c3884b8930d3c49470fa4badc9680599b50750ecca17b2a1949ba7e"} Jan 09 02:01:02 crc kubenswrapper[4945]: I0109 02:01:02.130618 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29465401-7pg7z" podStartSLOduration=2.130595964 podStartE2EDuration="2.130595964s" podCreationTimestamp="2026-01-09 02:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-09 02:01:02.12882622 +0000 UTC m=+9932.439985216" watchObservedRunningTime="2026-01-09 02:01:02.130595964 +0000 UTC m=+9932.441754950" Jan 09 02:01:04 crc kubenswrapper[4945]: I0109 02:01:04.134210 4945 generic.go:334] "Generic (PLEG): container finished" podID="bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6" containerID="721a971a2c3884b8930d3c49470fa4badc9680599b50750ecca17b2a1949ba7e" exitCode=0 Jan 09 02:01:04 crc kubenswrapper[4945]: I0109 02:01:04.134553 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29465401-7pg7z" event={"ID":"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6","Type":"ContainerDied","Data":"721a971a2c3884b8930d3c49470fa4badc9680599b50750ecca17b2a1949ba7e"} Jan 09 02:01:05 crc kubenswrapper[4945]: I0109 02:01:05.700539 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29465401-7pg7z" Jan 09 02:01:05 crc kubenswrapper[4945]: I0109 02:01:05.847375 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-combined-ca-bundle\") pod \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\" (UID: \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\") " Jan 09 02:01:05 crc kubenswrapper[4945]: I0109 02:01:05.847545 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-fernet-keys\") pod \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\" (UID: \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\") " Jan 09 02:01:05 crc kubenswrapper[4945]: I0109 02:01:05.847685 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-config-data\") pod \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\" (UID: \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\") " Jan 09 02:01:05 crc kubenswrapper[4945]: I0109 02:01:05.847772 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6pwd\" (UniqueName: \"kubernetes.io/projected/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-kube-api-access-v6pwd\") pod \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\" (UID: \"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6\") " Jan 09 02:01:05 crc kubenswrapper[4945]: I0109 02:01:05.855789 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6" (UID: "bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 02:01:05 crc kubenswrapper[4945]: I0109 02:01:05.855792 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-kube-api-access-v6pwd" (OuterVolumeSpecName: "kube-api-access-v6pwd") pod "bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6" (UID: "bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6"). InnerVolumeSpecName "kube-api-access-v6pwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 02:01:05 crc kubenswrapper[4945]: I0109 02:01:05.950673 4945 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 09 02:01:05 crc kubenswrapper[4945]: I0109 02:01:05.950705 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6pwd\" (UniqueName: \"kubernetes.io/projected/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-kube-api-access-v6pwd\") on node \"crc\" DevicePath \"\"" Jan 09 02:01:05 crc kubenswrapper[4945]: I0109 02:01:05.976955 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6" (UID: "bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 02:01:06 crc kubenswrapper[4945]: I0109 02:01:06.001545 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-config-data" (OuterVolumeSpecName: "config-data") pod "bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6" (UID: "bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 09 02:01:06 crc kubenswrapper[4945]: I0109 02:01:06.053419 4945 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 09 02:01:06 crc kubenswrapper[4945]: I0109 02:01:06.053451 4945 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 09 02:01:06 crc kubenswrapper[4945]: I0109 02:01:06.169117 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29465401-7pg7z" event={"ID":"bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6","Type":"ContainerDied","Data":"b94ad1b4c7079e9386760dc17409b669973e5f2e35b4b24a9106c492f7fc38ee"} Jan 09 02:01:06 crc kubenswrapper[4945]: I0109 02:01:06.169186 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b94ad1b4c7079e9386760dc17409b669973e5f2e35b4b24a9106c492f7fc38ee" Jan 09 02:01:06 crc kubenswrapper[4945]: I0109 02:01:06.169331 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29465401-7pg7z" Jan 09 02:01:13 crc kubenswrapper[4945]: I0109 02:01:13.001866 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:01:13 crc kubenswrapper[4945]: E0109 02:01:13.003154 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:01:27 crc kubenswrapper[4945]: I0109 02:01:27.001673 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:01:27 crc kubenswrapper[4945]: E0109 02:01:27.002754 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:01:42 crc kubenswrapper[4945]: I0109 02:01:42.001105 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:01:42 crc kubenswrapper[4945]: E0109 02:01:42.002070 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:01:54 crc kubenswrapper[4945]: I0109 02:01:54.000274 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:01:54 crc kubenswrapper[4945]: E0109 02:01:54.001315 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:02:05 crc kubenswrapper[4945]: I0109 02:02:05.001101 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:02:05 crc kubenswrapper[4945]: E0109 02:02:05.001819 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:02:18 crc kubenswrapper[4945]: I0109 02:02:18.000401 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:02:18 crc kubenswrapper[4945]: E0109 02:02:18.001212 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:02:31 crc kubenswrapper[4945]: I0109 02:02:31.000233 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:02:31 crc kubenswrapper[4945]: E0109 02:02:31.001140 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:02:45 crc kubenswrapper[4945]: I0109 02:02:45.002051 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:02:45 crc kubenswrapper[4945]: E0109 02:02:45.003214 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" 
podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:02:57 crc kubenswrapper[4945]: I0109 02:02:57.000450 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:02:57 crc kubenswrapper[4945]: E0109 02:02:57.001147 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:03:08 crc kubenswrapper[4945]: I0109 02:03:08.041195 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_515f90ea-9ad3-4d93-82c0-ccb39b893643/init-config-reloader/0.log" Jan 09 02:03:08 crc kubenswrapper[4945]: I0109 02:03:08.504648 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_515f90ea-9ad3-4d93-82c0-ccb39b893643/init-config-reloader/0.log" Jan 09 02:03:08 crc kubenswrapper[4945]: I0109 02:03:08.532978 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_515f90ea-9ad3-4d93-82c0-ccb39b893643/config-reloader/0.log" Jan 09 02:03:08 crc kubenswrapper[4945]: I0109 02:03:08.555561 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_515f90ea-9ad3-4d93-82c0-ccb39b893643/alertmanager/0.log" Jan 09 02:03:08 crc kubenswrapper[4945]: I0109 02:03:08.726682 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_1d8f820e-0157-4bc4-b675-a96d5a704c07/aodh-api/0.log" Jan 09 02:03:08 crc kubenswrapper[4945]: I0109 02:03:08.751908 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_1d8f820e-0157-4bc4-b675-a96d5a704c07/aodh-listener/0.log" Jan 09 02:03:08 crc kubenswrapper[4945]: I0109 02:03:08.804638 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_1d8f820e-0157-4bc4-b675-a96d5a704c07/aodh-evaluator/0.log" Jan 09 02:03:08 crc kubenswrapper[4945]: I0109 02:03:08.885187 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_1d8f820e-0157-4bc4-b675-a96d5a704c07/aodh-notifier/0.log" Jan 09 02:03:08 crc kubenswrapper[4945]: I0109 02:03:08.971072 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-799f9b6998-2jhb7_d6142446-21fa-43af-b0c7-46ed8f5111c0/barbican-api/0.log" Jan 09 02:03:09 crc kubenswrapper[4945]: I0109 02:03:09.000313 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:03:09 crc kubenswrapper[4945]: E0109 02:03:09.000692 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:03:09 crc kubenswrapper[4945]: I0109 02:03:09.021450 4945 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-799f9b6998-2jhb7_d6142446-21fa-43af-b0c7-46ed8f5111c0/barbican-api-log/0.log" Jan 09 02:03:09 crc kubenswrapper[4945]: I0109 02:03:09.176562 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-77648949c6-std4c_56ac70af-0c59-4ccc-9ca3-732b5bde275a/barbican-keystone-listener/0.log" Jan 09 02:03:09 crc kubenswrapper[4945]: I0109 02:03:09.217636 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-77648949c6-std4c_56ac70af-0c59-4ccc-9ca3-732b5bde275a/barbican-keystone-listener-log/0.log" Jan 09 02:03:09 crc kubenswrapper[4945]: I0109 02:03:09.371900 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-f5db76bd7-2q6ts_d5288278-7bca-479f-8518-6bc622b31f66/barbican-worker/0.log" Jan 09 02:03:09 crc kubenswrapper[4945]: I0109 02:03:09.372324 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-f5db76bd7-2q6ts_d5288278-7bca-479f-8518-6bc622b31f66/barbican-worker-log/0.log" Jan 09 02:03:09 crc kubenswrapper[4945]: I0109 02:03:09.557059 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-qbvtp_1cf89d43-4ef8-44ce-9196-bed933905f35/bootstrap-openstack-openstack-cell1/0.log" Jan 09 02:03:09 crc kubenswrapper[4945]: I0109 02:03:09.621256 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_770e54ce-50a7-4cd5-8be6-f905ed744d17/ceilometer-central-agent/0.log" Jan 09 02:03:09 crc kubenswrapper[4945]: I0109 02:03:09.676846 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_770e54ce-50a7-4cd5-8be6-f905ed744d17/ceilometer-notification-agent/0.log" Jan 09 02:03:09 crc kubenswrapper[4945]: I0109 02:03:09.772860 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_770e54ce-50a7-4cd5-8be6-f905ed744d17/proxy-httpd/0.log" Jan 09 02:03:09 crc kubenswrapper[4945]: I0109 02:03:09.795051 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_770e54ce-50a7-4cd5-8be6-f905ed744d17/sg-core/0.log" Jan 09 02:03:09 crc kubenswrapper[4945]: I0109 02:03:09.917389 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-openstack-openstack-cell1-n7l5w_a1d8b928-7aa9-475e-9da7-152c2c9590a9/ceph-client-openstack-openstack-cell1/0.log" Jan 09 02:03:10 crc kubenswrapper[4945]: I0109 02:03:10.032027 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e6823cec-9eb1-4888-9870-a60c2be2e698/cinder-api/0.log" Jan 09 02:03:10 crc kubenswrapper[4945]: I0109 02:03:10.128343 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e6823cec-9eb1-4888-9870-a60c2be2e698/cinder-api-log/0.log" Jan 09 02:03:10 crc kubenswrapper[4945]: I0109 02:03:10.296781 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3ed788db-6586-4da2-896b-3efbc2ee48a9/probe/0.log" Jan 09 02:03:10 crc kubenswrapper[4945]: I0109 02:03:10.336117 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3ed788db-6586-4da2-896b-3efbc2ee48a9/cinder-backup/0.log" Jan 09 02:03:10 crc kubenswrapper[4945]: I0109 02:03:10.507019 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e920ac41-940e-4b3e-a210-edddd414ec3f/cinder-scheduler/0.log" Jan 09 02:03:10 crc kubenswrapper[4945]: 
I0109 02:03:10.597657 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e920ac41-940e-4b3e-a210-edddd414ec3f/probe/0.log" Jan 09 02:03:10 crc kubenswrapper[4945]: I0109 02:03:10.709197 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7/cinder-volume/0.log" Jan 09 02:03:10 crc kubenswrapper[4945]: I0109 02:03:10.793292 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_a0efa3a6-9dd3-49ec-a33d-c68d8ff474b7/probe/0.log" Jan 09 02:03:10 crc kubenswrapper[4945]: I0109 02:03:10.897412 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-g9b52_66c85b89-d1b3-4657-97e7-df9f3c390638/configure-network-openstack-openstack-cell1/0.log" Jan 09 02:03:11 crc kubenswrapper[4945]: I0109 02:03:11.021060 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-gg6v4_90ecc2d7-7681-462f-b6d8-25eeaaae8e5d/configure-os-openstack-openstack-cell1/0.log" Jan 09 02:03:11 crc kubenswrapper[4945]: I0109 02:03:11.100355 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-b7bc899f-mv8gz_a3643c97-f962-483e-b870-b95122174cbd/init/0.log" Jan 09 02:03:11 crc kubenswrapper[4945]: I0109 02:03:11.373653 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-b7bc899f-mv8gz_a3643c97-f962-483e-b870-b95122174cbd/dnsmasq-dns/0.log" Jan 09 02:03:11 crc kubenswrapper[4945]: I0109 02:03:11.380202 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-b7bc899f-mv8gz_a3643c97-f962-483e-b870-b95122174cbd/init/0.log" Jan 09 02:03:11 crc kubenswrapper[4945]: I0109 02:03:11.418252 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-dvrl7_947b21b1-6853-466c-8115-a0680381f340/download-cache-openstack-openstack-cell1/0.log" Jan 09 02:03:11 crc kubenswrapper[4945]: I0109 02:03:11.590437 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_641d1452-204c-48c0-89f8-b1065d2288ca/glance-httpd/0.log" Jan 09 02:03:11 crc kubenswrapper[4945]: I0109 02:03:11.607071 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_641d1452-204c-48c0-89f8-b1065d2288ca/glance-log/0.log" Jan 09 02:03:11 crc kubenswrapper[4945]: I0109 02:03:11.791985 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_44a2e71f-e372-41f3-b8e3-ba83a769bca6/glance-httpd/0.log" Jan 09 02:03:11 crc kubenswrapper[4945]: I0109 02:03:11.831372 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_44a2e71f-e372-41f3-b8e3-ba83a769bca6/glance-log/0.log" Jan 09 02:03:12 crc kubenswrapper[4945]: I0109 02:03:12.060709 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-7d67bffdcd-94tvm_e5f3554f-eb28-47b5-8974-5d0811b2b49f/heat-api/0.log" Jan 09 02:03:12 crc kubenswrapper[4945]: I0109 02:03:12.091733 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-94594dc99-zmqzt_cbd6a97e-5d89-4301-ad12-96fe5b1ae27e/heat-cfnapi/0.log" Jan 09 02:03:12 crc kubenswrapper[4945]: I0109 02:03:12.123185 4945 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-engine-5dcc459d4-6rx9t_9ef821d4-234b-4c1c-b45e-0b25e6d905c9/heat-engine/0.log" Jan 09 02:03:12 crc kubenswrapper[4945]: I0109 02:03:12.330489 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-9b6dcf455-m4j6c_2c838616-0c22-49a2-86fb-8ecedb6c5bfe/horizon-log/0.log" Jan 09 02:03:12 crc kubenswrapper[4945]: I0109 02:03:12.386374 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-9b6dcf455-m4j6c_2c838616-0c22-49a2-86fb-8ecedb6c5bfe/horizon/0.log" Jan 09 02:03:12 crc kubenswrapper[4945]: I0109 02:03:12.518331 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-t2rj8_2f738e96-1625-4837-a484-f513c96dc31c/install-certs-openstack-openstack-cell1/0.log" Jan 09 02:03:12 crc kubenswrapper[4945]: I0109 02:03:12.731771 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-rn5sr_d2b23121-c02d-4a5a-b40c-8331347f5911/install-os-openstack-openstack-cell1/0.log" Jan 09 02:03:12 crc kubenswrapper[4945]: I0109 02:03:12.934888 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-74478c8c7f-gzflr_79837a34-fd63-45a4-9521-387e72c26b24/keystone-api/0.log" Jan 09 02:03:12 crc kubenswrapper[4945]: I0109 02:03:12.980170 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29465341-bb2bt_b9f78d1d-dc0b-4aac-9898-2c507f6944e1/keystone-cron/0.log" Jan 09 02:03:13 crc kubenswrapper[4945]: I0109 02:03:13.059582 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29465401-7pg7z_bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6/keystone-cron/0.log" Jan 09 02:03:13 crc kubenswrapper[4945]: I0109 02:03:13.126360 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_77b91420-00bf-4b20-9999-52325501b237/kube-state-metrics/0.log" Jan 09 02:03:13 crc kubenswrapper[4945]: I0109 02:03:13.303776 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-w8drn_1352adf5-5c12-4450-90dd-803a8503da11/libvirt-openstack-openstack-cell1/0.log" Jan 09 02:03:13 crc kubenswrapper[4945]: I0109 02:03:13.397139 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_db943e15-5363-4299-85fb-cd9b0805fb86/manila-api-log/0.log" Jan 09 02:03:13 crc kubenswrapper[4945]: I0109 02:03:13.461795 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_db943e15-5363-4299-85fb-cd9b0805fb86/manila-api/0.log" Jan 09 02:03:13 crc kubenswrapper[4945]: I0109 02:03:13.603204 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_3f17ceb7-ed77-4912-aeaf-025c32f52c78/manila-scheduler/0.log" Jan 09 02:03:13 crc kubenswrapper[4945]: I0109 02:03:13.645819 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_3f17ceb7-ed77-4912-aeaf-025c32f52c78/probe/0.log" Jan 09 02:03:13 crc kubenswrapper[4945]: I0109 02:03:13.671602 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_fcb36454-abb9-473c-a184-f7b89cc73f6b/manila-share/0.log" Jan 09 02:03:13 crc kubenswrapper[4945]: I0109 02:03:13.782116 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_fcb36454-abb9-473c-a184-f7b89cc73f6b/probe/0.log" Jan 09 02:03:14 crc kubenswrapper[4945]: I0109 02:03:14.054406 4945 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-9b48855d5-7fhb5_0ba7a8cb-fec8-4b0d-baab-594bc5d674dd/neutron-httpd/0.log" Jan 09 02:03:14 crc kubenswrapper[4945]: I0109 02:03:14.055700 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-9b48855d5-7fhb5_0ba7a8cb-fec8-4b0d-baab-594bc5d674dd/neutron-api/0.log" Jan 09 02:03:14 crc kubenswrapper[4945]: I0109 02:03:14.205381 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-xh8nb_8c991408-c46e-4614-a376-2d459d7bb888/neutron-dhcp-openstack-openstack-cell1/0.log" Jan 09 02:03:14 crc kubenswrapper[4945]: I0109 02:03:14.357631 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-f9588_386d3c0c-3552-4efb-8581-1a39c6f992dd/neutron-metadata-openstack-openstack-cell1/0.log" Jan 09 02:03:14 crc kubenswrapper[4945]: I0109 02:03:14.499064 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-qwz6z_90195899-5a3d-432e-822f-74f2ca65f0b3/neutron-sriov-openstack-openstack-cell1/0.log" Jan 09 02:03:14 crc kubenswrapper[4945]: I0109 02:03:14.699946 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3a3d016b-214a-4ebf-bd39-f0929cd84fe4/nova-api-api/0.log" Jan 09 02:03:14 crc kubenswrapper[4945]: I0109 02:03:14.823791 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_3a3d016b-214a-4ebf-bd39-f0929cd84fe4/nova-api-log/0.log" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.068618 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b2688df2-31d6-4284-b25b-4f17a2ba06ac/nova-cell0-conductor-conductor/0.log" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.084379 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_94dbefda-40cf-4cea-9971-6fabbc79e1f3/nova-cell1-conductor-conductor/0.log" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.311331 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_076c2b0b-d880-4de5-8e11-796837092802/nova-cell1-novncproxy-novncproxy/0.log" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.364901 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4wcwt"] Jan 09 02:03:15 crc kubenswrapper[4945]: E0109 02:03:15.365920 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6" containerName="keystone-cron" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.366020 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6" containerName="keystone-cron" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.366402 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc4487ab-a34c-41d6-83c7-a9d3fcd3f7d6" containerName="keystone-cron" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.372291 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4wcwt" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.375635 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellmjdkt_ac762254-3462-449f-b07c-5cea722eb39f/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.393965 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wcwt"] Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.545649 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-utilities\") pod \"certified-operators-4wcwt\" (UID: \"dad139e2-ce6e-4bcf-84f8-bc6976d20a28\") " pod="openshift-marketplace/certified-operators-4wcwt" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.546746 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xtmv\" (UniqueName: \"kubernetes.io/projected/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-kube-api-access-6xtmv\") pod \"certified-operators-4wcwt\" (UID: \"dad139e2-ce6e-4bcf-84f8-bc6976d20a28\") " pod="openshift-marketplace/certified-operators-4wcwt" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.546961 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-catalog-content\") pod \"certified-operators-4wcwt\" (UID: \"dad139e2-ce6e-4bcf-84f8-bc6976d20a28\") " pod="openshift-marketplace/certified-operators-4wcwt" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.553373 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-dcncv_5a0cc78a-1390-4815-9335-e9f030e50d32/nova-cell1-openstack-openstack-cell1/0.log" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.649376 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-catalog-content\") pod \"certified-operators-4wcwt\" (UID: \"dad139e2-ce6e-4bcf-84f8-bc6976d20a28\") " pod="openshift-marketplace/certified-operators-4wcwt" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.649663 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-utilities\") pod \"certified-operators-4wcwt\" (UID: \"dad139e2-ce6e-4bcf-84f8-bc6976d20a28\") " pod="openshift-marketplace/certified-operators-4wcwt" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.649776 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xtmv\" (UniqueName: \"kubernetes.io/projected/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-kube-api-access-6xtmv\") pod \"certified-operators-4wcwt\" (UID: \"dad139e2-ce6e-4bcf-84f8-bc6976d20a28\") " pod="openshift-marketplace/certified-operators-4wcwt" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.650597 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-catalog-content\") pod \"certified-operators-4wcwt\" (UID: 
\"dad139e2-ce6e-4bcf-84f8-bc6976d20a28\") " pod="openshift-marketplace/certified-operators-4wcwt" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.650884 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-utilities\") pod \"certified-operators-4wcwt\" (UID: \"dad139e2-ce6e-4bcf-84f8-bc6976d20a28\") " pod="openshift-marketplace/certified-operators-4wcwt" Jan 09 02:03:15 crc kubenswrapper[4945]: I0109 02:03:15.769863 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xtmv\" (UniqueName: \"kubernetes.io/projected/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-kube-api-access-6xtmv\") pod \"certified-operators-4wcwt\" (UID: \"dad139e2-ce6e-4bcf-84f8-bc6976d20a28\") " pod="openshift-marketplace/certified-operators-4wcwt" Jan 09 02:03:16 crc kubenswrapper[4945]: I0109 02:03:16.007581 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wcwt" Jan 09 02:03:16 crc kubenswrapper[4945]: I0109 02:03:16.287086 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_1c59bd8a-5b8f-4539-b02c-b73f9664a967/nova-metadata-log/0.log" Jan 09 02:03:16 crc kubenswrapper[4945]: I0109 02:03:16.389350 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_1c59bd8a-5b8f-4539-b02c-b73f9664a967/nova-metadata-metadata/0.log" Jan 09 02:03:16 crc kubenswrapper[4945]: I0109 02:03:16.548311 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wcwt"] Jan 09 02:03:16 crc kubenswrapper[4945]: I0109 02:03:16.602895 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wcwt" event={"ID":"dad139e2-ce6e-4bcf-84f8-bc6976d20a28","Type":"ContainerStarted","Data":"8f1563410e79eb74344613be0bff543f0f1a0b8896632ce00f716e6ddfc1cb3f"} Jan 09 02:03:16 crc kubenswrapper[4945]: I0109 02:03:16.603488 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-ff95db4fd-q2s54_dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca/init/0.log" Jan 09 02:03:16 crc kubenswrapper[4945]: I0109 02:03:16.607877 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_8be14dba-24e3-47fe-96ef-6098f1910c8a/nova-scheduler-scheduler/0.log" Jan 09 02:03:17 crc kubenswrapper[4945]: I0109 02:03:17.039103 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-ff95db4fd-q2s54_dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca/init/0.log" Jan 09 02:03:17 crc kubenswrapper[4945]: I0109 02:03:17.276038 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-ff95db4fd-q2s54_dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca/octavia-api/0.log" Jan 09 02:03:17 crc kubenswrapper[4945]: I0109 02:03:17.613234 4945 generic.go:334] "Generic (PLEG): container finished" podID="dad139e2-ce6e-4bcf-84f8-bc6976d20a28" containerID="82c4e04b8698f28c0fa51b052474acb04538c355b704e90517232f835e1bc970" exitCode=0 Jan 09 02:03:17 crc kubenswrapper[4945]: I0109 02:03:17.613284 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wcwt" event={"ID":"dad139e2-ce6e-4bcf-84f8-bc6976d20a28","Type":"ContainerDied","Data":"82c4e04b8698f28c0fa51b052474acb04538c355b704e90517232f835e1bc970"} Jan 09 02:03:17 crc kubenswrapper[4945]: I0109 02:03:17.617719 4945 provider.go:102] Refreshing cache 
for provider: *credentialprovider.defaultDockerConfigProvider Jan 09 02:03:17 crc kubenswrapper[4945]: I0109 02:03:17.618104 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-ff95db4fd-q2s54_dcecf9e4-c4c6-4f3f-86ec-dea0483b7cca/octavia-api-provider-agent/0.log" Jan 09 02:03:17 crc kubenswrapper[4945]: I0109 02:03:17.777692 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-tgb9d_8d60e34d-3237-4db9-86c1-b9b7ade05a0c/init/0.log" Jan 09 02:03:17 crc kubenswrapper[4945]: I0109 02:03:17.955411 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-tgb9d_8d60e34d-3237-4db9-86c1-b9b7ade05a0c/init/0.log" Jan 09 02:03:18 crc kubenswrapper[4945]: I0109 02:03:18.154774 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-9ql7c_1208c3c7-070c-4387-9dc7-fbfda06186fa/init/0.log" Jan 09 02:03:18 crc kubenswrapper[4945]: I0109 02:03:18.255821 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-tgb9d_8d60e34d-3237-4db9-86c1-b9b7ade05a0c/octavia-healthmanager/0.log" Jan 09 02:03:18 crc kubenswrapper[4945]: I0109 02:03:18.628631 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wcwt" event={"ID":"dad139e2-ce6e-4bcf-84f8-bc6976d20a28","Type":"ContainerStarted","Data":"73c5f55f3beedb6c59b1f469e8fdc70120a50862e92d78e228a00f26ad54022e"} Jan 09 02:03:18 crc kubenswrapper[4945]: I0109 02:03:18.655623 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-9ql7c_1208c3c7-070c-4387-9dc7-fbfda06186fa/octavia-housekeeping/0.log" Jan 09 02:03:18 crc kubenswrapper[4945]: I0109 02:03:18.697215 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-9ql7c_1208c3c7-070c-4387-9dc7-fbfda06186fa/init/0.log" Jan 09 02:03:18 crc kubenswrapper[4945]: I0109 02:03:18.940849 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-s964b_e00cd5b2-f064-4088-8d6a-7ad028fc7147/init/0.log" Jan 09 02:03:19 crc kubenswrapper[4945]: I0109 02:03:19.142085 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-s964b_e00cd5b2-f064-4088-8d6a-7ad028fc7147/init/0.log" Jan 09 02:03:19 crc kubenswrapper[4945]: I0109 02:03:19.174971 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-s964b_e00cd5b2-f064-4088-8d6a-7ad028fc7147/octavia-rsyslog/0.log" Jan 09 02:03:19 crc kubenswrapper[4945]: I0109 02:03:19.394144 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-8mx9s_39f06cf9-0b65-4c87-958c-8d14482890ac/init/0.log" Jan 09 02:03:19 crc kubenswrapper[4945]: I0109 02:03:19.819174 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-8mx9s_39f06cf9-0b65-4c87-958c-8d14482890ac/init/0.log" Jan 09 02:03:19 crc kubenswrapper[4945]: I0109 02:03:19.883981 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5a3f7c56-bed3-4e26-93b3-17627d2066bb/mysql-bootstrap/0.log" Jan 09 02:03:19 crc kubenswrapper[4945]: I0109 02:03:19.946279 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-8mx9s_39f06cf9-0b65-4c87-958c-8d14482890ac/octavia-worker/0.log" Jan 09 02:03:20 crc kubenswrapper[4945]: I0109 02:03:20.093813 4945 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_5a3f7c56-bed3-4e26-93b3-17627d2066bb/mysql-bootstrap/0.log" Jan 09 02:03:20 crc kubenswrapper[4945]: I0109 02:03:20.336574 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5a3f7c56-bed3-4e26-93b3-17627d2066bb/galera/0.log" Jan 09 02:03:20 crc kubenswrapper[4945]: I0109 02:03:20.450812 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_65eea646-4dc8-44a1-b394-3d4ce08867a6/mysql-bootstrap/0.log" Jan 09 02:03:20 crc kubenswrapper[4945]: I0109 02:03:20.550127 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_65eea646-4dc8-44a1-b394-3d4ce08867a6/mysql-bootstrap/0.log" Jan 09 02:03:20 crc kubenswrapper[4945]: I0109 02:03:20.599828 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_65eea646-4dc8-44a1-b394-3d4ce08867a6/galera/0.log" Jan 09 02:03:20 crc kubenswrapper[4945]: I0109 02:03:20.651037 4945 generic.go:334] "Generic (PLEG): container finished" podID="dad139e2-ce6e-4bcf-84f8-bc6976d20a28" containerID="73c5f55f3beedb6c59b1f469e8fdc70120a50862e92d78e228a00f26ad54022e" exitCode=0 Jan 09 02:03:20 crc kubenswrapper[4945]: I0109 02:03:20.651097 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wcwt" event={"ID":"dad139e2-ce6e-4bcf-84f8-bc6976d20a28","Type":"ContainerDied","Data":"73c5f55f3beedb6c59b1f469e8fdc70120a50862e92d78e228a00f26ad54022e"} Jan 09 02:03:20 crc kubenswrapper[4945]: I0109 02:03:20.685136 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f161a9b2-86ed-4cd1-9def-3a9c8736b302/openstackclient/0.log" Jan 09 02:03:20 crc kubenswrapper[4945]: I0109 02:03:20.839969 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-l579m_bf226c11-b1c7-48c1-938e-2f6e96678644/ovn-controller/0.log" Jan 09 02:03:21 crc kubenswrapper[4945]: I0109 02:03:21.015116 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xvwk7_873fe3a7-08d5-4c2f-866b-da5d92ee950a/openstack-network-exporter/0.log" Jan 09 02:03:21 crc kubenswrapper[4945]: I0109 02:03:21.097022 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ht5ml_6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d/ovsdb-server-init/0.log" Jan 09 02:03:21 crc kubenswrapper[4945]: I0109 02:03:21.280584 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ht5ml_6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d/ovsdb-server/0.log" Jan 09 02:03:21 crc kubenswrapper[4945]: I0109 02:03:21.579836 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ht5ml_6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d/ovsdb-server-init/0.log" Jan 09 02:03:21 crc kubenswrapper[4945]: I0109 02:03:21.651537 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ht5ml_6b8f4cd7-57cf-4ab5-a1a7-55e2cfc3be1d/ovs-vswitchd/0.log" Jan 09 02:03:21 crc kubenswrapper[4945]: I0109 02:03:21.669346 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wcwt" event={"ID":"dad139e2-ce6e-4bcf-84f8-bc6976d20a28","Type":"ContainerStarted","Data":"d5921dc7a02717409578df4d030bfc826a84f46cd95ed0a8de386854864d020a"} Jan 09 02:03:21 crc kubenswrapper[4945]: I0109 02:03:21.693650 4945 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/certified-operators-4wcwt" podStartSLOduration=3.136070766 podStartE2EDuration="6.693629s" podCreationTimestamp="2026-01-09 02:03:15 +0000 UTC" firstStartedPulling="2026-01-09 02:03:17.617458405 +0000 UTC m=+10067.928617351" lastFinishedPulling="2026-01-09 02:03:21.175016639 +0000 UTC m=+10071.486175585" observedRunningTime="2026-01-09 02:03:21.682854854 +0000 UTC m=+10071.994013790" watchObservedRunningTime="2026-01-09 02:03:21.693629 +0000 UTC m=+10072.004787946" Jan 09 02:03:21 crc kubenswrapper[4945]: I0109 02:03:21.768584 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8845476d-879e-4e67-b913-4fd5c1c8f8cc/openstack-network-exporter/0.log" Jan 09 02:03:21 crc kubenswrapper[4945]: I0109 02:03:21.861351 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8845476d-879e-4e67-b913-4fd5c1c8f8cc/ovn-northd/0.log" Jan 09 02:03:22 crc kubenswrapper[4945]: I0109 02:03:22.001919 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:03:22 crc kubenswrapper[4945]: E0109 02:03:22.002651 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:03:22 crc kubenswrapper[4945]: I0109 02:03:22.117507 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-d5mgp_56c39c1b-9a6d-43c7-8d83-3d6191abf210/ovn-openstack-openstack-cell1/0.log" Jan 09 02:03:22 crc kubenswrapper[4945]: I0109 02:03:22.193255 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_88e271bf-a6e7-4db1-9f1c-7d3260cbeb29/openstack-network-exporter/0.log" Jan 09 02:03:22 crc kubenswrapper[4945]: I0109 02:03:22.304206 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_88e271bf-a6e7-4db1-9f1c-7d3260cbeb29/ovsdbserver-nb/0.log" Jan 09 02:03:22 crc kubenswrapper[4945]: I0109 02:03:22.434019 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_8a88c188-1123-4040-8ad7-4622fcbb1715/openstack-network-exporter/0.log" Jan 09 02:03:22 crc kubenswrapper[4945]: I0109 02:03:22.456837 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_8a88c188-1123-4040-8ad7-4622fcbb1715/ovsdbserver-nb/0.log" Jan 09 02:03:22 crc kubenswrapper[4945]: I0109 02:03:22.669506 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_766c4239-387b-46ba-9cc8-933d55f0a636/ovsdbserver-nb/0.log" Jan 09 02:03:22 crc kubenswrapper[4945]: I0109 02:03:22.730962 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_766c4239-387b-46ba-9cc8-933d55f0a636/openstack-network-exporter/0.log" Jan 09 02:03:22 crc kubenswrapper[4945]: I0109 02:03:22.886253 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0ef85a57-00f5-483f-8641-a0cfb51b4045/openstack-network-exporter/0.log" Jan 09 02:03:22 crc kubenswrapper[4945]: I0109 02:03:22.898930 4945 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_0ef85a57-00f5-483f-8641-a0cfb51b4045/ovsdbserver-sb/0.log" Jan 09 02:03:22 crc kubenswrapper[4945]: I0109 02:03:22.951598 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_ae9bb09d-4258-4dae-b69e-28ea7f437c63/openstack-network-exporter/0.log" Jan 09 02:03:23 crc kubenswrapper[4945]: I0109 02:03:23.109473 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_ae9bb09d-4258-4dae-b69e-28ea7f437c63/ovsdbserver-sb/0.log" Jan 09 02:03:23 crc kubenswrapper[4945]: I0109 02:03:23.139253 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_99f4dd38-74e0-43f9-951e-39b35d884b9e/openstack-network-exporter/0.log" Jan 09 02:03:23 crc kubenswrapper[4945]: I0109 02:03:23.166368 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_99f4dd38-74e0-43f9-951e-39b35d884b9e/ovsdbserver-sb/0.log" Jan 09 02:03:23 crc kubenswrapper[4945]: I0109 02:03:23.443756 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64997f759b-plbfx_de5eb7fd-a844-4806-92fb-b8d37736abee/placement-api/0.log" Jan 09 02:03:23 crc kubenswrapper[4945]: I0109 02:03:23.446236 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64997f759b-plbfx_de5eb7fd-a844-4806-92fb-b8d37736abee/placement-log/0.log" Jan 09 02:03:23 crc kubenswrapper[4945]: I0109 02:03:23.630891 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_79652c70-3275-4435-ba88-786cb8beaf4e/memcached/0.log" Jan 09 02:03:23 crc kubenswrapper[4945]: I0109 02:03:23.668108 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-ckcg44_2cc86b17-cd47-4048-8171-5c4d6bbc3ea8/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Jan 09 02:03:23 crc kubenswrapper[4945]: I0109 02:03:23.730736 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_124ecf7f-2df8-4d30-82e9-b393785c7786/init-config-reloader/0.log" Jan 09 02:03:23 crc kubenswrapper[4945]: I0109 02:03:23.847959 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_124ecf7f-2df8-4d30-82e9-b393785c7786/init-config-reloader/0.log" Jan 09 02:03:23 crc kubenswrapper[4945]: I0109 02:03:23.886762 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_124ecf7f-2df8-4d30-82e9-b393785c7786/config-reloader/0.log" Jan 09 02:03:23 crc kubenswrapper[4945]: I0109 02:03:23.931805 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_124ecf7f-2df8-4d30-82e9-b393785c7786/thanos-sidecar/0.log" Jan 09 02:03:23 crc kubenswrapper[4945]: I0109 02:03:23.936434 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_124ecf7f-2df8-4d30-82e9-b393785c7786/prometheus/0.log" Jan 09 02:03:24 crc kubenswrapper[4945]: I0109 02:03:24.093665 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2a5d3098-d52c-489b-8a1d-64ac1aed714c/setup-container/0.log" Jan 09 02:03:24 crc kubenswrapper[4945]: I0109 02:03:24.235426 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2a5d3098-d52c-489b-8a1d-64ac1aed714c/setup-container/0.log" Jan 09 02:03:24 crc kubenswrapper[4945]: I0109 
02:03:24.298611 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2a5d3098-d52c-489b-8a1d-64ac1aed714c/rabbitmq/0.log" Jan 09 02:03:24 crc kubenswrapper[4945]: I0109 02:03:24.314927 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cc1824e6-f578-45fb-9536-91976e922955/setup-container/0.log" Jan 09 02:03:24 crc kubenswrapper[4945]: I0109 02:03:24.469073 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cc1824e6-f578-45fb-9536-91976e922955/setup-container/0.log" Jan 09 02:03:24 crc kubenswrapper[4945]: I0109 02:03:24.525176 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-szs94_f686f621-ce60-4be5-9671-2a9d2a6c6990/reboot-os-openstack-openstack-cell1/0.log" Jan 09 02:03:24 crc kubenswrapper[4945]: I0109 02:03:24.553339 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cc1824e6-f578-45fb-9536-91976e922955/rabbitmq/0.log" Jan 09 02:03:24 crc kubenswrapper[4945]: I0109 02:03:24.704619 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-qfkwk_de4ab205-3743-46f1-8922-cfd9c5e6f54d/run-os-openstack-openstack-cell1/0.log" Jan 09 02:03:24 crc kubenswrapper[4945]: I0109 02:03:24.786058 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-rkcjd_75d9260e-736f-4062-a544-5dc637a3a7da/ssh-known-hosts-openstack/0.log" Jan 09 02:03:24 crc kubenswrapper[4945]: I0109 02:03:24.955672 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-6jhb4_cf0982d8-b6fd-4bfc-b8d5-258b61aceee2/telemetry-openstack-openstack-cell1/0.log" Jan 09 02:03:25 crc kubenswrapper[4945]: I0109 02:03:25.041389 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-nrwmx_fe7e1ae9-abb7-4f68-834a-5e4245dd2374/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Jan 09 02:03:25 crc kubenswrapper[4945]: I0109 02:03:25.226268 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-hpvhk_7ca909c2-0c38-4762-aaa7-f15abf2e5548/validate-network-openstack-openstack-cell1/0.log" Jan 09 02:03:26 crc kubenswrapper[4945]: I0109 02:03:26.014019 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4wcwt" Jan 09 02:03:26 crc kubenswrapper[4945]: I0109 02:03:26.014358 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4wcwt" Jan 09 02:03:26 crc kubenswrapper[4945]: I0109 02:03:26.066806 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4wcwt" Jan 09 02:03:26 crc kubenswrapper[4945]: I0109 02:03:26.769708 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4wcwt" Jan 09 02:03:27 crc kubenswrapper[4945]: I0109 02:03:27.127695 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4wcwt"] Jan 09 02:03:28 crc kubenswrapper[4945]: I0109 02:03:28.739988 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4wcwt" podUID="dad139e2-ce6e-4bcf-84f8-bc6976d20a28" 
containerName="registry-server" containerID="cri-o://d5921dc7a02717409578df4d030bfc826a84f46cd95ed0a8de386854864d020a" gracePeriod=2 Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.263797 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wcwt" Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.362760 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-utilities\") pod \"dad139e2-ce6e-4bcf-84f8-bc6976d20a28\" (UID: \"dad139e2-ce6e-4bcf-84f8-bc6976d20a28\") " Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.363019 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xtmv\" (UniqueName: \"kubernetes.io/projected/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-kube-api-access-6xtmv\") pod \"dad139e2-ce6e-4bcf-84f8-bc6976d20a28\" (UID: \"dad139e2-ce6e-4bcf-84f8-bc6976d20a28\") " Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.363057 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-catalog-content\") pod \"dad139e2-ce6e-4bcf-84f8-bc6976d20a28\" (UID: \"dad139e2-ce6e-4bcf-84f8-bc6976d20a28\") " Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.363631 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-utilities" (OuterVolumeSpecName: "utilities") pod "dad139e2-ce6e-4bcf-84f8-bc6976d20a28" (UID: "dad139e2-ce6e-4bcf-84f8-bc6976d20a28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.364085 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.368258 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-kube-api-access-6xtmv" (OuterVolumeSpecName: "kube-api-access-6xtmv") pod "dad139e2-ce6e-4bcf-84f8-bc6976d20a28" (UID: "dad139e2-ce6e-4bcf-84f8-bc6976d20a28"). InnerVolumeSpecName "kube-api-access-6xtmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.409140 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dad139e2-ce6e-4bcf-84f8-bc6976d20a28" (UID: "dad139e2-ce6e-4bcf-84f8-bc6976d20a28"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.466141 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xtmv\" (UniqueName: \"kubernetes.io/projected/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-kube-api-access-6xtmv\") on node \"crc\" DevicePath \"\"" Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.466182 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dad139e2-ce6e-4bcf-84f8-bc6976d20a28-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.751549 4945 generic.go:334] "Generic (PLEG): container finished" podID="dad139e2-ce6e-4bcf-84f8-bc6976d20a28" containerID="d5921dc7a02717409578df4d030bfc826a84f46cd95ed0a8de386854864d020a" exitCode=0 Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.751594 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wcwt" event={"ID":"dad139e2-ce6e-4bcf-84f8-bc6976d20a28","Type":"ContainerDied","Data":"d5921dc7a02717409578df4d030bfc826a84f46cd95ed0a8de386854864d020a"} Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.751618 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wcwt" Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.751661 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wcwt" event={"ID":"dad139e2-ce6e-4bcf-84f8-bc6976d20a28","Type":"ContainerDied","Data":"8f1563410e79eb74344613be0bff543f0f1a0b8896632ce00f716e6ddfc1cb3f"} Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.751682 4945 scope.go:117] "RemoveContainer" containerID="d5921dc7a02717409578df4d030bfc826a84f46cd95ed0a8de386854864d020a" Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.780362 4945 scope.go:117] "RemoveContainer" containerID="73c5f55f3beedb6c59b1f469e8fdc70120a50862e92d78e228a00f26ad54022e" Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.820743 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4wcwt"] Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.831081 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4wcwt"] Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.832253 4945 scope.go:117] "RemoveContainer" containerID="82c4e04b8698f28c0fa51b052474acb04538c355b704e90517232f835e1bc970" Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.867143 4945 scope.go:117] "RemoveContainer" containerID="d5921dc7a02717409578df4d030bfc826a84f46cd95ed0a8de386854864d020a" Jan 09 02:03:29 crc kubenswrapper[4945]: E0109 02:03:29.869589 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5921dc7a02717409578df4d030bfc826a84f46cd95ed0a8de386854864d020a\": container with ID starting with d5921dc7a02717409578df4d030bfc826a84f46cd95ed0a8de386854864d020a not found: ID does not exist" containerID="d5921dc7a02717409578df4d030bfc826a84f46cd95ed0a8de386854864d020a" Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.869640 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5921dc7a02717409578df4d030bfc826a84f46cd95ed0a8de386854864d020a"} err="failed to get container status 
\"d5921dc7a02717409578df4d030bfc826a84f46cd95ed0a8de386854864d020a\": rpc error: code = NotFound desc = could not find container \"d5921dc7a02717409578df4d030bfc826a84f46cd95ed0a8de386854864d020a\": container with ID starting with d5921dc7a02717409578df4d030bfc826a84f46cd95ed0a8de386854864d020a not found: ID does not exist" Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.869686 4945 scope.go:117] "RemoveContainer" containerID="73c5f55f3beedb6c59b1f469e8fdc70120a50862e92d78e228a00f26ad54022e" Jan 09 02:03:29 crc kubenswrapper[4945]: E0109 02:03:29.870252 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73c5f55f3beedb6c59b1f469e8fdc70120a50862e92d78e228a00f26ad54022e\": container with ID starting with 73c5f55f3beedb6c59b1f469e8fdc70120a50862e92d78e228a00f26ad54022e not found: ID does not exist" containerID="73c5f55f3beedb6c59b1f469e8fdc70120a50862e92d78e228a00f26ad54022e" Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.870287 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73c5f55f3beedb6c59b1f469e8fdc70120a50862e92d78e228a00f26ad54022e"} err="failed to get container status \"73c5f55f3beedb6c59b1f469e8fdc70120a50862e92d78e228a00f26ad54022e\": rpc error: code = NotFound desc = could not find container \"73c5f55f3beedb6c59b1f469e8fdc70120a50862e92d78e228a00f26ad54022e\": container with ID starting with 73c5f55f3beedb6c59b1f469e8fdc70120a50862e92d78e228a00f26ad54022e not found: ID does not exist" Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.870308 4945 scope.go:117] "RemoveContainer" containerID="82c4e04b8698f28c0fa51b052474acb04538c355b704e90517232f835e1bc970" Jan 09 02:03:29 crc kubenswrapper[4945]: E0109 02:03:29.870592 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82c4e04b8698f28c0fa51b052474acb04538c355b704e90517232f835e1bc970\": container with ID starting with 82c4e04b8698f28c0fa51b052474acb04538c355b704e90517232f835e1bc970 not found: ID does not exist" containerID="82c4e04b8698f28c0fa51b052474acb04538c355b704e90517232f835e1bc970" Jan 09 02:03:29 crc kubenswrapper[4945]: I0109 02:03:29.870636 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c4e04b8698f28c0fa51b052474acb04538c355b704e90517232f835e1bc970"} err="failed to get container status \"82c4e04b8698f28c0fa51b052474acb04538c355b704e90517232f835e1bc970\": rpc error: code = NotFound desc = could not find container \"82c4e04b8698f28c0fa51b052474acb04538c355b704e90517232f835e1bc970\": container with ID starting with 82c4e04b8698f28c0fa51b052474acb04538c355b704e90517232f835e1bc970 not found: ID does not exist" Jan 09 02:03:29 crc kubenswrapper[4945]: E0109 02:03:29.900501 4945 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddad139e2_ce6e_4bcf_84f8_bc6976d20a28.slice/crio-8f1563410e79eb74344613be0bff543f0f1a0b8896632ce00f716e6ddfc1cb3f\": RecentStats: unable to find data in memory cache]" Jan 09 02:03:30 crc kubenswrapper[4945]: I0109 02:03:30.051387 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad139e2-ce6e-4bcf-84f8-bc6976d20a28" path="/var/lib/kubelet/pods/dad139e2-ce6e-4bcf-84f8-bc6976d20a28/volumes" Jan 09 02:03:37 crc kubenswrapper[4945]: I0109 02:03:37.001236 4945 scope.go:117] 
"RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:03:37 crc kubenswrapper[4945]: E0109 02:03:37.002148 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:03:48 crc kubenswrapper[4945]: I0109 02:03:48.867454 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2_9200d52e-875b-4b26-b67b-7515bd99f30a/util/0.log" Jan 09 02:03:49 crc kubenswrapper[4945]: I0109 02:03:49.001489 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:03:49 crc kubenswrapper[4945]: E0109 02:03:49.002053 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:03:49 crc kubenswrapper[4945]: I0109 02:03:49.061284 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2_9200d52e-875b-4b26-b67b-7515bd99f30a/pull/0.log" Jan 09 02:03:49 crc kubenswrapper[4945]: I0109 02:03:49.086108 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2_9200d52e-875b-4b26-b67b-7515bd99f30a/util/0.log" Jan 09 02:03:49 crc kubenswrapper[4945]: I0109 02:03:49.125859 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2_9200d52e-875b-4b26-b67b-7515bd99f30a/pull/0.log" Jan 09 02:03:49 crc kubenswrapper[4945]: I0109 02:03:49.262522 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2_9200d52e-875b-4b26-b67b-7515bd99f30a/util/0.log" Jan 09 02:03:49 crc kubenswrapper[4945]: I0109 02:03:49.280291 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2_9200d52e-875b-4b26-b67b-7515bd99f30a/pull/0.log" Jan 09 02:03:49 crc kubenswrapper[4945]: I0109 02:03:49.305066 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_af6a20fa9b494696ff7193b5917420aa8c46f58a2803d95282ee27edd8ps5k2_9200d52e-875b-4b26-b67b-7515bd99f30a/extract/0.log" Jan 09 02:03:49 crc kubenswrapper[4945]: I0109 02:03:49.541435 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-qwj2j_1604a21a-39a8-4c27-886f-dd74a9c6ed92/manager/0.log" Jan 09 02:03:49 crc kubenswrapper[4945]: I0109 02:03:49.617021 4945 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-xvn4t_18520dae-623d-499b-95c6-96b4de8d9bf4/manager/0.log" Jan 09 02:03:49 crc kubenswrapper[4945]: I0109 02:03:49.744955 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-k72k4_90070b95-ae61-4ae4-b6b4-a4436fe457ef/manager/0.log" Jan 09 02:03:49 crc kubenswrapper[4945]: I0109 02:03:49.931120 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7b549fc966-9t9tj_f9869afd-afb1-4974-9b35-61c60e107d86/manager/0.log" Jan 09 02:03:50 crc kubenswrapper[4945]: I0109 02:03:50.032223 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-hnk29_c32027c9-9810-4f56-afaf-b680d5baed3c/manager/0.log" Jan 09 02:03:50 crc kubenswrapper[4945]: I0109 02:03:50.132766 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-gh5xr_b9560bc6-7245-4df3-8daa-19f79d9d3d12/manager/0.log" Jan 09 02:03:50 crc kubenswrapper[4945]: I0109 02:03:50.371394 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-fsjwc_18eb8b42-2595-4104-aaf4-005fda7ded69/manager/0.log" Jan 09 02:03:50 crc kubenswrapper[4945]: I0109 02:03:50.643528 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-568985c78-wbhqw_750dd75e-3bcc-4490-a77b-0e759b74b760/manager/0.log" Jan 09 02:03:50 crc kubenswrapper[4945]: I0109 02:03:50.846427 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-v5nq9_f50a75b2-71ab-49c6-b184-f630dbfd9cc0/manager/0.log" Jan 09 02:03:51 crc kubenswrapper[4945]: I0109 02:03:51.163882 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-npd8w_7c55b3d4-07b6-45cd-8216-90e88ffe9e59/manager/0.log" Jan 09 02:03:51 crc kubenswrapper[4945]: I0109 02:03:51.314613 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-48rg6_3c233910-81d5-42fb-8fe9-af3f45260c72/manager/0.log" Jan 09 02:03:51 crc kubenswrapper[4945]: I0109 02:03:51.452708 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-q7lmc_77d96cd2-5140-418c-8c4b-0789ccd534e1/manager/0.log" Jan 09 02:03:51 crc kubenswrapper[4945]: I0109 02:03:51.691095 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-f6n8j_f1e9bdc2-26c3-4304-8af5-5423dccf220e/manager/0.log" Jan 09 02:03:51 crc kubenswrapper[4945]: I0109 02:03:51.714863 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-xnlp6_1dfe9903-3eb9-4c02-a453-fe2a8314d79b/manager/0.log" Jan 09 02:03:51 crc kubenswrapper[4945]: I0109 02:03:51.866578 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd7qpr7g_cd1e4942-f561-478c-b456-93d8886d0a31/manager/0.log" Jan 09 02:03:52 crc kubenswrapper[4945]: I0109 02:03:52.277574 4945 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-q987z_bd224d58-5cf6-4728-bf36-4676b288ec46/registry-server/0.log" Jan 09 02:03:52 crc kubenswrapper[4945]: I0109 02:03:52.407779 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7b756b7698-j2d2b_1ccf42cb-84a0-47f7-b9ec-c9d777e1ca31/operator/0.log" Jan 09 02:03:52 crc kubenswrapper[4945]: I0109 02:03:52.619490 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-4qb9v_0b0d4e49-ab97-4808-96da-642983c3bafa/manager/0.log" Jan 09 02:03:52 crc kubenswrapper[4945]: I0109 02:03:52.723583 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-vcpxm_0383d95b-a4e8-401c-92f9-7564a1aa286a/manager/0.log" Jan 09 02:03:52 crc kubenswrapper[4945]: I0109 02:03:52.836141 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-v7fnt_e16d1a8a-0404-4007-86fd-886fed232b0b/operator/0.log" Jan 09 02:03:52 crc kubenswrapper[4945]: I0109 02:03:52.954449 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-5twfl_aed9ac7f-f2e8-4729-8caa-d6c81f09a392/manager/0.log" Jan 09 02:03:53 crc kubenswrapper[4945]: I0109 02:03:53.266354 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-dg8tj_37d70ea8-8db9-4194-a697-5e8f77c89be0/manager/0.log" Jan 09 02:03:53 crc kubenswrapper[4945]: I0109 02:03:53.312613 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-68d988df55-h6wf5_eed07e92-bbf8-4845-ac9a-3ab8eafadb58/manager/0.log" Jan 09 02:03:53 crc kubenswrapper[4945]: I0109 02:03:53.358914 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-m49gl_a62c3d41-f859-4eea-8e77-468d06e687bd/manager/0.log" Jan 09 02:03:54 crc kubenswrapper[4945]: I0109 02:03:54.573246 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-57bd96d86c-vf2m9_640c2f26-25d6-4aa3-a5ed-fb6a0890bc3d/manager/0.log" Jan 09 02:04:00 crc kubenswrapper[4945]: I0109 02:04:00.013085 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:04:00 crc kubenswrapper[4945]: E0109 02:04:00.013776 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:04:13 crc kubenswrapper[4945]: I0109 02:04:13.000334 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:04:13 crc kubenswrapper[4945]: E0109 02:04:13.001160 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:04:13 crc kubenswrapper[4945]: I0109 02:04:13.217586 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-79m2f_662bb234-dff9-4a44-9432-c2f864195ce0/control-plane-machine-set-operator/0.log" Jan 09 02:04:13 crc kubenswrapper[4945]: I0109 02:04:13.415210 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lmmph_f15a080f-5182-49ab-bcb8-75d85654378a/kube-rbac-proxy/0.log" Jan 09 02:04:13 crc kubenswrapper[4945]: I0109 02:04:13.447097 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lmmph_f15a080f-5182-49ab-bcb8-75d85654378a/machine-api-operator/0.log" Jan 09 02:04:26 crc kubenswrapper[4945]: I0109 02:04:26.166219 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-rkvnx_8b1362ec-1b4b-44f9-8f19-29f48e9d443e/cert-manager-controller/0.log" Jan 09 02:04:26 crc kubenswrapper[4945]: I0109 02:04:26.367615 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-vb49h_2c1f140e-13b8-4135-bd0f-ac210dd8d3d1/cert-manager-cainjector/0.log" Jan 09 02:04:26 crc kubenswrapper[4945]: I0109 02:04:26.430835 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-csm2r_d143d502-c896-48f2-bbd3-f9fbc4e814fc/cert-manager-webhook/0.log" Jan 09 02:04:27 crc kubenswrapper[4945]: I0109 02:04:27.000286 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:04:27 crc kubenswrapper[4945]: E0109 02:04:27.000562 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:04:39 crc kubenswrapper[4945]: I0109 02:04:39.347873 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-phj4q_4ab97d43-54d6-4a5e-8eab-a87fbecd6c8c/nmstate-console-plugin/0.log" Jan 09 02:04:39 crc kubenswrapper[4945]: I0109 02:04:39.502814 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-dkw8b_333350be-c9ef-44d4-9de8-58b29fe9da27/nmstate-handler/0.log" Jan 09 02:04:39 crc kubenswrapper[4945]: I0109 02:04:39.526026 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-wpxks_9b83df9c-7230-4d13-922b-de7ea806d98d/kube-rbac-proxy/0.log" Jan 09 02:04:39 crc kubenswrapper[4945]: I0109 02:04:39.698497 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-wpxks_9b83df9c-7230-4d13-922b-de7ea806d98d/nmstate-metrics/0.log" Jan 09 02:04:39 crc kubenswrapper[4945]: I0109 02:04:39.730113 4945 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-9jxqm_44391049-db83-4408-bf6c-086285c20de7/nmstate-operator/0.log" Jan 09 02:04:39 crc kubenswrapper[4945]: I0109 02:04:39.844172 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-fz94s_9011efad-d324-473e-bb95-cb0ee8905fc1/nmstate-webhook/0.log" Jan 09 02:04:41 crc kubenswrapper[4945]: I0109 02:04:41.000842 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:04:41 crc kubenswrapper[4945]: E0109 02:04:41.001346 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:04:54 crc kubenswrapper[4945]: I0109 02:04:54.851729 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-5bzq8_be31d881-d239-450c-8a45-622a6645072f/prometheus-operator/0.log" Jan 09 02:04:54 crc kubenswrapper[4945]: I0109 02:04:54.953124 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t_0b3d5e1a-f84a-4813-83d1-91e6ca00f5bf/prometheus-operator-admission-webhook/0.log" Jan 09 02:04:55 crc kubenswrapper[4945]: I0109 02:04:55.056166 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh_72c8983c-93b8-44f7-bbe1-9e8d048f6b3f/prometheus-operator-admission-webhook/0.log" Jan 09 02:04:55 crc kubenswrapper[4945]: I0109 02:04:55.151823 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-bc94x_77ae07e8-12da-477b-86be-05e24de9edf7/operator/0.log" Jan 09 02:04:55 crc kubenswrapper[4945]: I0109 02:04:55.271660 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-w65s2_bca46932-b26c-40a7-a51f-9008f7e153ab/perses-operator/0.log" Jan 09 02:04:56 crc kubenswrapper[4945]: I0109 02:04:56.001409 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:04:56 crc kubenswrapper[4945]: E0109 02:04:56.002757 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:05:09 crc kubenswrapper[4945]: I0109 02:05:09.539085 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-mdsjd_8e87d1f4-a552-4809-9897-d28efa1967da/kube-rbac-proxy/0.log" Jan 09 02:05:09 crc kubenswrapper[4945]: I0109 02:05:09.672246 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/cp-frr-files/0.log" Jan 09 02:05:09 crc kubenswrapper[4945]: I0109 02:05:09.902644 4945 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/cp-frr-files/0.log" Jan 09 02:05:09 crc kubenswrapper[4945]: I0109 02:05:09.908902 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-mdsjd_8e87d1f4-a552-4809-9897-d28efa1967da/controller/0.log" Jan 09 02:05:09 crc kubenswrapper[4945]: I0109 02:05:09.919868 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/cp-reloader/0.log" Jan 09 02:05:09 crc kubenswrapper[4945]: I0109 02:05:09.947557 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/cp-metrics/0.log" Jan 09 02:05:10 crc kubenswrapper[4945]: I0109 02:05:10.001061 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:05:10 crc kubenswrapper[4945]: E0109 02:05:10.001360 4945 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vbm95_openshift-machine-config-operator(694a1575-6630-406f-93e7-ef55359bc79c)\"" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" Jan 09 02:05:10 crc kubenswrapper[4945]: I0109 02:05:10.100146 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/cp-reloader/0.log" Jan 09 02:05:10 crc kubenswrapper[4945]: I0109 02:05:10.245169 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/cp-reloader/0.log" Jan 09 02:05:10 crc kubenswrapper[4945]: I0109 02:05:10.266607 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/cp-metrics/0.log" Jan 09 02:05:10 crc kubenswrapper[4945]: I0109 02:05:10.267262 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/cp-frr-files/0.log" Jan 09 02:05:10 crc kubenswrapper[4945]: I0109 02:05:10.349119 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/cp-metrics/0.log" Jan 09 02:05:10 crc kubenswrapper[4945]: I0109 02:05:10.491786 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/cp-frr-files/0.log" Jan 09 02:05:10 crc kubenswrapper[4945]: I0109 02:05:10.511570 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/cp-reloader/0.log" Jan 09 02:05:10 crc kubenswrapper[4945]: I0109 02:05:10.523167 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/cp-metrics/0.log" Jan 09 02:05:10 crc kubenswrapper[4945]: I0109 02:05:10.550563 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/controller/0.log" Jan 09 02:05:10 crc kubenswrapper[4945]: I0109 02:05:10.693152 4945 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/frr-metrics/0.log" Jan 09 02:05:10 crc kubenswrapper[4945]: I0109 02:05:10.725690 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/kube-rbac-proxy/0.log" Jan 09 02:05:10 crc kubenswrapper[4945]: I0109 02:05:10.732862 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/kube-rbac-proxy-frr/0.log" Jan 09 02:05:10 crc kubenswrapper[4945]: I0109 02:05:10.915416 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/reloader/0.log" Jan 09 02:05:10 crc kubenswrapper[4945]: I0109 02:05:10.952878 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-rfzl6_4bcee75f-e456-41cb-bf09-b6cc6052c849/frr-k8s-webhook-server/0.log" Jan 09 02:05:11 crc kubenswrapper[4945]: I0109 02:05:11.116675 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7748b8f8-75z5m_b278de9b-64f4-4686-811f-bee3eff92638/manager/0.log" Jan 09 02:05:11 crc kubenswrapper[4945]: I0109 02:05:11.285811 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-759575b9df-pnhm9_ce1c5efe-289d-4419-9089-bb1fc5761690/webhook-server/0.log" Jan 09 02:05:11 crc kubenswrapper[4945]: I0109 02:05:11.403704 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nhjd2_9be4e803-c45e-4443-b1f5-2ea89eed04e6/kube-rbac-proxy/0.log" Jan 09 02:05:12 crc kubenswrapper[4945]: I0109 02:05:12.487805 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nhjd2_9be4e803-c45e-4443-b1f5-2ea89eed04e6/speaker/0.log" Jan 09 02:05:13 crc kubenswrapper[4945]: I0109 02:05:13.828346 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tcq8p_27da940b-5334-4559-8cf3-754a90037ef5/frr/0.log" Jan 09 02:05:22 crc kubenswrapper[4945]: I0109 02:05:22.000509 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713" Jan 09 02:05:22 crc kubenswrapper[4945]: I0109 02:05:22.924139 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"31191e133d2d313e4adf0a2674b2582f584744001ec02279d8ae28794d2aa4e5"} Jan 09 02:05:27 crc kubenswrapper[4945]: I0109 02:05:27.600171 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs_75c30e97-a22c-489e-84d5-053809039e77/util/0.log" Jan 09 02:05:27 crc kubenswrapper[4945]: I0109 02:05:27.756418 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs_75c30e97-a22c-489e-84d5-053809039e77/util/0.log" Jan 09 02:05:27 crc kubenswrapper[4945]: I0109 02:05:27.779546 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs_75c30e97-a22c-489e-84d5-053809039e77/pull/0.log" Jan 09 02:05:28 crc kubenswrapper[4945]: I0109 02:05:28.275776 4945 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs_75c30e97-a22c-489e-84d5-053809039e77/pull/0.log" Jan 09 02:05:28 crc kubenswrapper[4945]: I0109 02:05:28.474432 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs_75c30e97-a22c-489e-84d5-053809039e77/util/0.log" Jan 09 02:05:28 crc kubenswrapper[4945]: I0109 02:05:28.532693 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs_75c30e97-a22c-489e-84d5-053809039e77/extract/0.log" Jan 09 02:05:28 crc kubenswrapper[4945]: I0109 02:05:28.564400 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ar59gs_75c30e97-a22c-489e-84d5-053809039e77/pull/0.log" Jan 09 02:05:28 crc kubenswrapper[4945]: I0109 02:05:28.704027 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc_57ccdf78-e21f-4952-a6a3-2a66d509b1bb/util/0.log" Jan 09 02:05:28 crc kubenswrapper[4945]: I0109 02:05:28.868937 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc_57ccdf78-e21f-4952-a6a3-2a66d509b1bb/util/0.log" Jan 09 02:05:28 crc kubenswrapper[4945]: I0109 02:05:28.899731 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc_57ccdf78-e21f-4952-a6a3-2a66d509b1bb/pull/0.log" Jan 09 02:05:28 crc kubenswrapper[4945]: I0109 02:05:28.953284 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc_57ccdf78-e21f-4952-a6a3-2a66d509b1bb/pull/0.log" Jan 09 02:05:29 crc kubenswrapper[4945]: I0109 02:05:29.055817 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc_57ccdf78-e21f-4952-a6a3-2a66d509b1bb/util/0.log" Jan 09 02:05:29 crc kubenswrapper[4945]: I0109 02:05:29.068313 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc_57ccdf78-e21f-4952-a6a3-2a66d509b1bb/pull/0.log" Jan 09 02:05:29 crc kubenswrapper[4945]: I0109 02:05:29.097420 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4pbfkc_57ccdf78-e21f-4952-a6a3-2a66d509b1bb/extract/0.log" Jan 09 02:05:29 crc kubenswrapper[4945]: I0109 02:05:29.276764 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b_9fb4e4b3-9425-474f-9125-d98c958af414/util/0.log" Jan 09 02:05:29 crc kubenswrapper[4945]: I0109 02:05:29.879618 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b_9fb4e4b3-9425-474f-9125-d98c958af414/pull/0.log" Jan 09 02:05:29 crc kubenswrapper[4945]: I0109 02:05:29.898286 4945 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b_9fb4e4b3-9425-474f-9125-d98c958af414/util/0.log" Jan 09 02:05:29 crc kubenswrapper[4945]: I0109 02:05:29.946970 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b_9fb4e4b3-9425-474f-9125-d98c958af414/pull/0.log" Jan 09 02:05:30 crc kubenswrapper[4945]: I0109 02:05:30.096892 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b_9fb4e4b3-9425-474f-9125-d98c958af414/pull/0.log" Jan 09 02:05:30 crc kubenswrapper[4945]: I0109 02:05:30.099571 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b_9fb4e4b3-9425-474f-9125-d98c958af414/util/0.log" Jan 09 02:05:30 crc kubenswrapper[4945]: I0109 02:05:30.153930 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa89qt7b_9fb4e4b3-9425-474f-9125-d98c958af414/extract/0.log" Jan 09 02:05:30 crc kubenswrapper[4945]: I0109 02:05:30.308944 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp_674b6d94-cd48-43a3-a15b-748ef00b6579/util/0.log" Jan 09 02:05:30 crc kubenswrapper[4945]: I0109 02:05:30.497316 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp_674b6d94-cd48-43a3-a15b-748ef00b6579/util/0.log" Jan 09 02:05:30 crc kubenswrapper[4945]: I0109 02:05:30.521076 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp_674b6d94-cd48-43a3-a15b-748ef00b6579/pull/0.log" Jan 09 02:05:30 crc kubenswrapper[4945]: I0109 02:05:30.528191 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp_674b6d94-cd48-43a3-a15b-748ef00b6579/pull/0.log" Jan 09 02:05:30 crc kubenswrapper[4945]: I0109 02:05:30.738145 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp_674b6d94-cd48-43a3-a15b-748ef00b6579/util/0.log" Jan 09 02:05:30 crc kubenswrapper[4945]: I0109 02:05:30.741914 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp_674b6d94-cd48-43a3-a15b-748ef00b6579/pull/0.log" Jan 09 02:05:30 crc kubenswrapper[4945]: I0109 02:05:30.790524 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f088wlnp_674b6d94-cd48-43a3-a15b-748ef00b6579/extract/0.log" Jan 09 02:05:30 crc kubenswrapper[4945]: I0109 02:05:30.920325 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bzbww_def4b1da-2187-4d51-be0d-ae39bd2a7b30/extract-utilities/0.log" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.100803 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bzbww_def4b1da-2187-4d51-be0d-ae39bd2a7b30/extract-content/0.log" Jan 09 02:05:31 crc 
kubenswrapper[4945]: I0109 02:05:31.116877 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bzbww_def4b1da-2187-4d51-be0d-ae39bd2a7b30/extract-utilities/0.log" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.127868 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bzbww_def4b1da-2187-4d51-be0d-ae39bd2a7b30/extract-content/0.log" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.309755 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bzbww_def4b1da-2187-4d51-be0d-ae39bd2a7b30/extract-utilities/0.log" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.325704 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bzbww_def4b1da-2187-4d51-be0d-ae39bd2a7b30/extract-content/0.log" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.408645 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vsdkv_fbc0af81-185c-4502-a81d-0c843ba6d0cb/extract-utilities/0.log" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.598324 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-spq2g"] Jan 09 02:05:31 crc kubenswrapper[4945]: E0109 02:05:31.598747 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad139e2-ce6e-4bcf-84f8-bc6976d20a28" containerName="registry-server" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.598765 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad139e2-ce6e-4bcf-84f8-bc6976d20a28" containerName="registry-server" Jan 09 02:05:31 crc kubenswrapper[4945]: E0109 02:05:31.598800 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad139e2-ce6e-4bcf-84f8-bc6976d20a28" containerName="extract-utilities" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.598807 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad139e2-ce6e-4bcf-84f8-bc6976d20a28" containerName="extract-utilities" Jan 09 02:05:31 crc kubenswrapper[4945]: E0109 02:05:31.598818 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad139e2-ce6e-4bcf-84f8-bc6976d20a28" containerName="extract-content" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.598824 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad139e2-ce6e-4bcf-84f8-bc6976d20a28" containerName="extract-content" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.599023 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad139e2-ce6e-4bcf-84f8-bc6976d20a28" containerName="registry-server" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.600531 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-spq2g" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.627925 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vsdkv_fbc0af81-185c-4502-a81d-0c843ba6d0cb/extract-content/0.log" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.632698 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-spq2g"] Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.657228 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vsdkv_fbc0af81-185c-4502-a81d-0c843ba6d0cb/extract-content/0.log" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.681358 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vsdkv_fbc0af81-185c-4502-a81d-0c843ba6d0cb/extract-utilities/0.log" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.694366 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-utilities\") pod \"redhat-operators-spq2g\" (UID: \"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd\") " pod="openshift-marketplace/redhat-operators-spq2g" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.694503 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-catalog-content\") pod \"redhat-operators-spq2g\" (UID: \"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd\") " pod="openshift-marketplace/redhat-operators-spq2g" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.694541 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hjhg\" (UniqueName: \"kubernetes.io/projected/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-kube-api-access-8hjhg\") pod \"redhat-operators-spq2g\" (UID: \"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd\") " pod="openshift-marketplace/redhat-operators-spq2g" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.705396 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bzbww_def4b1da-2187-4d51-be0d-ae39bd2a7b30/registry-server/0.log" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.797116 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-utilities\") pod \"redhat-operators-spq2g\" (UID: \"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd\") " pod="openshift-marketplace/redhat-operators-spq2g" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.797483 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-catalog-content\") pod \"redhat-operators-spq2g\" (UID: \"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd\") " pod="openshift-marketplace/redhat-operators-spq2g" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.797517 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hjhg\" (UniqueName: \"kubernetes.io/projected/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-kube-api-access-8hjhg\") pod \"redhat-operators-spq2g\" (UID: \"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd\") " 
pod="openshift-marketplace/redhat-operators-spq2g" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.797643 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-utilities\") pod \"redhat-operators-spq2g\" (UID: \"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd\") " pod="openshift-marketplace/redhat-operators-spq2g" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.797874 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-catalog-content\") pod \"redhat-operators-spq2g\" (UID: \"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd\") " pod="openshift-marketplace/redhat-operators-spq2g" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.816641 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hjhg\" (UniqueName: \"kubernetes.io/projected/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-kube-api-access-8hjhg\") pod \"redhat-operators-spq2g\" (UID: \"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd\") " pod="openshift-marketplace/redhat-operators-spq2g" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.888761 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vsdkv_fbc0af81-185c-4502-a81d-0c843ba6d0cb/extract-utilities/0.log" Jan 09 02:05:31 crc kubenswrapper[4945]: I0109 02:05:31.935741 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spq2g" Jan 09 02:05:32 crc kubenswrapper[4945]: I0109 02:05:32.010846 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vsdkv_fbc0af81-185c-4502-a81d-0c843ba6d0cb/registry-server/0.log" Jan 09 02:05:32 crc kubenswrapper[4945]: I0109 02:05:32.178653 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-tl6cr_776b72cb-be81-499c-9a26-09ce115d3b8e/marketplace-operator/0.log" Jan 09 02:05:32 crc kubenswrapper[4945]: I0109 02:05:32.229617 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vsdkv_fbc0af81-185c-4502-a81d-0c843ba6d0cb/extract-content/0.log" Jan 09 02:05:32 crc kubenswrapper[4945]: I0109 02:05:32.507060 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xsf4k_fe4f2df8-e361-4814-bc78-16d82dd1cb84/extract-utilities/0.log" Jan 09 02:05:32 crc kubenswrapper[4945]: I0109 02:05:32.700316 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-spq2g"] Jan 09 02:05:32 crc kubenswrapper[4945]: I0109 02:05:32.711035 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xsf4k_fe4f2df8-e361-4814-bc78-16d82dd1cb84/extract-content/0.log" Jan 09 02:05:32 crc kubenswrapper[4945]: I0109 02:05:32.714089 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xsf4k_fe4f2df8-e361-4814-bc78-16d82dd1cb84/extract-content/0.log" Jan 09 02:05:32 crc kubenswrapper[4945]: I0109 02:05:32.739959 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xsf4k_fe4f2df8-e361-4814-bc78-16d82dd1cb84/extract-utilities/0.log" Jan 09 02:05:32 crc kubenswrapper[4945]: I0109 02:05:32.919481 4945 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xsf4k_fe4f2df8-e361-4814-bc78-16d82dd1cb84/extract-utilities/0.log" Jan 09 02:05:32 crc kubenswrapper[4945]: I0109 02:05:32.951567 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xsf4k_fe4f2df8-e361-4814-bc78-16d82dd1cb84/extract-content/0.log" Jan 09 02:05:33 crc kubenswrapper[4945]: I0109 02:05:33.057153 4945 generic.go:334] "Generic (PLEG): container finished" podID="fad3a8f6-c840-4cfb-bd1f-fc58d98244fd" containerID="5a8c570be787de4a75058eaacd831e1605a916a56c34bfd9cd3255309df6883e" exitCode=0 Jan 09 02:05:33 crc kubenswrapper[4945]: I0109 02:05:33.057210 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spq2g" event={"ID":"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd","Type":"ContainerDied","Data":"5a8c570be787de4a75058eaacd831e1605a916a56c34bfd9cd3255309df6883e"} Jan 09 02:05:33 crc kubenswrapper[4945]: I0109 02:05:33.057246 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spq2g" event={"ID":"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd","Type":"ContainerStarted","Data":"a5c6c11c35000dbe68e7cd96745e4cd0699a55fbc03ec1bc1cb92265ad3f44f6"} Jan 09 02:05:33 crc kubenswrapper[4945]: I0109 02:05:33.069903 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mpr9p_ce75ef30-8da2-4993-b5d6-6db6250cb3ac/extract-utilities/0.log" Jan 09 02:05:33 crc kubenswrapper[4945]: I0109 02:05:33.301392 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xsf4k_fe4f2df8-e361-4814-bc78-16d82dd1cb84/registry-server/0.log" Jan 09 02:05:33 crc kubenswrapper[4945]: I0109 02:05:33.431525 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mpr9p_ce75ef30-8da2-4993-b5d6-6db6250cb3ac/extract-content/0.log" Jan 09 02:05:33 crc kubenswrapper[4945]: I0109 02:05:33.434795 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mpr9p_ce75ef30-8da2-4993-b5d6-6db6250cb3ac/extract-content/0.log" Jan 09 02:05:33 crc kubenswrapper[4945]: I0109 02:05:33.473635 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mpr9p_ce75ef30-8da2-4993-b5d6-6db6250cb3ac/extract-utilities/0.log" Jan 09 02:05:33 crc kubenswrapper[4945]: I0109 02:05:33.654704 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mpr9p_ce75ef30-8da2-4993-b5d6-6db6250cb3ac/extract-utilities/0.log" Jan 09 02:05:33 crc kubenswrapper[4945]: I0109 02:05:33.696385 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mpr9p_ce75ef30-8da2-4993-b5d6-6db6250cb3ac/extract-content/0.log" Jan 09 02:05:34 crc kubenswrapper[4945]: I0109 02:05:34.947224 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mpr9p_ce75ef30-8da2-4993-b5d6-6db6250cb3ac/registry-server/0.log" Jan 09 02:05:35 crc kubenswrapper[4945]: I0109 02:05:35.100751 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spq2g" event={"ID":"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd","Type":"ContainerStarted","Data":"bb4877177893ef7d81769822347ff6e5de9a3ffcdb18da6f4f283477adc5f6b9"} Jan 09 02:05:38 crc kubenswrapper[4945]: I0109 02:05:38.129648 4945 generic.go:334] "Generic (PLEG): 
container finished" podID="fad3a8f6-c840-4cfb-bd1f-fc58d98244fd" containerID="bb4877177893ef7d81769822347ff6e5de9a3ffcdb18da6f4f283477adc5f6b9" exitCode=0 Jan 09 02:05:38 crc kubenswrapper[4945]: I0109 02:05:38.129748 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spq2g" event={"ID":"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd","Type":"ContainerDied","Data":"bb4877177893ef7d81769822347ff6e5de9a3ffcdb18da6f4f283477adc5f6b9"} Jan 09 02:05:39 crc kubenswrapper[4945]: I0109 02:05:39.141975 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spq2g" event={"ID":"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd","Type":"ContainerStarted","Data":"7e82073af19daecc642ba432ce896b9340a92a0f18f44a136559d89c68a644d6"} Jan 09 02:05:41 crc kubenswrapper[4945]: I0109 02:05:41.936652 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-spq2g" Jan 09 02:05:41 crc kubenswrapper[4945]: I0109 02:05:41.937125 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-spq2g" Jan 09 02:05:42 crc kubenswrapper[4945]: I0109 02:05:42.999653 4945 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spq2g" podUID="fad3a8f6-c840-4cfb-bd1f-fc58d98244fd" containerName="registry-server" probeResult="failure" output=< Jan 09 02:05:42 crc kubenswrapper[4945]: timeout: failed to connect service ":50051" within 1s Jan 09 02:05:42 crc kubenswrapper[4945]: > Jan 09 02:05:47 crc kubenswrapper[4945]: I0109 02:05:47.202551 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b69df9d97-tv7kh_72c8983c-93b8-44f7-bbe1-9e8d048f6b3f/prometheus-operator-admission-webhook/0.log" Jan 09 02:05:47 crc kubenswrapper[4945]: I0109 02:05:47.204679 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b69df9d97-fmt7t_0b3d5e1a-f84a-4813-83d1-91e6ca00f5bf/prometheus-operator-admission-webhook/0.log" Jan 09 02:05:47 crc kubenswrapper[4945]: I0109 02:05:47.218210 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-5bzq8_be31d881-d239-450c-8a45-622a6645072f/prometheus-operator/0.log" Jan 09 02:05:47 crc kubenswrapper[4945]: I0109 02:05:47.402048 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-w65s2_bca46932-b26c-40a7-a51f-9008f7e153ab/perses-operator/0.log" Jan 09 02:05:47 crc kubenswrapper[4945]: I0109 02:05:47.403731 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-bc94x_77ae07e8-12da-477b-86be-05e24de9edf7/operator/0.log" Jan 09 02:05:52 crc kubenswrapper[4945]: I0109 02:05:52.016862 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-spq2g" Jan 09 02:05:52 crc kubenswrapper[4945]: I0109 02:05:52.043325 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-spq2g" podStartSLOduration=15.567038894 podStartE2EDuration="21.04330423s" podCreationTimestamp="2026-01-09 02:05:31 +0000 UTC" firstStartedPulling="2026-01-09 02:05:33.05896158 +0000 UTC m=+10203.370120526" lastFinishedPulling="2026-01-09 02:05:38.535226926 +0000 UTC m=+10208.846385862" 
observedRunningTime="2026-01-09 02:05:39.158927056 +0000 UTC m=+10209.470086022" watchObservedRunningTime="2026-01-09 02:05:52.04330423 +0000 UTC m=+10222.354463166" Jan 09 02:05:52 crc kubenswrapper[4945]: I0109 02:05:52.089961 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-spq2g" Jan 09 02:05:52 crc kubenswrapper[4945]: I0109 02:05:52.250051 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-spq2g"] Jan 09 02:05:53 crc kubenswrapper[4945]: I0109 02:05:53.296446 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-spq2g" podUID="fad3a8f6-c840-4cfb-bd1f-fc58d98244fd" containerName="registry-server" containerID="cri-o://7e82073af19daecc642ba432ce896b9340a92a0f18f44a136559d89c68a644d6" gracePeriod=2 Jan 09 02:05:54 crc kubenswrapper[4945]: I0109 02:05:54.308273 4945 generic.go:334] "Generic (PLEG): container finished" podID="fad3a8f6-c840-4cfb-bd1f-fc58d98244fd" containerID="7e82073af19daecc642ba432ce896b9340a92a0f18f44a136559d89c68a644d6" exitCode=0 Jan 09 02:05:54 crc kubenswrapper[4945]: I0109 02:05:54.308365 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spq2g" event={"ID":"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd","Type":"ContainerDied","Data":"7e82073af19daecc642ba432ce896b9340a92a0f18f44a136559d89c68a644d6"} Jan 09 02:05:54 crc kubenswrapper[4945]: I0109 02:05:54.308784 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spq2g" event={"ID":"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd","Type":"ContainerDied","Data":"a5c6c11c35000dbe68e7cd96745e4cd0699a55fbc03ec1bc1cb92265ad3f44f6"} Jan 09 02:05:54 crc kubenswrapper[4945]: I0109 02:05:54.308802 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5c6c11c35000dbe68e7cd96745e4cd0699a55fbc03ec1bc1cb92265ad3f44f6" Jan 09 02:05:54 crc kubenswrapper[4945]: I0109 02:05:54.310450 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-spq2g" Jan 09 02:05:54 crc kubenswrapper[4945]: I0109 02:05:54.387912 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-utilities\") pod \"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd\" (UID: \"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd\") " Jan 09 02:05:54 crc kubenswrapper[4945]: I0109 02:05:54.388028 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-catalog-content\") pod \"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd\" (UID: \"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd\") " Jan 09 02:05:54 crc kubenswrapper[4945]: I0109 02:05:54.388142 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hjhg\" (UniqueName: \"kubernetes.io/projected/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-kube-api-access-8hjhg\") pod \"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd\" (UID: \"fad3a8f6-c840-4cfb-bd1f-fc58d98244fd\") " Jan 09 02:05:54 crc kubenswrapper[4945]: I0109 02:05:54.393422 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-utilities" (OuterVolumeSpecName: "utilities") pod "fad3a8f6-c840-4cfb-bd1f-fc58d98244fd" (UID: "fad3a8f6-c840-4cfb-bd1f-fc58d98244fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 02:05:54 crc kubenswrapper[4945]: I0109 02:05:54.399235 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-kube-api-access-8hjhg" (OuterVolumeSpecName: "kube-api-access-8hjhg") pod "fad3a8f6-c840-4cfb-bd1f-fc58d98244fd" (UID: "fad3a8f6-c840-4cfb-bd1f-fc58d98244fd"). InnerVolumeSpecName "kube-api-access-8hjhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 02:05:54 crc kubenswrapper[4945]: I0109 02:05:54.491002 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hjhg\" (UniqueName: \"kubernetes.io/projected/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-kube-api-access-8hjhg\") on node \"crc\" DevicePath \"\"" Jan 09 02:05:54 crc kubenswrapper[4945]: I0109 02:05:54.491036 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 02:05:54 crc kubenswrapper[4945]: I0109 02:05:54.531298 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fad3a8f6-c840-4cfb-bd1f-fc58d98244fd" (UID: "fad3a8f6-c840-4cfb-bd1f-fc58d98244fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 02:05:54 crc kubenswrapper[4945]: I0109 02:05:54.592525 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 02:05:55 crc kubenswrapper[4945]: I0109 02:05:55.316849 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-spq2g" Jan 09 02:05:55 crc kubenswrapper[4945]: I0109 02:05:55.351975 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-spq2g"] Jan 09 02:05:55 crc kubenswrapper[4945]: I0109 02:05:55.366396 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-spq2g"] Jan 09 02:05:56 crc kubenswrapper[4945]: I0109 02:05:56.011681 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fad3a8f6-c840-4cfb-bd1f-fc58d98244fd" path="/var/lib/kubelet/pods/fad3a8f6-c840-4cfb-bd1f-fc58d98244fd/volumes" Jan 09 02:06:06 crc kubenswrapper[4945]: E0109 02:06:06.704518 4945 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.74:57354->38.102.83.74:45665: read tcp 38.102.83.74:57354->38.102.83.74:45665: read: connection reset by peer Jan 09 02:06:14 crc kubenswrapper[4945]: E0109 02:06:14.345274 4945 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.74:57606->38.102.83.74:45665: read tcp 38.102.83.74:57606->38.102.83.74:45665: read: connection reset by peer Jan 09 02:06:14 crc kubenswrapper[4945]: E0109 02:06:14.345787 4945 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.74:57606->38.102.83.74:45665: write tcp 38.102.83.74:57606->38.102.83.74:45665: write: broken pipe Jan 09 02:06:16 crc kubenswrapper[4945]: E0109 02:06:16.364093 4945 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.74:49174->38.102.83.74:45665: write tcp 38.102.83.74:49174->38.102.83.74:45665: write: broken pipe Jan 09 02:07:43 crc kubenswrapper[4945]: I0109 02:07:43.580278 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 02:07:43 crc kubenswrapper[4945]: I0109 02:07:43.581081 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 02:07:52 crc kubenswrapper[4945]: I0109 02:07:52.453380 4945 generic.go:334] "Generic (PLEG): container finished" podID="2616d81f-9f0d-4cee-80a4-59c1499fafa5" containerID="96b70a169b658510f745ca889b626e2ddba317d9d0b244e1a81304edcc92129d" exitCode=0 Jan 09 02:07:52 crc kubenswrapper[4945]: I0109 02:07:52.453471 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbqx9/must-gather-8r7k8" event={"ID":"2616d81f-9f0d-4cee-80a4-59c1499fafa5","Type":"ContainerDied","Data":"96b70a169b658510f745ca889b626e2ddba317d9d0b244e1a81304edcc92129d"} Jan 09 02:07:52 crc kubenswrapper[4945]: I0109 02:07:52.454666 4945 scope.go:117] "RemoveContainer" containerID="96b70a169b658510f745ca889b626e2ddba317d9d0b244e1a81304edcc92129d" Jan 09 02:07:53 crc kubenswrapper[4945]: I0109 02:07:53.054509 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sbqx9_must-gather-8r7k8_2616d81f-9f0d-4cee-80a4-59c1499fafa5/gather/0.log" Jan 09 02:07:57 crc kubenswrapper[4945]: I0109 02:07:57.236370 4945 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-pq2qj"] Jan 09 02:07:57 crc kubenswrapper[4945]: E0109 02:07:57.237555 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad3a8f6-c840-4cfb-bd1f-fc58d98244fd" containerName="extract-content" Jan 09 02:07:57 crc kubenswrapper[4945]: I0109 02:07:57.237577 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad3a8f6-c840-4cfb-bd1f-fc58d98244fd" containerName="extract-content" Jan 09 02:07:57 crc kubenswrapper[4945]: E0109 02:07:57.237603 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad3a8f6-c840-4cfb-bd1f-fc58d98244fd" containerName="registry-server" Jan 09 02:07:57 crc kubenswrapper[4945]: I0109 02:07:57.237614 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad3a8f6-c840-4cfb-bd1f-fc58d98244fd" containerName="registry-server" Jan 09 02:07:57 crc kubenswrapper[4945]: E0109 02:07:57.237632 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad3a8f6-c840-4cfb-bd1f-fc58d98244fd" containerName="extract-utilities" Jan 09 02:07:57 crc kubenswrapper[4945]: I0109 02:07:57.237641 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad3a8f6-c840-4cfb-bd1f-fc58d98244fd" containerName="extract-utilities" Jan 09 02:07:57 crc kubenswrapper[4945]: I0109 02:07:57.237962 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad3a8f6-c840-4cfb-bd1f-fc58d98244fd" containerName="registry-server" Jan 09 02:07:57 crc kubenswrapper[4945]: I0109 02:07:57.240022 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pq2qj" Jan 09 02:07:57 crc kubenswrapper[4945]: I0109 02:07:57.248364 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pq2qj"] Jan 09 02:07:57 crc kubenswrapper[4945]: I0109 02:07:57.337255 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8be9015-e4da-49c5-9b97-76c0360f00ac-catalog-content\") pod \"redhat-marketplace-pq2qj\" (UID: \"f8be9015-e4da-49c5-9b97-76c0360f00ac\") " pod="openshift-marketplace/redhat-marketplace-pq2qj" Jan 09 02:07:57 crc kubenswrapper[4945]: I0109 02:07:57.337477 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84nqc\" (UniqueName: \"kubernetes.io/projected/f8be9015-e4da-49c5-9b97-76c0360f00ac-kube-api-access-84nqc\") pod \"redhat-marketplace-pq2qj\" (UID: \"f8be9015-e4da-49c5-9b97-76c0360f00ac\") " pod="openshift-marketplace/redhat-marketplace-pq2qj" Jan 09 02:07:57 crc kubenswrapper[4945]: I0109 02:07:57.337575 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8be9015-e4da-49c5-9b97-76c0360f00ac-utilities\") pod \"redhat-marketplace-pq2qj\" (UID: \"f8be9015-e4da-49c5-9b97-76c0360f00ac\") " pod="openshift-marketplace/redhat-marketplace-pq2qj" Jan 09 02:07:57 crc kubenswrapper[4945]: I0109 02:07:57.438517 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8be9015-e4da-49c5-9b97-76c0360f00ac-catalog-content\") pod \"redhat-marketplace-pq2qj\" (UID: \"f8be9015-e4da-49c5-9b97-76c0360f00ac\") " pod="openshift-marketplace/redhat-marketplace-pq2qj" Jan 09 02:07:57 crc kubenswrapper[4945]: I0109 02:07:57.439023 4945 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8be9015-e4da-49c5-9b97-76c0360f00ac-catalog-content\") pod \"redhat-marketplace-pq2qj\" (UID: \"f8be9015-e4da-49c5-9b97-76c0360f00ac\") " pod="openshift-marketplace/redhat-marketplace-pq2qj" Jan 09 02:07:57 crc kubenswrapper[4945]: I0109 02:07:57.439034 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84nqc\" (UniqueName: \"kubernetes.io/projected/f8be9015-e4da-49c5-9b97-76c0360f00ac-kube-api-access-84nqc\") pod \"redhat-marketplace-pq2qj\" (UID: \"f8be9015-e4da-49c5-9b97-76c0360f00ac\") " pod="openshift-marketplace/redhat-marketplace-pq2qj" Jan 09 02:07:57 crc kubenswrapper[4945]: I0109 02:07:57.439294 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8be9015-e4da-49c5-9b97-76c0360f00ac-utilities\") pod \"redhat-marketplace-pq2qj\" (UID: \"f8be9015-e4da-49c5-9b97-76c0360f00ac\") " pod="openshift-marketplace/redhat-marketplace-pq2qj" Jan 09 02:07:57 crc kubenswrapper[4945]: I0109 02:07:57.439835 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8be9015-e4da-49c5-9b97-76c0360f00ac-utilities\") pod \"redhat-marketplace-pq2qj\" (UID: \"f8be9015-e4da-49c5-9b97-76c0360f00ac\") " pod="openshift-marketplace/redhat-marketplace-pq2qj" Jan 09 02:07:57 crc kubenswrapper[4945]: I0109 02:07:57.669385 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84nqc\" (UniqueName: \"kubernetes.io/projected/f8be9015-e4da-49c5-9b97-76c0360f00ac-kube-api-access-84nqc\") pod \"redhat-marketplace-pq2qj\" (UID: \"f8be9015-e4da-49c5-9b97-76c0360f00ac\") " pod="openshift-marketplace/redhat-marketplace-pq2qj" Jan 09 02:07:57 crc kubenswrapper[4945]: I0109 02:07:57.887486 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pq2qj" Jan 09 02:07:58 crc kubenswrapper[4945]: I0109 02:07:58.382923 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pq2qj"] Jan 09 02:07:58 crc kubenswrapper[4945]: W0109 02:07:58.391310 4945 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8be9015_e4da_49c5_9b97_76c0360f00ac.slice/crio-191cd7dc0f472a1d0cf4614039188d48e9528bf22dd0b35f11e059b6162e1693 WatchSource:0}: Error finding container 191cd7dc0f472a1d0cf4614039188d48e9528bf22dd0b35f11e059b6162e1693: Status 404 returned error can't find the container with id 191cd7dc0f472a1d0cf4614039188d48e9528bf22dd0b35f11e059b6162e1693 Jan 09 02:07:58 crc kubenswrapper[4945]: I0109 02:07:58.522071 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pq2qj" event={"ID":"f8be9015-e4da-49c5-9b97-76c0360f00ac","Type":"ContainerStarted","Data":"191cd7dc0f472a1d0cf4614039188d48e9528bf22dd0b35f11e059b6162e1693"} Jan 09 02:07:59 crc kubenswrapper[4945]: I0109 02:07:59.537251 4945 generic.go:334] "Generic (PLEG): container finished" podID="f8be9015-e4da-49c5-9b97-76c0360f00ac" containerID="27470a72b580f5f604e04a72e7c406524b226acae86e58566659b3306d8499ed" exitCode=0 Jan 09 02:07:59 crc kubenswrapper[4945]: I0109 02:07:59.537295 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pq2qj" event={"ID":"f8be9015-e4da-49c5-9b97-76c0360f00ac","Type":"ContainerDied","Data":"27470a72b580f5f604e04a72e7c406524b226acae86e58566659b3306d8499ed"} Jan 09 02:08:00 crc kubenswrapper[4945]: I0109 02:08:00.548610 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pq2qj" event={"ID":"f8be9015-e4da-49c5-9b97-76c0360f00ac","Type":"ContainerStarted","Data":"5a2847d12141fbde6ff8490b0273750b5e29652170ff92afb6ec2a940ae602b6"} Jan 09 02:08:01 crc kubenswrapper[4945]: I0109 02:08:01.258384 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sbqx9/must-gather-8r7k8"] Jan 09 02:08:01 crc kubenswrapper[4945]: I0109 02:08:01.258954 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-sbqx9/must-gather-8r7k8" podUID="2616d81f-9f0d-4cee-80a4-59c1499fafa5" containerName="copy" containerID="cri-o://c95c4efc4305348b45f966ad768a3e861d8b9f6c369a3f82d199febb532db356" gracePeriod=2 Jan 09 02:08:01 crc kubenswrapper[4945]: I0109 02:08:01.269598 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sbqx9/must-gather-8r7k8"] Jan 09 02:08:01 crc kubenswrapper[4945]: I0109 02:08:01.586231 4945 generic.go:334] "Generic (PLEG): container finished" podID="f8be9015-e4da-49c5-9b97-76c0360f00ac" containerID="5a2847d12141fbde6ff8490b0273750b5e29652170ff92afb6ec2a940ae602b6" exitCode=0 Jan 09 02:08:01 crc kubenswrapper[4945]: I0109 02:08:01.586359 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pq2qj" event={"ID":"f8be9015-e4da-49c5-9b97-76c0360f00ac","Type":"ContainerDied","Data":"5a2847d12141fbde6ff8490b0273750b5e29652170ff92afb6ec2a940ae602b6"} Jan 09 02:08:01 crc kubenswrapper[4945]: I0109 02:08:01.593811 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sbqx9_must-gather-8r7k8_2616d81f-9f0d-4cee-80a4-59c1499fafa5/copy/0.log" Jan 09 02:08:01 crc kubenswrapper[4945]: 
I0109 02:08:01.594575 4945 generic.go:334] "Generic (PLEG): container finished" podID="2616d81f-9f0d-4cee-80a4-59c1499fafa5" containerID="c95c4efc4305348b45f966ad768a3e861d8b9f6c369a3f82d199febb532db356" exitCode=143 Jan 09 02:08:01 crc kubenswrapper[4945]: I0109 02:08:01.773171 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sbqx9_must-gather-8r7k8_2616d81f-9f0d-4cee-80a4-59c1499fafa5/copy/0.log" Jan 09 02:08:01 crc kubenswrapper[4945]: I0109 02:08:01.773663 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sbqx9/must-gather-8r7k8" Jan 09 02:08:01 crc kubenswrapper[4945]: I0109 02:08:01.847256 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2616d81f-9f0d-4cee-80a4-59c1499fafa5-must-gather-output\") pod \"2616d81f-9f0d-4cee-80a4-59c1499fafa5\" (UID: \"2616d81f-9f0d-4cee-80a4-59c1499fafa5\") " Jan 09 02:08:01 crc kubenswrapper[4945]: I0109 02:08:01.847894 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvmbv\" (UniqueName: \"kubernetes.io/projected/2616d81f-9f0d-4cee-80a4-59c1499fafa5-kube-api-access-fvmbv\") pod \"2616d81f-9f0d-4cee-80a4-59c1499fafa5\" (UID: \"2616d81f-9f0d-4cee-80a4-59c1499fafa5\") " Jan 09 02:08:01 crc kubenswrapper[4945]: I0109 02:08:01.854075 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2616d81f-9f0d-4cee-80a4-59c1499fafa5-kube-api-access-fvmbv" (OuterVolumeSpecName: "kube-api-access-fvmbv") pod "2616d81f-9f0d-4cee-80a4-59c1499fafa5" (UID: "2616d81f-9f0d-4cee-80a4-59c1499fafa5"). InnerVolumeSpecName "kube-api-access-fvmbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 02:08:01 crc kubenswrapper[4945]: I0109 02:08:01.951191 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvmbv\" (UniqueName: \"kubernetes.io/projected/2616d81f-9f0d-4cee-80a4-59c1499fafa5-kube-api-access-fvmbv\") on node \"crc\" DevicePath \"\"" Jan 09 02:08:02 crc kubenswrapper[4945]: I0109 02:08:02.066558 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2616d81f-9f0d-4cee-80a4-59c1499fafa5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2616d81f-9f0d-4cee-80a4-59c1499fafa5" (UID: "2616d81f-9f0d-4cee-80a4-59c1499fafa5"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 02:08:02 crc kubenswrapper[4945]: I0109 02:08:02.157417 4945 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2616d81f-9f0d-4cee-80a4-59c1499fafa5-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 09 02:08:02 crc kubenswrapper[4945]: I0109 02:08:02.606748 4945 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sbqx9_must-gather-8r7k8_2616d81f-9f0d-4cee-80a4-59c1499fafa5/copy/0.log" Jan 09 02:08:02 crc kubenswrapper[4945]: I0109 02:08:02.608232 4945 scope.go:117] "RemoveContainer" containerID="c95c4efc4305348b45f966ad768a3e861d8b9f6c369a3f82d199febb532db356" Jan 09 02:08:02 crc kubenswrapper[4945]: I0109 02:08:02.608206 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sbqx9/must-gather-8r7k8" Jan 09 02:08:02 crc kubenswrapper[4945]: I0109 02:08:02.611811 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pq2qj" event={"ID":"f8be9015-e4da-49c5-9b97-76c0360f00ac","Type":"ContainerStarted","Data":"a9e6b14c3793e41587ed93bb021e3c78b8ff96fd46109a81ba2d78c66078553e"} Jan 09 02:08:02 crc kubenswrapper[4945]: I0109 02:08:02.648314 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pq2qj" podStartSLOduration=3.174723522 podStartE2EDuration="5.648288099s" podCreationTimestamp="2026-01-09 02:07:57 +0000 UTC" firstStartedPulling="2026-01-09 02:07:59.540503023 +0000 UTC m=+10349.851661969" lastFinishedPulling="2026-01-09 02:08:02.0140676 +0000 UTC m=+10352.325226546" observedRunningTime="2026-01-09 02:08:02.63980593 +0000 UTC m=+10352.950964876" watchObservedRunningTime="2026-01-09 02:08:02.648288099 +0000 UTC m=+10352.959447065" Jan 09 02:08:02 crc kubenswrapper[4945]: I0109 02:08:02.655543 4945 scope.go:117] "RemoveContainer" containerID="96b70a169b658510f745ca889b626e2ddba317d9d0b244e1a81304edcc92129d" Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.014657 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2616d81f-9f0d-4cee-80a4-59c1499fafa5" path="/var/lib/kubelet/pods/2616d81f-9f0d-4cee-80a4-59c1499fafa5/volumes" Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.610937 4945 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l8cwt"] Jan 09 02:08:04 crc kubenswrapper[4945]: E0109 02:08:04.611677 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2616d81f-9f0d-4cee-80a4-59c1499fafa5" containerName="gather" Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.611756 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2616d81f-9f0d-4cee-80a4-59c1499fafa5" containerName="gather" Jan 09 02:08:04 crc kubenswrapper[4945]: E0109 02:08:04.611826 4945 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2616d81f-9f0d-4cee-80a4-59c1499fafa5" containerName="copy" Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.611889 4945 state_mem.go:107] "Deleted CPUSet assignment" podUID="2616d81f-9f0d-4cee-80a4-59c1499fafa5" containerName="copy" Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.612208 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="2616d81f-9f0d-4cee-80a4-59c1499fafa5" containerName="copy" Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.612288 4945 memory_manager.go:354] "RemoveStaleState removing state" podUID="2616d81f-9f0d-4cee-80a4-59c1499fafa5" containerName="gather" Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.613889 4945 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l8cwt" Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.653617 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l8cwt"] Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.705468 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f91d633a-a83d-46b3-8aaa-7611e5772f91-catalog-content\") pod \"community-operators-l8cwt\" (UID: \"f91d633a-a83d-46b3-8aaa-7611e5772f91\") " pod="openshift-marketplace/community-operators-l8cwt" Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.705570 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llkjt\" (UniqueName: \"kubernetes.io/projected/f91d633a-a83d-46b3-8aaa-7611e5772f91-kube-api-access-llkjt\") pod \"community-operators-l8cwt\" (UID: \"f91d633a-a83d-46b3-8aaa-7611e5772f91\") " pod="openshift-marketplace/community-operators-l8cwt" Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.705670 4945 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f91d633a-a83d-46b3-8aaa-7611e5772f91-utilities\") pod \"community-operators-l8cwt\" (UID: \"f91d633a-a83d-46b3-8aaa-7611e5772f91\") " pod="openshift-marketplace/community-operators-l8cwt" Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.807869 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f91d633a-a83d-46b3-8aaa-7611e5772f91-catalog-content\") pod \"community-operators-l8cwt\" (UID: \"f91d633a-a83d-46b3-8aaa-7611e5772f91\") " pod="openshift-marketplace/community-operators-l8cwt" Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.807939 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llkjt\" (UniqueName: \"kubernetes.io/projected/f91d633a-a83d-46b3-8aaa-7611e5772f91-kube-api-access-llkjt\") pod \"community-operators-l8cwt\" (UID: \"f91d633a-a83d-46b3-8aaa-7611e5772f91\") " pod="openshift-marketplace/community-operators-l8cwt" Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.808037 4945 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f91d633a-a83d-46b3-8aaa-7611e5772f91-utilities\") pod \"community-operators-l8cwt\" (UID: \"f91d633a-a83d-46b3-8aaa-7611e5772f91\") " pod="openshift-marketplace/community-operators-l8cwt" Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.808355 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f91d633a-a83d-46b3-8aaa-7611e5772f91-catalog-content\") pod \"community-operators-l8cwt\" (UID: \"f91d633a-a83d-46b3-8aaa-7611e5772f91\") " pod="openshift-marketplace/community-operators-l8cwt" Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.808787 4945 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f91d633a-a83d-46b3-8aaa-7611e5772f91-utilities\") pod \"community-operators-l8cwt\" (UID: \"f91d633a-a83d-46b3-8aaa-7611e5772f91\") " pod="openshift-marketplace/community-operators-l8cwt" Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.834863 4945 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-llkjt\" (UniqueName: \"kubernetes.io/projected/f91d633a-a83d-46b3-8aaa-7611e5772f91-kube-api-access-llkjt\") pod \"community-operators-l8cwt\" (UID: \"f91d633a-a83d-46b3-8aaa-7611e5772f91\") " pod="openshift-marketplace/community-operators-l8cwt" Jan 09 02:08:04 crc kubenswrapper[4945]: I0109 02:08:04.946774 4945 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l8cwt" Jan 09 02:08:05 crc kubenswrapper[4945]: I0109 02:08:05.481826 4945 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l8cwt"] Jan 09 02:08:05 crc kubenswrapper[4945]: I0109 02:08:05.646828 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8cwt" event={"ID":"f91d633a-a83d-46b3-8aaa-7611e5772f91","Type":"ContainerStarted","Data":"df3a4c691744e48ff1687481d7d0e2a63d5604cd99f18ec6ee7e7a8288ea65a3"} Jan 09 02:08:06 crc kubenswrapper[4945]: I0109 02:08:06.660671 4945 generic.go:334] "Generic (PLEG): container finished" podID="f91d633a-a83d-46b3-8aaa-7611e5772f91" containerID="21a8ed86077f5c0f52de963e8bbdeec3e401bf837dd3b961240be967ab694670" exitCode=0 Jan 09 02:08:06 crc kubenswrapper[4945]: I0109 02:08:06.660909 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8cwt" event={"ID":"f91d633a-a83d-46b3-8aaa-7611e5772f91","Type":"ContainerDied","Data":"21a8ed86077f5c0f52de963e8bbdeec3e401bf837dd3b961240be967ab694670"} Jan 09 02:08:07 crc kubenswrapper[4945]: I0109 02:08:07.674497 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8cwt" event={"ID":"f91d633a-a83d-46b3-8aaa-7611e5772f91","Type":"ContainerStarted","Data":"132ab38a3c2c1e72908a49ee68232367f70acb2271053c92de030fa466ed89c6"} Jan 09 02:08:07 crc kubenswrapper[4945]: I0109 02:08:07.887706 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pq2qj" Jan 09 02:08:07 crc kubenswrapper[4945]: I0109 02:08:07.887765 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pq2qj" Jan 09 02:08:07 crc kubenswrapper[4945]: I0109 02:08:07.955481 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pq2qj" Jan 09 02:08:08 crc kubenswrapper[4945]: I0109 02:08:08.686735 4945 generic.go:334] "Generic (PLEG): container finished" podID="f91d633a-a83d-46b3-8aaa-7611e5772f91" containerID="132ab38a3c2c1e72908a49ee68232367f70acb2271053c92de030fa466ed89c6" exitCode=0 Jan 09 02:08:08 crc kubenswrapper[4945]: I0109 02:08:08.686843 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8cwt" event={"ID":"f91d633a-a83d-46b3-8aaa-7611e5772f91","Type":"ContainerDied","Data":"132ab38a3c2c1e72908a49ee68232367f70acb2271053c92de030fa466ed89c6"} Jan 09 02:08:08 crc kubenswrapper[4945]: I0109 02:08:08.738897 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pq2qj" Jan 09 02:08:09 crc kubenswrapper[4945]: I0109 02:08:09.704940 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8cwt" event={"ID":"f91d633a-a83d-46b3-8aaa-7611e5772f91","Type":"ContainerStarted","Data":"cfdea21534f6c6a84dddaaaa897950015fe2bfd2b4775a1973453b7b02800c6e"} 
Jan 09 02:08:09 crc kubenswrapper[4945]: I0109 02:08:09.732903 4945 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l8cwt" podStartSLOduration=3.219923301 podStartE2EDuration="5.732880816s" podCreationTimestamp="2026-01-09 02:08:04 +0000 UTC" firstStartedPulling="2026-01-09 02:08:06.663276711 +0000 UTC m=+10356.974435697" lastFinishedPulling="2026-01-09 02:08:09.176234266 +0000 UTC m=+10359.487393212" observedRunningTime="2026-01-09 02:08:09.722244964 +0000 UTC m=+10360.033403930" watchObservedRunningTime="2026-01-09 02:08:09.732880816 +0000 UTC m=+10360.044039762" Jan 09 02:08:10 crc kubenswrapper[4945]: I0109 02:08:10.993711 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pq2qj"] Jan 09 02:08:10 crc kubenswrapper[4945]: I0109 02:08:10.994167 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pq2qj" podUID="f8be9015-e4da-49c5-9b97-76c0360f00ac" containerName="registry-server" containerID="cri-o://a9e6b14c3793e41587ed93bb021e3c78b8ff96fd46109a81ba2d78c66078553e" gracePeriod=2 Jan 09 02:08:11 crc kubenswrapper[4945]: I0109 02:08:11.726320 4945 generic.go:334] "Generic (PLEG): container finished" podID="f8be9015-e4da-49c5-9b97-76c0360f00ac" containerID="a9e6b14c3793e41587ed93bb021e3c78b8ff96fd46109a81ba2d78c66078553e" exitCode=0 Jan 09 02:08:11 crc kubenswrapper[4945]: I0109 02:08:11.726654 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pq2qj" event={"ID":"f8be9015-e4da-49c5-9b97-76c0360f00ac","Type":"ContainerDied","Data":"a9e6b14c3793e41587ed93bb021e3c78b8ff96fd46109a81ba2d78c66078553e"} Jan 09 02:08:11 crc kubenswrapper[4945]: I0109 02:08:11.726689 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pq2qj" event={"ID":"f8be9015-e4da-49c5-9b97-76c0360f00ac","Type":"ContainerDied","Data":"191cd7dc0f472a1d0cf4614039188d48e9528bf22dd0b35f11e059b6162e1693"} Jan 09 02:08:11 crc kubenswrapper[4945]: I0109 02:08:11.726704 4945 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="191cd7dc0f472a1d0cf4614039188d48e9528bf22dd0b35f11e059b6162e1693" Jan 09 02:08:12 crc kubenswrapper[4945]: I0109 02:08:12.387289 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pq2qj" Jan 09 02:08:12 crc kubenswrapper[4945]: I0109 02:08:12.581763 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8be9015-e4da-49c5-9b97-76c0360f00ac-utilities\") pod \"f8be9015-e4da-49c5-9b97-76c0360f00ac\" (UID: \"f8be9015-e4da-49c5-9b97-76c0360f00ac\") " Jan 09 02:08:12 crc kubenswrapper[4945]: I0109 02:08:12.581845 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8be9015-e4da-49c5-9b97-76c0360f00ac-catalog-content\") pod \"f8be9015-e4da-49c5-9b97-76c0360f00ac\" (UID: \"f8be9015-e4da-49c5-9b97-76c0360f00ac\") " Jan 09 02:08:12 crc kubenswrapper[4945]: I0109 02:08:12.581931 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84nqc\" (UniqueName: \"kubernetes.io/projected/f8be9015-e4da-49c5-9b97-76c0360f00ac-kube-api-access-84nqc\") pod \"f8be9015-e4da-49c5-9b97-76c0360f00ac\" (UID: \"f8be9015-e4da-49c5-9b97-76c0360f00ac\") " Jan 09 02:08:12 crc kubenswrapper[4945]: I0109 02:08:12.584252 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8be9015-e4da-49c5-9b97-76c0360f00ac-utilities" (OuterVolumeSpecName: "utilities") pod "f8be9015-e4da-49c5-9b97-76c0360f00ac" (UID: "f8be9015-e4da-49c5-9b97-76c0360f00ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 02:08:12 crc kubenswrapper[4945]: I0109 02:08:12.588878 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8be9015-e4da-49c5-9b97-76c0360f00ac-kube-api-access-84nqc" (OuterVolumeSpecName: "kube-api-access-84nqc") pod "f8be9015-e4da-49c5-9b97-76c0360f00ac" (UID: "f8be9015-e4da-49c5-9b97-76c0360f00ac"). InnerVolumeSpecName "kube-api-access-84nqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 02:08:12 crc kubenswrapper[4945]: I0109 02:08:12.605935 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8be9015-e4da-49c5-9b97-76c0360f00ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8be9015-e4da-49c5-9b97-76c0360f00ac" (UID: "f8be9015-e4da-49c5-9b97-76c0360f00ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 02:08:12 crc kubenswrapper[4945]: I0109 02:08:12.685770 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8be9015-e4da-49c5-9b97-76c0360f00ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 02:08:12 crc kubenswrapper[4945]: I0109 02:08:12.685827 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8be9015-e4da-49c5-9b97-76c0360f00ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 02:08:12 crc kubenswrapper[4945]: I0109 02:08:12.685849 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84nqc\" (UniqueName: \"kubernetes.io/projected/f8be9015-e4da-49c5-9b97-76c0360f00ac-kube-api-access-84nqc\") on node \"crc\" DevicePath \"\"" Jan 09 02:08:12 crc kubenswrapper[4945]: I0109 02:08:12.734708 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pq2qj" Jan 09 02:08:12 crc kubenswrapper[4945]: I0109 02:08:12.768514 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pq2qj"] Jan 09 02:08:12 crc kubenswrapper[4945]: I0109 02:08:12.781142 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pq2qj"] Jan 09 02:08:13 crc kubenswrapper[4945]: I0109 02:08:13.577979 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 02:08:13 crc kubenswrapper[4945]: I0109 02:08:13.578084 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 02:08:14 crc kubenswrapper[4945]: I0109 02:08:14.020705 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8be9015-e4da-49c5-9b97-76c0360f00ac" path="/var/lib/kubelet/pods/f8be9015-e4da-49c5-9b97-76c0360f00ac/volumes" Jan 09 02:08:14 crc kubenswrapper[4945]: I0109 02:08:14.948249 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l8cwt" Jan 09 02:08:14 crc kubenswrapper[4945]: I0109 02:08:14.948325 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l8cwt" Jan 09 02:08:15 crc kubenswrapper[4945]: I0109 02:08:15.014251 4945 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l8cwt" Jan 09 02:08:15 crc kubenswrapper[4945]: I0109 02:08:15.821473 4945 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l8cwt" Jan 09 02:08:16 crc kubenswrapper[4945]: I0109 02:08:16.015055 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l8cwt"] Jan 09 02:08:17 crc kubenswrapper[4945]: I0109 02:08:17.801740 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l8cwt" podUID="f91d633a-a83d-46b3-8aaa-7611e5772f91" containerName="registry-server" containerID="cri-o://cfdea21534f6c6a84dddaaaa897950015fe2bfd2b4775a1973453b7b02800c6e" gracePeriod=2 Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.803951 4945 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l8cwt" Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.812453 4945 generic.go:334] "Generic (PLEG): container finished" podID="f91d633a-a83d-46b3-8aaa-7611e5772f91" containerID="cfdea21534f6c6a84dddaaaa897950015fe2bfd2b4775a1973453b7b02800c6e" exitCode=0 Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.812505 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8cwt" event={"ID":"f91d633a-a83d-46b3-8aaa-7611e5772f91","Type":"ContainerDied","Data":"cfdea21534f6c6a84dddaaaa897950015fe2bfd2b4775a1973453b7b02800c6e"} Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.812558 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l8cwt" event={"ID":"f91d633a-a83d-46b3-8aaa-7611e5772f91","Type":"ContainerDied","Data":"df3a4c691744e48ff1687481d7d0e2a63d5604cd99f18ec6ee7e7a8288ea65a3"} Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.812583 4945 scope.go:117] "RemoveContainer" containerID="cfdea21534f6c6a84dddaaaa897950015fe2bfd2b4775a1973453b7b02800c6e" Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.812512 4945 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l8cwt" Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.850433 4945 scope.go:117] "RemoveContainer" containerID="132ab38a3c2c1e72908a49ee68232367f70acb2271053c92de030fa466ed89c6" Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.880805 4945 scope.go:117] "RemoveContainer" containerID="21a8ed86077f5c0f52de963e8bbdeec3e401bf837dd3b961240be967ab694670" Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.935808 4945 scope.go:117] "RemoveContainer" containerID="cfdea21534f6c6a84dddaaaa897950015fe2bfd2b4775a1973453b7b02800c6e" Jan 09 02:08:18 crc kubenswrapper[4945]: E0109 02:08:18.936335 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfdea21534f6c6a84dddaaaa897950015fe2bfd2b4775a1973453b7b02800c6e\": container with ID starting with cfdea21534f6c6a84dddaaaa897950015fe2bfd2b4775a1973453b7b02800c6e not found: ID does not exist" containerID="cfdea21534f6c6a84dddaaaa897950015fe2bfd2b4775a1973453b7b02800c6e" Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.936368 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfdea21534f6c6a84dddaaaa897950015fe2bfd2b4775a1973453b7b02800c6e"} err="failed to get container status \"cfdea21534f6c6a84dddaaaa897950015fe2bfd2b4775a1973453b7b02800c6e\": rpc error: code = NotFound desc = could not find container \"cfdea21534f6c6a84dddaaaa897950015fe2bfd2b4775a1973453b7b02800c6e\": container with ID starting with cfdea21534f6c6a84dddaaaa897950015fe2bfd2b4775a1973453b7b02800c6e not found: ID does not exist" Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.936388 4945 scope.go:117] "RemoveContainer" containerID="132ab38a3c2c1e72908a49ee68232367f70acb2271053c92de030fa466ed89c6" Jan 09 02:08:18 crc kubenswrapper[4945]: E0109 02:08:18.936779 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"132ab38a3c2c1e72908a49ee68232367f70acb2271053c92de030fa466ed89c6\": container with ID starting with 132ab38a3c2c1e72908a49ee68232367f70acb2271053c92de030fa466ed89c6 not found: ID does not exist" 
containerID="132ab38a3c2c1e72908a49ee68232367f70acb2271053c92de030fa466ed89c6" Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.936798 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"132ab38a3c2c1e72908a49ee68232367f70acb2271053c92de030fa466ed89c6"} err="failed to get container status \"132ab38a3c2c1e72908a49ee68232367f70acb2271053c92de030fa466ed89c6\": rpc error: code = NotFound desc = could not find container \"132ab38a3c2c1e72908a49ee68232367f70acb2271053c92de030fa466ed89c6\": container with ID starting with 132ab38a3c2c1e72908a49ee68232367f70acb2271053c92de030fa466ed89c6 not found: ID does not exist" Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.936810 4945 scope.go:117] "RemoveContainer" containerID="21a8ed86077f5c0f52de963e8bbdeec3e401bf837dd3b961240be967ab694670" Jan 09 02:08:18 crc kubenswrapper[4945]: E0109 02:08:18.937086 4945 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a8ed86077f5c0f52de963e8bbdeec3e401bf837dd3b961240be967ab694670\": container with ID starting with 21a8ed86077f5c0f52de963e8bbdeec3e401bf837dd3b961240be967ab694670 not found: ID does not exist" containerID="21a8ed86077f5c0f52de963e8bbdeec3e401bf837dd3b961240be967ab694670" Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.937142 4945 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a8ed86077f5c0f52de963e8bbdeec3e401bf837dd3b961240be967ab694670"} err="failed to get container status \"21a8ed86077f5c0f52de963e8bbdeec3e401bf837dd3b961240be967ab694670\": rpc error: code = NotFound desc = could not find container \"21a8ed86077f5c0f52de963e8bbdeec3e401bf837dd3b961240be967ab694670\": container with ID starting with 21a8ed86077f5c0f52de963e8bbdeec3e401bf837dd3b961240be967ab694670 not found: ID does not exist" Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.952954 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f91d633a-a83d-46b3-8aaa-7611e5772f91-catalog-content\") pod \"f91d633a-a83d-46b3-8aaa-7611e5772f91\" (UID: \"f91d633a-a83d-46b3-8aaa-7611e5772f91\") " Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.953166 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f91d633a-a83d-46b3-8aaa-7611e5772f91-utilities\") pod \"f91d633a-a83d-46b3-8aaa-7611e5772f91\" (UID: \"f91d633a-a83d-46b3-8aaa-7611e5772f91\") " Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.953196 4945 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llkjt\" (UniqueName: \"kubernetes.io/projected/f91d633a-a83d-46b3-8aaa-7611e5772f91-kube-api-access-llkjt\") pod \"f91d633a-a83d-46b3-8aaa-7611e5772f91\" (UID: \"f91d633a-a83d-46b3-8aaa-7611e5772f91\") " Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.953970 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f91d633a-a83d-46b3-8aaa-7611e5772f91-utilities" (OuterVolumeSpecName: "utilities") pod "f91d633a-a83d-46b3-8aaa-7611e5772f91" (UID: "f91d633a-a83d-46b3-8aaa-7611e5772f91"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 02:08:18 crc kubenswrapper[4945]: I0109 02:08:18.960125 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91d633a-a83d-46b3-8aaa-7611e5772f91-kube-api-access-llkjt" (OuterVolumeSpecName: "kube-api-access-llkjt") pod "f91d633a-a83d-46b3-8aaa-7611e5772f91" (UID: "f91d633a-a83d-46b3-8aaa-7611e5772f91"). InnerVolumeSpecName "kube-api-access-llkjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 09 02:08:19 crc kubenswrapper[4945]: I0109 02:08:19.006566 4945 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f91d633a-a83d-46b3-8aaa-7611e5772f91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f91d633a-a83d-46b3-8aaa-7611e5772f91" (UID: "f91d633a-a83d-46b3-8aaa-7611e5772f91"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 09 02:08:19 crc kubenswrapper[4945]: I0109 02:08:19.055882 4945 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f91d633a-a83d-46b3-8aaa-7611e5772f91-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 09 02:08:19 crc kubenswrapper[4945]: I0109 02:08:19.055916 4945 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f91d633a-a83d-46b3-8aaa-7611e5772f91-utilities\") on node \"crc\" DevicePath \"\"" Jan 09 02:08:19 crc kubenswrapper[4945]: I0109 02:08:19.055927 4945 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llkjt\" (UniqueName: \"kubernetes.io/projected/f91d633a-a83d-46b3-8aaa-7611e5772f91-kube-api-access-llkjt\") on node \"crc\" DevicePath \"\"" Jan 09 02:08:19 crc kubenswrapper[4945]: I0109 02:08:19.154296 4945 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l8cwt"] Jan 09 02:08:19 crc kubenswrapper[4945]: I0109 02:08:19.163592 4945 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l8cwt"] Jan 09 02:08:20 crc kubenswrapper[4945]: I0109 02:08:20.019979 4945 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91d633a-a83d-46b3-8aaa-7611e5772f91" path="/var/lib/kubelet/pods/f91d633a-a83d-46b3-8aaa-7611e5772f91/volumes" Jan 09 02:08:43 crc kubenswrapper[4945]: I0109 02:08:43.577988 4945 patch_prober.go:28] interesting pod/machine-config-daemon-vbm95 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 09 02:08:43 crc kubenswrapper[4945]: I0109 02:08:43.578815 4945 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 09 02:08:43 crc kubenswrapper[4945]: I0109 02:08:43.578886 4945 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" Jan 09 02:08:43 crc kubenswrapper[4945]: I0109 02:08:43.580129 4945 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
Jan 09 02:08:43 crc kubenswrapper[4945]: I0109 02:08:43.580239 4945 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" podUID="694a1575-6630-406f-93e7-ef55359bc79c" containerName="machine-config-daemon" containerID="cri-o://31191e133d2d313e4adf0a2674b2582f584744001ec02279d8ae28794d2aa4e5" gracePeriod=600
Jan 09 02:08:44 crc kubenswrapper[4945]: I0109 02:08:44.074310 4945 generic.go:334] "Generic (PLEG): container finished" podID="694a1575-6630-406f-93e7-ef55359bc79c" containerID="31191e133d2d313e4adf0a2674b2582f584744001ec02279d8ae28794d2aa4e5" exitCode=0
Jan 09 02:08:44 crc kubenswrapper[4945]: I0109 02:08:44.074819 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerDied","Data":"31191e133d2d313e4adf0a2674b2582f584744001ec02279d8ae28794d2aa4e5"}
Jan 09 02:08:44 crc kubenswrapper[4945]: I0109 02:08:44.075351 4945 scope.go:117] "RemoveContainer" containerID="906d9b342c902aa489e93e291877a17af61de01f6f2ebb28992492fde6816713"
Jan 09 02:08:44 crc kubenswrapper[4945]: I0109 02:08:44.075093 4945 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vbm95" event={"ID":"694a1575-6630-406f-93e7-ef55359bc79c","Type":"ContainerStarted","Data":"e08672b24dea9e67b3790b986bb9fc46f544ef5514186e719ab7f6cdcd4bce4f"}